2309.05894
Fast Constraint Screening for Multi-Interval Unit Commitment
The power system Unit Commitment (UC) problem determines the generator commitment schedule and dispatch decisions for power networks based on forecasted electricity demand. However, with the increasing penetration of renewables and stochastic demand behaviors, it becomes challenging to solve the large-scale, multi-interval UC problem in an efficient manner. The main objective of this paper is to propose a fast and reliable scheme to eliminate a set of redundant or inactive physical constraints in the high-dimensional, multi-interval, mixed-integer UC problem, while the reduced problem is equivalent to the original full problem in terms of commitment decisions. Our key insights lie in pre-screening the constraints based on the load distribution and considering the physical feasibility regions of the multi-interval UC problem. For the multi-step UC formulation, we overcome screening conservativeness by utilizing the multi-step ramping relationships, and can reliably screen out more constraints compared to current practice. Extensive simulations on both specific load samples and load regions validate that the proposed technique can screen out more than 80% of the constraints while preserving the feasibility of the multi-interval UC problem.
Xuan He, Jiayu Tian, Yufan Zhang, Honglin Wen, Yize Chen
2023-09-12T00:38:35
http://arxiv.org/abs/2309.05894v1
# Fast Constraint Screening for Multi-Interval Unit Commitment ###### Abstract Power systems Unit Commitment (UC) problem determines the generator commitment schedule and dispatch decisions for power networks based on forecasted electricity demand. However, with the increasing penetration of renewables and stochastic demand behaviors, it becomes challenging to solve the large-scale, multi-interval UC problem in an efficient manner. The main objective of this paper is to propose a fast and reliable scheme to eliminate a set of redundant or inactive physical constraints in the high-dimensional, multi-interval, mixed-integer UC problem, while the reduced problem is equivalent to the original full problem in terms of commitment decisions. Our key insights lie on pre-screening the constraints based on the load distribution and considering the physical feasibility regions of multi-interval UC problem. For the multi-step UC formulation, we overcome screening conservativeness by utilizing the multi-step ramping relationships, and can reliably screen out more constraints compared to current practice. Extensive simulations on both specific load samples and load regions validate the proposed technique can screen out more than 80% constraints while preserving the feasibility of multi-interval UC problem. ## I Introduction Obtaining accurate solutions for unit commitment in an efficient manner is crucial for ensuring reliable generation dispatch [1]. For transmission grid operations, Unit commitment (UC) problems are typically formulated as mixed integer programming (MIP) problems involving both discrete variables for generator statuses and continuous variables for dispatch levels. In particular, for the multi-interval UC problems, temporal constraints, such as ramping constraints, are incorporated to regulate the capacity at which generators can adjust their output levels in response to dynamic changes of electricity demand and system conditions. However, due to the NP-hard nature of such nonconvex MIP problems, the solution process can be exceedingly time-consuming [2, 3]. Such computation complexity can be further increased by the existence of numerous network constraints such as line flow limits [4, 5]. The need to accelerate the unit commitment (UC) solution process has prompted research into developing a surrogate UC model with fewer line flow limits while maintaining the solution equivalence of the resulting UC problem and the original UC problem. Such technique is backed up by the observation that only a subset of security constraints is binding in the real world systems. The process to identify the appropriate subset of line flow limits is termed _constraint screening_[6, 7]. [6] proposes to eliminate the constraints whose line flow cannot reach the boundary given load inputs and show that the feasible region will be the same as the original UC problem. However, for the standard screening model [6, 8], screening strategies can be conservative, while many redundant or inactive constraints are still kept rather than screened out. [9] and [10] propose the cost-driven screening models to utilize the operating cost constraint to limit the line flow value range further. Yet most of the literature focus on single-step formulation of UC constraint screening, while ignoring the impact of temporal constraints such as generation ramping constraints on the feasible region of screening problems. 
On the other hand, the relaxed binary variables of the generator commitment schedule in the standard screening model for a load sample also enlarge the line flow value range [11]. The strong representation capability of machine learning (ML) models [12, 13] can be utilized to predict the values of the binary variables, i.e., the decisions of generator states, which shows potential for further integrating the predictions into the screening model to handle the loose line flow range issue. Another concern is the computation cost of the standard constraint screening setup, which comes from solving the optimization-based screening problem for each network constraint and each electricity load instance. For the former, ML models [14, 15, 16, 17, 18] directly use predictions to classify whether a network constraint is redundant, while there is no guarantee that the reduced UC is equivalent to the original one. For the latter, empirical evidence shows that the samples belonging to some typical load regions can have the same set of redundant constraints in their corresponding UC problems [11]. It is then sufficient to implement screening on the load region rather than working on individual load samples [19, 20]. To address such challenges, we develop one of the first multi-interval UC constraint screening models to narrow down the search space of line flow constraints reliably, leading to a screening strategy with more efficient performance. Our key insights lie in integrating the temporal constraints, i.e., the ramping constraints, into the standard screening model, and such an approach can be applied to either a given load region or an individual load sample. The potential of utilizing ML predictions of generator states to improve the screening efficiency is also explored. Specifically, we formulate a tractable linear programming problem for multi-interval UC, and prove that more inactive constraints can be eliminated without changing the feasible region of the original UC problem. Moreover, our method can be flexibly integrated to screen constraints for either one specific load vector, or to achieve solver warm-start (offline constraint screening performed beforehand to obtain a reduced problem that can later be used for the real-time operation problem) for a given load region. We term these two cases _sample-aware_ [6] and _sample-agnostic_ [11] constraint screening, respectively. In particular, for the _sample-aware_ case, we further propose to make use of ML predictions of the commitment schedule to better limit the search space of the constraint screening problem. In the sample-aware case on the IEEE 39-bus system, our proposed method with ground-truth generator states can achieve a 95.3% screening rate, a boost compared to the standard constraint screening [6] (82%). With partial predictions, the feasible rate of our method reaches 84% while the solution gap is only 0.32%. In the sample-agnostic case on the 118-bus system, our procedure can also find the most compact form of the UC problem after the more efficient screening process. ## II Multi-Interval UC Problem Formulation In this paper, we assume the system operators need to decide both the ON/OFF statuses (the commitment schedule) and the dispatch levels for all generators. Herein we consider the day-ahead UC problem taking ramp constraints into consideration. For the \(T\)-timestep UC problem with \(n\) generators involved, the problem is formulated as \[\min_{\mathbf{u},\mathbf{x},\mathbf{f}} \sum_{t=1}^{T}\sum_{i=1}^{n}c_{i}x_{i}(t)\] (1a) s.t.
\[u_{i}(t)\underline{x}_{i}\leq x_{i}(t)\leq u_{i}(t)\bar{x}_{i},\quad\forall i,t \tag{1b}\] \[-\overline{\mathbf{f}}\leq\mathbf{K}\mathbf{f}(t)\leq\overline{\mathbf{f}},\quad\forall t\] (1c) \[\mathbf{x}(t)+\mathbf{A}\mathbf{f}(t)=\boldsymbol{\ell}(t),\;\forall t\] (1d) \[u_{i}(t)\in\{0,1\},\quad\forall i,t\] (1e) \[x_{i}(t)-x_{i}(t-1)\leq\mathsf{R}_{i}^{up}u_{i}(t-1)+\mathsf{R}_{i}^{su}(u_{i}(t)-u_{i}(t-1))+\bar{x}_{i}(1-u_{i}(t)),\quad\forall i,t\] (1f) \[x_{i}(t-1)-x_{i}(t)\leq\mathsf{R}_{i}^{dn}u_{i}(t)+\mathsf{R}_{i}^{sd}(u_{i}(t-1)-u_{i}(t))+\bar{x}_{i}(1-u_{i}(t-1)),\quad\forall i,t. \tag{1g}\] In the UC problem, we optimize over the generator statuses \(\mathbf{u}\), the generator dispatch \(\mathbf{x}\) and the line power flows \(\mathbf{f}\) to find the least-cost solution, with the cost defined in the objective function (1a); \(c_{i}\) denotes the cost coefficient. Constraints (1b), (1c) and (1d) denote the generation bounds, the flow bounds and the nodal power balance, respectively. Note that the power flows are modeled with a DC approximation, while the phase angles are absorbed into the fundamental flows \(\mathbf{f}\in\mathbb{R}^{n-1}\) [21, 22]; \(\mathbf{K}\) and \(\mathbf{A}\) map such fundamental flows to the flow constraints and the nodal power balance, respectively. (1e) enforces the binary constraint on generator statuses, where \(u_{i}(t)=1\) indicates the generator is on. (1f) and (1g) are the ramping constraints that limit the speed at which generators are able to modify their output levels in response to fluctuations in demand or system conditions; \(R_{i}^{up},R_{i}^{su},R_{i}^{dn},R_{i}^{sd}\) are the upward, start-up, downward and shut-down ramping capacities, respectively. For a power system with numerous generators and network constraints, (1) can be a large-scale MILP problem, which is intractable to solve efficiently enough to satisfy the computation requirements of finding generation dispatch decisions. Meanwhile, there are a large number of inactive constraints, especially inactive network constraints, giving the potential to simplify (1) by screening out such inactive constraints. It is then possible to solve a reduced UC problem with fewer engineering constraints. ## III Multi-Interval Constraint Screening Model ### _Modeling of Inactive Constraints_ In this work, we follow the definition of an inactive constraint first formulated in [6]. A constraint can be treated as inactive if it has no influence on the feasible region of the multi-interval UC problem, and can thus be eliminated as illustrated in Fig. 1. For the constraint screening problem, it is thus of interest to correctly screen out as many inactive constraints as possible. Ultimately, we want to tighten the feasible region and make it close to that of the original UC problem, so that constraints like those marked in red in Fig. 1 can be screened out. To formulate the feasible region mathematically, we use \(P\) to denote the feasible region defined by a constraint set \(C^{P}\), where \(C^{P}\) is a subset of the original UC problem's constraint set. The actual feasible region of the original problem, defined by (1b)-(1g), is then naturally a subregion of \(P\).
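To make the formulation above concrete, the following minimal sketch builds problem (1) for a toy 3-bus, 3-generator system with cvxpy. All data values are illustrative placeholders rather than numbers from the paper, and a MIP-capable solver must be installed for `prob.solve()` to succeed.

```python
# Minimal sketch of the multi-interval UC problem (1); all data below is illustrative only.
import cvxpy as cp
import numpy as np

T, n, m = 4, 3, 2                                   # timesteps, generators, fundamental flows
c = np.array([10.0, 20.0, 30.0])                    # cost coefficients c_i
x_min = np.array([10.0, 10.0, 5.0])                 # lower generation bounds
x_max = np.array([100.0, 80.0, 50.0])               # upper generation bounds
R_up = R_dn = np.array([30.0, 25.0, 20.0])          # ramp-up / ramp-down limits
R_su = R_sd = np.array([60.0, 50.0, 40.0])          # start-up / shut-down ramping limits
f_bar = np.array([60.0, 60.0])                      # line flow limits
K = np.eye(m)                                       # maps fundamental flows to monitored lines
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])  # maps fundamental flows to nodal injections
load = np.array([[40.0, 50.0, 60.0, 55.0],          # nodal load l(t), one row per bus
                 [30.0, 35.0, 40.0, 38.0],
                 [20.0, 25.0, 30.0, 28.0]])

u = cp.Variable((n, T), boolean=True)               # commitment statuses, (1e)
x = cp.Variable((n, T))                             # dispatch levels
f = cp.Variable((m, T))                             # fundamental flows

cons = []
for t in range(T):
    cons += [cp.multiply(u[:, t], x_min) <= x[:, t],
             x[:, t] <= cp.multiply(u[:, t], x_max)]          # (1b)
    cons += [K @ f[:, t] <= f_bar, K @ f[:, t] >= -f_bar]     # (1c)
    cons += [x[:, t] + A @ f[:, t] == load[:, t]]             # (1d)
for t in range(1, T):
    cons += [x[:, t] - x[:, t - 1] <= cp.multiply(R_up, u[:, t - 1])
             + cp.multiply(R_su, u[:, t] - u[:, t - 1])
             + cp.multiply(x_max, 1 - u[:, t])]               # (1f)
    cons += [x[:, t - 1] - x[:, t] <= cp.multiply(R_dn, u[:, t])
             + cp.multiply(R_sd, u[:, t - 1] - u[:, t])
             + cp.multiply(x_max, 1 - u[:, t - 1])]           # (1g)

prob = cp.Problem(cp.Minimize(cp.sum(c @ x)), cons)           # objective (1a)
prob.solve()                                                  # requires a MIP-capable solver
print(prob.status, prob.value)
```

The constraint screening discussed next aims to remove as many of the (1c)-type rows as possible before such a model is handed to the solver.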
**Definition 1**: _A network constraint \(\mathbf{K}_{j}\mathbf{f}(t)\leq\overline{\mathbf{f}}_{j}\) or \(\mathbf{K}_{j}\mathbf{f}(t)\geq-\overline{\mathbf{f}}_{j}\) is defined as inactive to \(P\) if and only if_ \[P\supseteq P^{-\{j\}}; \tag{2}\] _where \(P^{-\{j\}}\) is the region defined by the constraint set \(C^{P}\) after eliminating \(\mathbf{K}_{j}\mathbf{f}(t)\leq\overline{\mathbf{f}}_{j}\) or \(\mathbf{K}_{j}\mathbf{f}(t)\geq-\overline{\mathbf{f}}_{j}\), and we denote this reduced set as \(C^{P/j}\)._ Fig. 1: Illustration of the inactive constraints corresponding to the different feasible regions. By using our technique, it is possible to screen more constraints (line in red) compared to the standard single-timestep screening method. ### _Single-Step Constraint Screening_ To identify each inactive flow constraint, the standard optimization-based screening model [6] evaluates whether the flow on line \(j\) can reach its limit given the load input. In (3) we describe such a formulation, which can be used to conduct the _sample-aware constraint screening_ for the \(j\)-th line at time step \(k\): \[S_{j}^{*}(k)=\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k}}\;\mathbf{K}_{j}\mathbf{f}(k)\] (3a) s.t. \[u_{i}\underline{x}_{i}\leq x_{i}(k)\leq u_{i}\bar{x}_{i},\quad\forall i,\] (3b) \[-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(k)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\] (3c) \[\mathbf{x}(k)+\mathbf{A}\mathbf{f}(k)=\boldsymbol{\ell}(k),\] (3d) \[u_{i}\in\{0,1\},\quad\forall i,\] (3e) where \(\mathcal{F}/j\) denotes the set of monitored lines excluding line \(j\). If the optimal value \(S_{j}^{*}(k)\) (and, analogously, the minimum value of \(\mathbf{K}_{j}\mathbf{f}(k)\)) cannot reach the corresponding flow limit, the constraint on line \(j\) at time step \(k\) is inactive and can be screened out. ### _Sample-Aware Screening with Generator States_ Here _sample-aware_ means we implement constraint screening for a known load instance. As the binary variables make the basic screening model a MILP problem, we first consider relaxing the value of the binary variables to \(u\in[0,1]\). Then, to further lower the model complexity, we replace the nodal balance constraints (5c) with the load balance constraints (6d) and (6e), and reduce the network constraints (5b) to (6c), as described in the following formulation (6). **Corollary 1**: _Consider the following multi-step screening model with the relaxation of the binary variables, part of the nodal balance constraints and the network constraints:_ \[S^{*}_{aware_{j}}(k)=\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k}}\mathbf{K}_{j}\mathbf{f}(k)\] (6a) s.t.
\[\underline{\mathbf{x}}\leq\mathbf{x}(t)\leq\bar{\mathbf{x}},\quad t\leq k,\] (6b) \[-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(t)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\quad t=k,\] (6c) \[\mathbf{x}(t)+\mathbf{A}\mathbf{f}(t)=\boldsymbol{\ell}(t),\quad t=k,\] (6d) \[\textstyle\sum x(t)=\sum\ell(t),\quad t\leq k,\] (6e) \[\text{(1f), (1g)},\quad u_{i}(t)\in[0,1],\quad\forall i,\;t\leq k.\] (6f) ### _Sample-Agnostic Constraint Screening_ In the sample-agnostic setting, screening is performed for a given load region instead of a single load sample: the load samples satisfy \(\boldsymbol{\ell}\in\mathcal{L}\), while in the screening model the load sample is transformed to the decision variable \(\boldsymbol{\ell}^{r}\). **Corollary 4**: _Consider the following multi-step screening model for a specified load region:_ \[S_{R_{j}}^{*}(k)=\max_{\mathbf{u}_{k},\mathbf{x}_{k},\mathbf{f}_{k},\boldsymbol{\ell}^{r}}\mathbf{K}_{j}\mathbf{f}(k)\] (9a) s.t.
\[\underline{\mathbf{x}}\leq\mathbf{x}(t)\leq\bar{\mathbf{x}},\quad t\leq k,\] (9b) \[-\overline{\mathbf{f}}_{\mathcal{F}/j}\leq\mathbf{K}_{\mathcal{F}/j}\mathbf{f}(t)\leq\overline{\mathbf{f}}_{\mathcal{F}/j},\quad t=k,\] (9c) \[\mathbf{x}(t)+\mathbf{A}\mathbf{f}(t)=\boldsymbol{\ell}^{r}(t),\quad t=k,\] (9d) \[\textstyle\sum x(t)=\sum\ell^{r}(t),\quad t\leq k,\] (9e) \[\boldsymbol{\ell}^{r}\in\mathcal{L},\] (9f) \[\text{(1f), (1g)},\quad u_{i}(t)\in[0,1],\quad\forall i,\;t\leq k.\] (9g) ### _Simulation Results_ _Sample-aware screening tasks_: We consider three scenarios to verify the effectiveness of the proposed method: i) without generator status predictions, ii) with partial predictions, and iii) with ground-truth generator states. When there are no generator status predictions, our multi-step method removes 81.16% of the line limits, as shown in Table II. With partial predictions, the number of constraints remaining after the multi-step method is only 69% of that of the single-step method, and, compared with the single-step method, the corresponding CPU time is 1.103 times faster according to Table IV. It can be seen that the multi-step method can always eliminate more inactive constraints, and thus takes less time to solve the reduced UC problem. Besides, the infeasible rate with respect to the original problem is only 16%, and the gap in objective value between the reduced and the original UC problem is only 0.32%. Note that the discrepancy between the binary solutions obtained by the screening model and by the UC problem does not cause infeasibility through misidentified line limits, which verifies Corollary 3. According to Fig. 4, the case with ground-truth states eliminates the most inactive constraints, while with partial predictions the remaining constraints can still be substantially reduced compared to the case without predictions. _Sample-agnostic screening tasks_: In this setting, cases on both the 39-bus and 118-bus systems with 20%, 50%, and 80% load variations are considered to verify the scalability of the proposed method. As shown in Fig. 5, the number of remaining constraints decreases by 60 and 100 for the 39-bus and 118-bus systems over the three load ranges, respectively. It can be seen that with a larger load region, the number of remaining constraints increases. This may be because a wider load variation range admits more patterns of active constraints, i.e., the percentage of inactive constraints decreases when widening the load variation. The results above validate that the proposed multi-step screening models can boost screening performance accurately in both the sample-aware and sample-agnostic settings. This demonstrates both the possibility and the necessity of including the practical multi-step ramping information in the screening problem. While this work narrows the searching space using the explicit constraints of the UC problem, in future work we would like to seek theoretical understanding of the implicit decision spaces impacting the screening and the final UC solution results.
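As a concrete illustration of the optimization-based screening step described above, the sketch below solves one relaxed LP per monitored line with `scipy.optimize.linprog` and flags any upper flow limit that can never be reached. The network data are small illustrative placeholders (not the paper's test systems), and only the single-step, sample-aware variant with relaxed binaries is shown.

```python
# Screening sketch: relax u to [0,1], maximise the flow on line j subject to the
# remaining constraints, and screen out the upper limit if it cannot be reached.
import numpy as np
from scipy.optimize import linprog

n, m = 3, 2                                    # generators/buses, fundamental flows
x_max = np.array([100.0, 80.0, 50.0])
x_min = np.array([10.0, 10.0, 5.0])
f_bar = np.array([60.0, 60.0])
K = np.eye(m)                                  # line flows = fundamental flows here
A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, -1.0]])
load = np.array([40.0, 30.0, 20.0])            # load at the time step of interest

def max_flow_on_line(j):
    # decision variables z = [u_1..u_n, x_1..x_n, f_1..f_m]
    nz = 2 * n + m
    c = np.zeros(nz)
    c[2 * n:] = -K[j]                          # maximise K_j f  ->  minimise -K_j f
    A_ub, b_ub = [], []
    for i in range(n):                         # u_i x_min_i <= x_i <= u_i x_max_i
        row = np.zeros(nz); row[i] = -x_max[i]; row[n + i] = 1.0
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(nz); row[i] = x_min[i]; row[n + i] = -1.0
        A_ub.append(row); b_ub.append(0.0)
    for ln in range(m):                        # other line limits: |K_ln f| <= f_bar_ln
        if ln == j:
            continue
        row = np.zeros(nz); row[2 * n:] = K[ln]
        A_ub.append(row); b_ub.append(f_bar[ln])
        A_ub.append(-row); b_ub.append(f_bar[ln])
    A_eq = np.hstack([np.zeros((n, n)), np.eye(n), A])   # nodal balance x + A f = load
    bounds = [(0, 1)] * n + [(None, None)] * (n + m)     # relaxed binaries, free x and f
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=load, bounds=bounds)
    return -res.fun                            # largest achievable flow on line j

for j in range(m):
    if max_flow_on_line(j) <= f_bar[j]:
        print(f"upper limit on line {j} is inactive and can be screened out")
```

The lower limit is screened analogously by minimising the line flow; in the multi-step variants (6) and (9), the same LP is augmented with the ramping constraints over \(t\leq k\) and, for the sample-agnostic case, with \(\boldsymbol{\ell}^{r}\in\mathcal{L}\) as an additional decision variable.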
The power system unit commitment (UC) problem determines the generator commitment schedule and dispatch decisions based on forecasted electricity demand. However, with the growing penetration of renewables and stochastic demand behavior, it has become difficult to solve the large-scale, multi-interval UC problem efficiently. The main objective of this paper is to propose a fast and reliable scheme for eliminating a set of redundant physical constraints present in the high-dimensional, multi-interval UC problem; the reduced problem is equivalent to the original full problem in terms of commitment decisions. The key insights of this scheme are to pre-screen the constraints based on the load distribution and to consider the physical feasibility region of the multi-interval UC problem. For the multi-step UC formulation, the scheme utilizes multi-step ramping relationships to overcome screening conservativeness and can reliably screen out more constraints compared to current practice.
2309.14741
Rethinking Session Variability: Leveraging Session Embeddings for Session Robustness in Speaker Verification
In the field of speaker verification, session or channel variability poses a significant challenge. While many contemporary methods aim to disentangle session information from speaker embeddings, we introduce a novel approach using an additional embedding to represent the session information. This is achieved by training an auxiliary network appended to the speaker embedding extractor which remains fixed in this training process. This results in two similarity scores: one for the speakers information and one for the session information. The latter score acts as a compensator for the former that might be skewed due to session variations. Our extensive experiments demonstrate that session information can be effectively compensated without retraining of the embedding extractor.
Hee-Soo Heo, KiHyun Nam, Bong-Jin Lee, Youngki Kwon, Minjae Lee, You Jin Kim, Joon Son Chung
2023-09-26T08:09:30
http://arxiv.org/abs/2309.14741v1
# Rethinking Session Variability: Leveraging ###### Abstract In the field of speaker verification, session or channel variability poses a significant challenge. While many contemporary methods aim to disentangle session information from speaker embeddings, we introduce a novel approach using an additional embedding to represent the session information. This is achieved by training an auxiliary network appended to the speaker embedding extractor which remains fixed in this training process. This results in two similarity scores: one for the speakers information and one for the session information. The latter score acts as a compensator for the former that might be skewed due to session variations. Our extensive experiments demonstrate that session information can be effectively compensated without retraining of the embedding extractor. Hee-Soo Heo\({}^{1}\), Kikhyun Nam\({}^{2}\), Bong-Jin Lee\({}^{1}\), Youngki Kwon\({}^{1}\), Minjae Lee\({}^{1}\), You Jin Kim\({}^{1}\), Joon Son Chung\({}^{2}\)\({}^{1}\)NAVER Cloud Corporation, South Korea \({}^{2}\)Korea Advanced Institute of Science and Technology, South Korea Speaker verification, speaker embedding, session information ## 1 Introduction In the evolving domain of speech processing, speaker verification plays a crucial role, having various real-world applications ranging from voice-based security systems to personalised speech assistants. Central to robust speaker verification is the extraction of speaker embeddings, which encapsulate the unique characteristics of an individual's voice [1, 2, 3]. However, these embeddings are susceptible to extraneous information, largely influenced from the recording environment. Variabilities in recording devices, ambient noise, room acoustics, and other session-related factors can significantly affect the accuracy of these embeddings, creating misleading similarities even among distinct speakers in similar recording situations [4, 5]. Historically, when the i-vector approach was prevalent in the speaker embedding space, techniques such as linear discriminant analysis (LDA) and within-class covariance normalization (WCCN) were employed as countermeasures to diminish these unexpected session similarities [1, 6]. With the advances of deep learning and its application to this domain, efforts have shifted towards disentangling speaker information from session information directly within the embedding [7, 5, 8]. Various strategies have been studied in this direction - while some leverage the adversarial approach, others design novel loss functions to achieve the same goal [9]. However, a clear problem with these methods is that while trying to separate session-related information from speaker-specific details, important characteristics of the speaker might be lost. In simpler terms, in the process of removing unwanted session information, one might also unintentionally remove features that help identify the speaker. In light of these challenges, this paper introduces an alternative approach. Instead of disentangling session-related information from the embedding, we present a framework to compensate for it at the score level. Our methodology capitalises on the use of an auxiliary network, seamlessly appended to the original speaker embedding extractor. The auxiliary network is designed to represent session information found within speaker embeddings. A key facet of our framework ensures that the primary speaker embedding extractor remains fixed during this process. 
Consequently, our system yields a twofold output; a similarity score reflecting speaker characteristics and another gauging session attributes. The latter, acting as a compensator, has the potential to rectify any discrepancies in the speaker score induced by analogous or differing session conditions. Our empirical evaluations, spanning various model architectures and evaluation configurations, underscore the feasibility of session compensation without the need for retraining the original embedding extractor. The paper is organised as follows. Section 2 introduces the proposed framework. Experiments and result analysis are presented in Section 3, followed by conclusion in Section 4. ## 2 Framework for Session Variability Compensation In this section, we present a novel framework specifically designed to address and compensate for session variability in speaker verification tasks. ### Speaker Embedding Extraction For this study, we leverage pre-trained embedding extractors, drawing from methods that have proven efficacy in conventional recipes. Specifically, we have evaluated three models that rep resent a diverse cross-section of state-of-the-art architectures. These models are ECAPA-TDNN [10], RawNet3 [11], and MFA-Conformer-based speaker embedding extractors [12, 13]. ### Session Embedding Extraction Within the domain of speaker verification, speaker embeddings efficiently capture the intrinsic attributes of a speaker's speech. However, these embeddings may also contain subtle information specific to the recording session, like background noise or recording device characteristics. Recognising the need to isolate such session-specific nuances from the core speaker features, we introduce the session network. **Network architecture.** This network is attached to the speaker embedding network. Simplistically composed of several fully-connected layers, drop-out and GELU activation [14], the session network's primary role is to extract session information contained within the speaker embedding. Figure 1-(a) shows the detailed composition of the session network. It's designed to differentiate between the inherent speaker characteristics and the variabilities introduced by different recording sessions. **Training strategy.** For effective extraction of session information, it's paramount to train the network using a specially designed loss function. In addition, utilising datasets such as VoxCelebs [15, 16], which offers multiple sessions for individual speakers, is essential. For the session network, the training data comprises pairs - both positive and negative - drawn from the VoxCeleb datasets. These pairs are constructed by pairing two utterances. First, utterances for a positive pair stem from a same session and a same speaker, with identical augmentation techniques applied. This setup ensures that any discrepancy in the embeddings is predominantly due to session variations. Conversely, a negative pair includes two utterances from the same speaker but from different sessions, with distinct augmentations applied. This highlights the impact of session differences manifested within speaker embeddings. To elaborate further, consider a speaker denoted as \(i\), randomly selected from our dataset. For our training, we aim to consider the speaker's utterances across two distinct sessions. Thus, for each chosen session, two random utterances are selected. 
This process gives us a notation, \(u_{i,s,u}\)\(s\)\(\in\) {0,1},\(u\)\(\in\) {0,1}, where \(i\) stands for the selected speaker, \(s\) denotes the session and \(u\) indicates the utterance. Now, for a definition of the loss function, we consider all possible combinations of sessions (\(s\)) and utterances (\(u\)). Our objective is to compute a loss value, \(\mathcal{L}\), which would measure the difference or similarity between these combinations. This loss is determined as: \[\mathcal{L}\!=\!\begin{cases}1\!-\!S(se(u_{i,s1,u1}),\!se(u_{i,s2,u2})),&\text {if }s1==s2\\ S(se(u_{i,s1,u1}),\!se(u_{i,s2,u2})),&\text{otherwise}\end{cases} \tag{1}\] where \(S(\cdot,\cdot)\) is a function indicating cosine similarity between two embeddings and \(se(u)\) is session embedding from utterance \(u\). It's worth noting that we do not consider pairs from different speakers while training the session network, ensuring the focus remains strictly on session variability. The session information is directly inferred from the video ID in the VoxCeleb datasets. In our context, two utterances are considered to be from the same session if they originate from an identical video. ### Speaker Verification Using the Proposed Framework In this section, we present our speaker verification procedure underpinned by our novel framework. In our study, we consider each verification trial to be constructed from a pair of utterances. From each of these utterances, two types of embeddings can be extracted: one that represents the characteristics of the speaker (the speaker embedding) and another that embodies the particularities of the recording session (the session embedding). Figure 1: The session compensating framework for speaker verification. (a) Illustration of the session network. The network receives speaker embeddings and, via multiple pre-norm residual blocks, produces session embeddings. (b) Outline of the speaker verification process within the proposed framework: Upon receiving two utterances, the system extracts corresponding session and speaker embeddings. Similarities between these embeddings are then calculated. The computed similarities are subsequently input into the Q-stack classifier to determine whether the two utterances originate from a same speaker or two distinct speakers. **Score-level compensator.** Once we have these embeddings, we can measure how similar they are. We compare the speaker embeddings from both utterances to get a "speaker similarity" score. This value essentially offers a metric that quantifies how alike the two utterances are, based on the characteristics of the speakers. On a parallel track, the session similarity is determined through the cosine similarity of the two session embeddings. This similarity shows how alike the two recordings are, based just on details from the recording session. Having obtained these similarities, the final step is to integrate them into a composite score that would be instrumental for verification. The formula we propose for this is: \[\mathit{score}\!=\!spk\!-\!w\!*\!sess, \tag{2}\] where \(spk\) and \(sess\) indicate speaker and session similarities, respectively, and \(w\) stands as a weighting factor for the session similarity. By subtracting a weighted session similarity from the speaker similarity, we aim to rectify any biases present in the speaker similarity attributed to session-related variations. 
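A minimal PyTorch-style sketch of the two ingredients above is given below: the session-pair loss of Eq. (1) and the score-level compensation of Eq. (2). The `make_session_net` layer sizes and the dropout rate are assumptions for illustration, not the paper's released configuration.

```python
# Sketch of Eq. (1) (session-pair loss) and Eq. (2) (score-level compensation).
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_session_net(dim: int = 256, hidden: int = 512) -> nn.Module:
    # One plausible session network: fully-connected layers with GELU and dropout.
    return nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Dropout(0.1),
                         nn.Linear(hidden, dim))

def session_pair_loss(session_net: nn.Module,
                      spk_emb_a: torch.Tensor, spk_emb_b: torch.Tensor,
                      same_session: bool) -> torch.Tensor:
    """Both embeddings come from the SAME speaker; only the session may differ."""
    sim = F.cosine_similarity(session_net(spk_emb_a), session_net(spk_emb_b), dim=-1)
    # Same session: push similarity towards 1; different session: push towards 0.
    return (1.0 - sim).mean() if same_session else sim.mean()

def compensated_score(spk_sim: torch.Tensor, sess_sim: torch.Tensor,
                      w: float = 0.5) -> torch.Tensor:
    """Eq. (2): subtract the weighted session similarity from the speaker similarity."""
    return spk_sim - w * sess_sim
```

The speaker embedding extractor stays frozen; only the session network receives gradients during this training.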
Thus, the goal is to compensate for the session-induced biases, ensuring that the speaker's inherent characteristics shine through without the unexpected influence of session-specific attributes. To discern the impact of the session similarity on speaker verification, we carried out simple experiments utilising embeddings derived from the three embedding extractors. The focal point of this experiment was to adjust a weight value, and subsequently, observe how it influenced the performance of speaker verification. We conducted our tests using the VoxCeleb1 original test set, and the results are shown in Figure 2. The results reveal that simple action of subtracting the session similarity can reduce the error in speaker verification. **Q-stack-based compensator.** Nonetheless, there exists a limitation to the above approach. The foundational premise of the approach is predicated on the assumption that the correlation between the speaker and session similarities is linear. However, in practical scenarios, this relationship might exhibit a more complex nature, suggesting the necessity for a sophisticated approach to accurately compensate for these interactions. To address this, we utilised an additional classifier which takes in both the speaker and session similarities and makes a binary decision. Essentially, it determines whether the two utterances originate from the same speaker or not. This new approach allows us to capture the non-linear relationship between the two similarities. The concept of this classifier is derived from a framework termed "Q-stack" [17]. The Q-stack classifier is employed to process two separate sets of similarities derived from two utterances, with the primary objective of deciding whether these utterances are from an identical speaker or not. The operation of the Q-stack-based framework is as follows. First, it takes in \(200\) similarities; half represents speaker similarities, and the other half stands for session similarities. These specific quantities originate from the well-known VoxCeleb trainer's recipe 1. This procedure extracts \(10\) embeddings from an individual utterance through a sliding window technique. Consequently, when comparing a pair of utterances, the possible combination results in \(10\!\times\!10\) similarities, leading to a combined total of \(100\) similarities for each type of embedding. For a more detailed architecture of the Q-stack, it is structured with three fully-connected layers, drop-out, and non-linear activation. These layers consist of \(400\) nodes, except the output layer with only two nodes. All hidden nodes are activated by leaky ReLU function for non-linearity. Figure 1-(b) shows the overall operation of the proposed framework, including the structure of the Q-stack classifier. Footnote 1: [https://github.com/clovai/voxceleb_trainer](https://github.com/clovai/voxceleb_trainer) ## 3 Experiments Experiments were conducted to evaluate the proposed speaker verification framework on four independent datasets. The first two subsections describe implementation details and evaluation protocols across all experiments, while subsequent subsections describe experiments on the proposed system configuration. ### Implementation details For the evaluation of the proposed system, various datasets and models were employed. We selected multiple datasets for the training process: VoxCeleb1&2 [16, 18], VOiCES [19], CommonVoice [20] and telephone speeches from NIST SRE corpora. 
ECAPA-TDNN and RawNet3 models were trained using the VoxCeleb1&2 datasets. The Conformer-based system was trained leveraging the VoxCeleb1&2, NIST SRE 2004, 2006, and 2008 [21, 22], and CommonVoice datasets. The Q-stack system, distinctively, was trained on the test set of the VOiCES dataset. For augmentation, we use reverberations and noises from simulated RIRs and MUSAN datasets [23, 24]. Augmentation configurations follow that of [25]. Figure 2: Variation in speaker verification performance on the original VoxCeleb1 test set for three distinct embedding extractors. The graph shows the influence of session similarity(\(sess\))’s weight \(w\) on each extractor’s performance. A clear trend emerges, highlighting the role of session similarity as a compensatory across all models evaluated. ### Evaluation protocol We evaluated performance using the VoxCeleb1 original test set (Vox1-O), 10sec-10sec protocol of NIST SRE 2010 evaluation (N-SRE) [26], and unique combined datasets. The initial evaluation of our system was carried out using two primary datasets: Vox1-O and N-SRE. These datasets contain audio data from varied sources and were chosen because they internally include session variability. To further evaluation, we introduced two custom datasets, VN-Mix and VC-Mix, crafted to test the systems' performance under challenging scenarios. First, VN-Mix (VoxCeleb and NIST) was composed of trials from Vox1-O and N-SRE. A notable aspect of this combination is the intrinsic domain difference between the two datasets. Specifically, N-SRE includes telephone speech while Vox1-O contains YouTube video clips. Given this contrast in source domains, it's hypothesised that a similarity bias might arise due to these inherent differences. For VC-Mix (VoxCeleb and VoxConverse), we combined positive pairs from Vox1-O with negative pairs from the "single" protocol of VoxConverse [27], as referenced in [28]. The positive pairs from Vox1-O comprise utterances from multiple sessions. In contrast, the negative pairs from VoxConverse are restricted to a singular session. This composition suggests the challenge, presenting both hard positive and negative pairs. In simple words, VC-Mix combines two types of pairs: one with the same speaker from different sessions and another with different speakers from a single session. The structure of VC-Mix was inspired by the dataset used in the VoxSRC 2022 challenge [29]. All telephone speech is up-sampled from 8kHz to 16kHz. The performance metric used to compare the models' performance was the well-known equal error rate (EER). ### Comparison with single system In Table 1, we presented a comprehensive comparison of the baseline system against our proposed systems across varied models and evaluation datasets. A key observation was the robustness and enhancement in EER offered by the proposed systems, which use session embeddings. Focusing on the "score comp" row, the results show the positive impact of session compensation using equation (2). The value of the weighting factor \(w\) was determined using test trials from the VOICES dataset. Furthermore, the "Q-stack" row introduces further improvement from an additional classifier. This suggests that the classifier helps model a non-linear relationship between session and speaker similarities. ### Comparison with ensemble system Table 2 shows the impact of different ensemble techniques on model performance. A conventional ensemble averages multiple scores from various models. 
However, with our Q-stack system, this ensemble is more sophisticated. Instead of merely averaging, it inputs scores from different models in unison. In particular, we increased the number of input scores from \(200\) to \(600\) when combining the three models. The experimental results highlighted the superior performance of the Q-stack-based ensemble, especially on the N-SRE dataset and the VN-Mix containing the corresponding dataset. Conventional ensemble techniques, on the other hand, exhibited a decrement in performance on the N-SRE dataset, attributed to some models' limited exposure to telephone speech during their training. ## 4 Conclusion In the domain of speaker verification, session variability is a well-known factor that can lead to performance degradation. Traditional methods often aim to modify or enhance the speaker embedding to handle this issue. Contrary to this, we suggest a novel approach; rather than adjusting the speaker embedding, we propose that session information should be treated as a separate entity. Comprehensive experiments, spanning a variety of models and datasets, demonstrate that the proposed method not only mitigates the effects of session variability but also has valuable implications for model ensemble and score calibration. \begin{table} \begin{tabular}{l|c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{EER(\%)} & \multicolumn{3}{c|}{RawNet3} & \multicolumn{3}{c|}{ECAPA-TDNN} & \multicolumn{3}{c}{Conformer} \\ & Vox1-O & N-SRE & VN-Mix & VC-Mix & Vox1-O & N-SRE & VN-Mix & VC-Mix & Vox1-O & N-SRE & VN-Mix & VC-Mix \\ \hline Baseline & 1.11 & 13.52 & 10.51 & 3.32 & 0.77 & 11.29 & 6.90 & 2.17 & 0.70 & 8.70 & 3.48 & 1.99 \\ \hline Score comp & 1.12 & 13.33 & 8.91 & 3.05 & 0.75 & 10.92 & 5.84 & 2.02 & 0.69 & 8.58 & 3.43 & 1.88 \\ Q-stack & 1.06 & 12.98 & 7.34 & 3.03 & 0.71 & 10.64 & 4.22 & 1.98 & 0.65 & 8.39 & 3.34 & 1.51 \\ \hline \hline \end{tabular} \end{table} Table 1: A comparison of the performances using different models and evaluation sets. “Baseline” shows results from the usual speaker embedding. “Score comp” shows the outcomes when session variability is compensated at the score level. “Q-stack” denotes results when session variability is addressed using session embedding complemented by an additional classifier. \begin{table} \begin{tabular}{l c c c c} \hline \hline EER(\%) & Vox1-O & N-SRE & VN-Mix & VC-Mix \\ \hline Single best & 0.70 & 8.70 & 3.48 & 1.99 \\ Averaging scores & 0.63 & 8.88 & 5.16 & 1.97 \\ Proposed & 0.56 & 8.14 & 3.17 & 1.44 \\ \hline \hline \end{tabular} \end{table} Table 2: A comparison of the effect of the ensemble methods. “Single best” shows the top-performing model on its own. “Averaging scores” displays results when we combine scores from several models the usual way. “Proposed” gives results using our new ensemble method with Q-stack.
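For completeness, here is a sketch of a classifier with the Q-stack dimensions quoted in Section 2.3 (200 stacked similarities in, three fully-connected layers of 400 nodes with leaky ReLU and dropout, two output nodes); the dropout rate and exact layer ordering are assumptions.

```python
# Sketch of a Q-stack-style classifier operating on stacked similarity scores.
import torch.nn as nn

class QStackClassifier(nn.Module):
    def __init__(self, n_scores: int = 200, hidden: int = 400, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_scores, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 2),          # same speaker vs. different speakers
        )

    def forward(self, stacked_similarities):
        # stacked_similarities: (batch, 200) = 100 speaker + 100 session similarities
        return self.net(stacked_similarities)
```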
In the field of speaker verification, session or channel variability poses a significant challenge. While many contemporary methods aim to disentangle session information from speaker embeddings, we introduce a novel approach that uses an additional embedding to represent the session information. This is achieved by training an auxiliary network appended to the speaker embedding extractor, which remains fixed during this training process. This results in two similarity scores: one for the speaker information and one for the session information. The latter acts as a compensator for the former, which might be skewed due to session variations. Our extensive experiments demonstrate that session information can be effectively compensated for without retraining the embedding extractor.
2301.00116
Diffusion within pores fully revealed by magnetic resonance
Probing the transport of fluids within confined domains is important in many areas including material science, catalysis, food science, and cell biology. The diffusion propagator fully characterizes the diffusion process, which is highly sensitive to the confining boundaries as well as the structure within enclosed pores. While magnetic resonance has been used extensively to observe various features of the diffusion process, its full characterization has been elusive. Here, we address this challenge by employing a special sequence of magnetic field gradient pulses for measuring the diffusion propagator, which allows for `listening to the drum' and determining not only the pore's shape but also diffusive dynamics within it.
Evren Özarslan, Cem Yolcu, Alfredo Ordinola, Deneb Boito, Tom Dela Haije, Mathias Højgaard Jensen, Magnus Herberthson
2022-12-31T04:13:59
http://arxiv.org/abs/2301.00116v1
# Diffusion within pores fully revealed ###### Abstract Probing the transport of fluids within confined domains is important in many areas including material science, catalysis, food science, and cell biology. The diffusion propagator fully characterizes the diffusion process, which is highly sensitive to the confining boundaries as well as the structure within enclosed pores. While magnetic resonance has been used extensively to observe various features of the diffusion process, its full characterization has been elusive. Here, we address this challenge by employing a special sequence of magnetic field gradient pulses for measuring the diffusion propagator, which allows for 'listening to the drum' and determining not only the pore's shape but also diffusive dynamics within it. The diffusion propagator indicates the probability that a particle located at position \(\mathbf{x}\) moves to \(\mathbf{x}^{\prime}\) between two specified times. The diffusion propagator fully describes the diffusive motion in environments having restricting or semi-permeable walls, spatially varying diffusivity, external forces, etc. Let us consider a time-invariant diffusion scenario in \(d\) dimensions within a closed and connected domain \(\Omega\) under the dimensionless potential energy field \(U(\mathbf{x})\). The diffusion propagator for a time interval of duration \(t\), \(p(\mathbf{x}^{\prime},t|\mathbf{x})\), is then the solution to the system of equations \[\nabla\cdot\left(\mathbf{D}(\mathbf{x}^{\prime})e^{-U(\mathbf{x}^{\prime })}\nabla e^{U(\mathbf{x}^{\prime})}p(\mathbf{x}^{\prime},t|\mathbf{x})\right) =\frac{\partial p(\mathbf{x}^{\prime},t|\mathbf{x})}{\partial t} \tag{1a}\] \[\lim_{t\to 0}p(\mathbf{x}^{\prime},t|\mathbf{x}) =\delta(\mathbf{x}^{\prime}-\mathbf{x})\] (1b) \[\hat{\mathbf{n}}\cdot\mathbf{D}(\mathbf{x}^{\prime})\,e^{-U(\mathbf{x}^{ \prime})}\,\nabla e^{U(\mathbf{x}^{\prime})}p(\mathbf{x}^{\prime},t|\mathbf{x}) =0,\quad\mathbf{x}^{\prime}\in\partial\Omega\, \tag{1c}\] where \(\nabla\) is a vector of partial derivatives with respect to the components of \(\mathbf{x}^{\prime}\), and \(\hat{\mathbf{n}}\) is the surface normal at \(\mathbf{x}^{\prime}\). The first of these is the diffusion equation with diffusion tensor \(\mathbf{D}(\mathbf{x}^{\prime})\). The initial condition is given by Eq. (1b), while the last equation is the reflective boundary condition. In this example, \(U(\mathbf{x})\), \(\mathbf{D}(\mathbf{x})\) and \(\Omega\) are quantities describing the fluid properties or a static picture of the environment all of which give rise to the particular diffusive dynamics, which is captured by the propagator. If the diffusion propagator is available, the diffusion tensor and the potential landscape can be determined, respectively, from its short-time and long-time behaviors, while \(\Omega\) is given by its support. Clearly, the diffusive process is an indirect yet powerful means of recovering the structure of the medium, making it relevant to many disciplines. Magnetic resonance has been the method of choice for many characterization studies due to its noninvasive nature and exquisite sensitivity to diffusion, which has been realized since its early days [4, 5]. In a typical MR experiment, the specimen is subjected to a magnetic field \(B_{z}\) whose direction defines the \(z\)-axis by convention. The magnetic moments of the spin-bearing particles exhibit coherence, synergistically yielding a magnetization vector that develops along the \(z\)-axis. 
By applying electromagnetic radiation at a specific frequency, magnetization due to the nuclei of the atoms of interest can be tilted towards the \(xy\)-plane, upon which it undergoes Larmor precession at an angular frequency given by \(\omega=\gamma B_{z}\), where \(\gamma\) is the gyromagnetic ratio, which is specific to the particular atomic nuclei being examined. Such precession leads to changing magnetic flux around it, inducing a potential difference in a nearby antenna, which is referred to as the MR signal. During the course of the MR experiment, different particles acquire different phase shifts \(\left(-\int\omega(\mathbf{x},t)\,\mathrm{d}t\right)\) due to the differences in the local magnetic field and experimental manipulations of \(B_{z}\). One such manipulation introduced by Stejskal and Tanner in 1965 involves incorporating pulsed magnetic field gradients (\(\nabla B_{z}\)) into MR acquisitions for performing diffusion measurements in a controllable way [6]; gradient pulses have also been the building blocks of MR imaging [7]. Stejskal and Tanner's experiment (see Figure 1a) featuring two gradient pulses of equal duration is still the most widely employed diffusion encoding method. Here, \(\mathbf{q}_{\mathrm{a}}\) denotes the integral of the gradient vector over its duration, multiplied by \(\gamma\). A spin bearing particle, whose average positions during the application of the first and second pulses denoted by \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), suffers phase shifts of \(\mathbf{q}_{\mathrm{a}}\cdot\mathbf{x}\) and \(-\mathbf{q}_{\mathrm{a}}\cdot\mathbf{x}^{\prime}\), respectively, due to the Larmor precession frequency being proportional to the magnetic field. Consequently, the MR signal intensity (divided by the intensity with \(\mathbf{q}_{\mathrm{a}}=0\)) is given by \[E_{\Delta}^{\mathrm{(a)}}(\mathbf{q}_{\mathrm{a}})=\int_{\Omega}\mathrm{d}\mathbf{x} \,\rho(\mathbf{x})\int_{\Omega}\mathrm{d}\mathbf{x}^{\prime}\,p(\mathbf{x}^{\prime},\Delta |\mathbf{x})\,e^{-i\mathbf{q}_{\mathrm{a}}\cdot(\mathbf{x}^{\prime}-\mathbf{x})}\;, \tag{2}\] where \(\rho(\mathbf{x})\) is the initial spin density and for simplicity, we assumed short pulses that encode the instantaneous positions of the particles. Conventional experiments for measuring self-diffusion start at the steady state, i.e., with \(\rho(\mathbf{x}^{\prime})=\lim_{t\to\infty}p(\mathbf{x}^{\prime},t|\mathbf{x})\) in the absence of sources and relaxation sinks. The signal is just the Fourier transform of the ensemble averaged propagator (EAP) defined by \[\bar{P}_{\Delta}(\mathbf{x}_{\mathrm{net}})=\int_{\Omega}\mathrm{d}\mathbf{x}\,\rho( \mathbf{x})\,p(\mathbf{x}+\mathbf{x}_{\mathrm{net}},\Delta|\mathbf{x})\;, \tag{3}\] where \(\mathbf{x}_{\mathrm{net}}=\mathbf{x}^{\prime}-\mathbf{x}\) is the net displacement vector. Thus, EAP can be computed from the inverse Fourier transform of the signal \[\bar{P}_{\Delta}(\mathbf{x}_{\mathrm{net}})=\frac{1}{(2\pi)^{d}}\int_{\mathbb{R}^{d }}\mathrm{d}\mathbf{q}_{\mathrm{a}}\,e^{i\mathbf{q}_{\mathrm{a}}\cdot\mathbf{x}_{\mathrm{ net}}}\,E_{\Delta}^{(a)}(\mathbf{q}_{\mathrm{a}})\;. \tag{4}\] The EAP is a substantially compromised version of the propagator, indicating the likelihood of net displacements averaged for all spins irrespective of where they are within the structure. 
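As a small numerical check of Eqs. (2)-(4), the sketch below evaluates the inverse Fourier integral for free one-dimensional diffusion, where the ensemble-averaged propagator is known to be Gaussian; the diffusivity, diffusion time and q-space sampling are arbitrary illustrative values.

```python
# Recover the 1-D ensemble-averaged propagator from the Stejskal-Tanner signal, Eq. (4).
import numpy as np

D, Delta = 2.0e-9, 20e-3                        # diffusivity [m^2/s], diffusion time [s]
q = np.linspace(-6e5, 6e5, 1201)                # q-space samples [1/m]
E = np.exp(-q**2 * D * Delta)                   # Eq. (2) evaluated for free diffusion

x_net = np.linspace(-40e-6, 40e-6, 401)         # net displacements [m]
dq = q[1] - q[0]
# Discretised Eq. (4): P(x_net) = (1/2pi) * sum_q exp(i q x_net) E(q) dq
P = (np.exp(1j * np.outer(x_net, q)) @ E).real * dq / (2 * np.pi)

P_true = np.exp(-x_net**2 / (4 * D * Delta)) / np.sqrt(4 * np.pi * D * Delta)
print(np.max(np.abs(P - P_true)) / P_true.max())   # relative error; should be tiny
```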
Despite this limitation, it exhibits very interesting features enabling some understanding of the underlying structure, thus has been widely utilized in characterizing porous media [9, 10, 11] as well as tissues [12, 13]. Recently, Laun et al. introduced another two-pulse experiment, one pulse being long, the other narrow [8] as shown in Fig. 1b. Assuming closed pores and uniform structure within, the particles visit every site within the pore with equal probability during the application of the long pulse. Thus, the positional average of each and every trajectory is very tightly distributed around the pore's center-of-mass. As such, the long pulse has no effect other than diminishing the integral of the waveform, which is a necessary condition for making the signal independent of the pore's position within the specimen. As a result, the signal from all pores add up, generating a detectable signal level even for a specimen comprising small amount of fluid. If the second pulse is short, the sequence simply introduces a phase shift proportional to each spin's location. The total signal for a connected pore is then given by \[E^{\rm(b)}(\mathbf{q}_{\rm b})=\int_{\tilde{\Omega}}\mathrm{d}\mathbf{x}\,\tilde{\rho}( \mathbf{x})\,e^{-i\mathbf{q}_{\rm b}\cdot\mathbf{x}}\, \tag{5}\] where \(\mathbf{x}\) is the position of the spin with respect to the pore's center-of-mass located at \(\mathbf{x}_{\rm cm}\) while \(\tilde{\rho}(\mathbf{x})=\rho(\mathbf{x}+\mathbf{x}_{\rm cm})\) and \(\tilde{\Omega}\) indicates the domain translated so that the center of mass of the pore is at the origin. Thus, the sequence is indeed "an imaging experiment in disguise," [8] making it possible to obtain the image of the pore indicator function through an inverse Fourier transform of \(E^{\rm(b)}(\mathbf{q}_{\rm b})\). In more general terms, the obtained quantity is the steady-state distribution of the fluid [14], thus not informative of the diffusion process. ### Measuring the diffusion propagator Here, we consider the sequence in Figure 1c, which combines the key elements of the two sequences discussed above. The long pulse is there so that the integral of the waveform vanishes, and contributions from all pores are independent of their position within the sample. The two subsequent pulses \(\mathbf{q}\) and Figure 1: The diffusion encoding pulse sequences considered. (a) Stejskal-Tanner sequence [6] allows the measurement of the ensemble average propagator. (b) The gradient waveform introduced by Laun et al. [8] enables measurement of the long diffusion time limit of the propagator. (c) The pulse sequence introduced here makes it possible to map the diffusion propagator. introduce phase shifts that depend on the particles' positions during their application (in a frame of reference whose origin is at \(\mathbf{x}_{\rm cm}\)--the center of mass of the fluid filling up the pore), denoted by \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), respectively. When the second and third pulses are short, the signal is given by \[E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{q}^{\prime})=\int_{\tilde{\Omega}}{\rm d}\mathbf{x} \,\tilde{\rho}(\mathbf{x})\int_{\tilde{\Omega}}{\rm d}\mathbf{x}^{\prime}\,\tilde{p}( \mathbf{x}^{\prime},\Delta|\mathbf{x})\,e^{-i(\mathbf{q}\cdot\mathbf{x}+\mathbf{q}^{\prime}\cdot\bm {x}^{\prime})}\;, \tag{6}\] where \(\tilde{p}(\mathbf{x}^{\prime},\Delta|\mathbf{x})=p(\mathbf{x}^{\prime}+\mathbf{x}_{\rm cm}, \Delta|\mathbf{x}+\mathbf{x}_{\rm cm})\). 
The propagator can be obtained via the \(2d\)-dimensional inverse Fourier transform of the signal \[W_{\Delta}(\mathbf{x},\mathbf{x}^{\prime}):=\frac{1}{(2\pi)^{2d}}\int_{\mathbb{R}^{d}}\mathrm{d}\mathbf{q}\,\int_{\mathbb{R}^{d}}\mathrm{d}\mathbf{q}^{\prime}\,E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{q}^{\prime})\,e^{i(\mathbf{q}\cdot\mathbf{x}+\mathbf{q}^{\prime}\cdot\mathbf{x}^{\prime})} \tag{7}\] along with an estimate of \(\tilde{\rho}(\mathbf{x})\), which is made available by the \(d\)-dimensional inverse Fourier transform of the subset of the data with \(\mathbf{q}^{\prime}=\mathbf{0}\). In other words, the diffusion propagator is given by \[\tilde{p}(\mathbf{x}^{\prime},\Delta|\mathbf{x})=\frac{\int_{\mathbb{R}^{d}}\mathrm{d}\mathbf{q}\,e^{i\mathbf{q}\cdot\mathbf{x}}\int_{\mathbb{R}^{d}}\mathrm{d}\mathbf{q}^{\prime}\,e^{i\mathbf{q}^{\prime}\cdot\mathbf{x}^{\prime}}\,E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{q}^{\prime})}{(2\pi)^{d}\int_{\mathbb{R}^{d}}\mathrm{d}\mathbf{q}\,e^{i\mathbf{q}\cdot\mathbf{x}}\,E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{0})}\;. \tag{8}\] #### Structure within the pore We demonstrate the estimation of the diffusion propagator of a simulated one-dimensional pore (interval). The pore is partitioned into two exchanging compartments, with diffusion coefficients \(D_{\rm L}\) and \(D_{\rm R}\), separated by a membrane of permeability \(w\). The walls of the pore are purely reflective. The simulations summarized in Figure 2 illustrate the agreement of the reconstructed propagator (second row) with the true propagators (top row) at three different time intervals. The associated EAPs are depicted in the bottom row. The presence of a membrane within the pore space is conspicuous in the estimated propagators while the EAPs are not descriptive. #### Structural dispersity The propagator-sensitive sequence of Figure 1c can be used to characterize porous media having structural dispersity. To this end, we consider such a specimen having \(N\) isolated pores where the \(n\)th pore has the non-attenuated signal fraction \(f_{n}\). The Fourier transforms of the signals \(E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{q}^{\prime})\) and \(E_{\Delta}^{\rm(c)}(\mathbf{q},\mathbf{0})\) yield \[W_{\Delta}(\mathbf{x},\mathbf{x}^{\prime})=\sum_{n=1}^{N}f_{n}\,\tilde{\rho}_{n}(\mathbf{x})\,\tilde{p}_{n}(\mathbf{x}^{\prime},\Delta|\mathbf{x}) \tag{9a}\] \[\tilde{\rho}(\mathbf{x})=\sum_{n=1}^{N}f_{n}\,\tilde{\rho}_{n}(\mathbf{x})\;, \tag{9b}\] which are just weighted averages of the respective quantities for all pores translated so that the pores' centers of mass coincide. Numerous quantities can be introduced for characterizing the underlying dispersity in the specimen. For example, a dimensionless 'variance map' can be obtained through the expression \[\sigma_{g}(\mathbf{x}):=\frac{(W_{\infty}(\mathbf{x},\mathbf{x})-\tilde{\rho}(\mathbf{x})^{2})^{1/2}}{\rho_{\rm max}}\;, \tag{10}\] where \(\rho_{\max}:=\tilde{\rho}(\mathbf{x}_{m})\) is the maximum value of \(\tilde{\rho}(\mathbf{x})\). A dispersity index can be introduced through \(\mathrm{DI}:=\sigma_{g}^{2}(\mathbf{x}_{m})\), which is equal to \(\langle V^{-1}\rangle\langle V\rangle-1\) in the absence of external forces when all pores contribute at \(\mathbf{x}_{m}\) and the signal fraction \(f_{n}\) is proportional to the pore volume \(V_{n}\). Here, \(\langle\cdot\rangle\) denotes averaging over all pores.
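Equation (8) can likewise be exercised numerically. The sketch below synthesises the signal \(E^{\rm(c)}_{\Delta}(q,q^{\prime})\) for a one-dimensional reflecting interval from a reference transition matrix, then recovers the propagator by the double inverse Fourier transform normalised by the \(q^{\prime}=0\) slice; the grid size, pore length, diffusivity and diffusion time are arbitrary illustrative choices.

```python
# Numerical sketch of Eq. (8) for a 1-D reflecting interval.
import numpy as np
from scipy.linalg import expm

N, L, D, Delta = 64, 10e-6, 1.0e-9, 5e-3     # grid points, pore size [m], diffusivity, time
dx = L / N
x = (np.arange(N) + 0.5) * dx - L / 2        # positions relative to the pore centre

# discrete Laplacian with reflecting (zero-flux) boundaries
Lap = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
Lap[0, 0] = Lap[-1, -1] = -1.0
P = expm(D * Delta / dx**2 * Lap)            # reference propagator p(x'|x); columns sum to 1
rho = np.full(N, 1.0 / N)                    # uniform initial spin density

# forward model: E(q, q') = sum_{x, x'} rho(x) p(x'|x) exp(-i (q x + q' x'))
q = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # one q sample per grid point
F = np.exp(-1j * np.outer(q, x))             # Fourier matrix over positions
E = F @ (P * rho[np.newaxis, :]) @ F.T       # E[a, b] corresponds to (q', q) = (q[a], q[b])

# inversion, Eq. (8): joint density from E, marginal density from the q' = 0 slice
Finv = np.exp(1j * np.outer(x, q)) / N
W = (Finv @ E @ Finv.T).real                 # W[x', x] = rho(x) p(x'|x)
rho_rec = (Finv @ E[0, :]).real              # q' = 0 row yields the spin density
p_rec = W / rho_rec[np.newaxis, :]

print(float(np.max(np.abs(p_rec - P))))      # matches the reference at machine precision
```

With noise-free synthetic data the recovered matrix matches the reference transition matrix to numerical precision; in practice the q-space coverage and noise determine the attainable resolution.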
We shall now consider general \(\mathbf{x}=r\hat{\mathbf{u}}\) and \(\mathbf{x}^{\prime}=r^{\prime}\hat{\mathbf{u}}^{\prime}\) where \(r=|\mathbf{x}|\), \(r^{\prime}=|\mathbf{x}^{\prime}|\), and \(\hat{\mathbf{u}}\) and \(\hat{\mathbf{u}}^{\prime}\) indicate the directions of \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), respectively. One can define the quantity \[\Phi(\hat{\mathbf{u}},\hat{\mathbf{u}}^{\prime}):=\int_{0}^{\infty}W_{\infty}(r\hat{\mathbf{u}},r\hat{\mathbf{u}}^{\prime})\,r^{2d-1}\,\mathrm{d}r\,, \tag{11}\] which is constant for a medium composed of isotropic pores. It has peaks at \(\hat{\mathbf{u}}^{\prime}=\hat{\mathbf{u}}\) when the pores are anisotropic. If the pore shape has antipodal symmetry, \(\hat{\mathbf{u}}^{\prime}=-\hat{\mathbf{u}}\) will exhibit another peak. Similarly, for shapes that exhibit other symmetries (e.g., cross or star-shaped pores) there will be other peaks. In Figure 3, we illustrate the maps of the quantities in (9b)-(11) for ten different media with different compositions of two-dimensional pores.

Figure 2: Representative snapshots of the simulated propagator estimation experiment for exchanging intervals of length \(L_{\mathrm{L}}\) and \(L_{\mathrm{R}}\) having diffusivity \(D_{\mathrm{L}}\) and \(D_{\mathrm{R}}\), for the left and right compartments, respectively. Shown from top to bottom are: true propagator, estimated propagator, and EAP (true and estimated). The density plots are of \(p^{\prime}(x^{\prime},\Delta|x)=\tanh\frac{p(x^{\prime},\Delta|x)}{2^{7}(L_{\mathrm{L}}+L_{\mathrm{R}})}\) for better depiction. The true propagator is computed by its (truncated) spectral decomposition. Membrane position is emphasized by dashed lines. The relaxation time scales \(\tau_{\mathrm{L}}=L_{\mathrm{L}}^{2}/\pi^{2}D_{\mathrm{L}}\) and \(\tau_{\mathrm{R}}=L_{\mathrm{R}}^{2}/\pi^{2}D_{\mathrm{R}}\) correspond (roughly) to the process of diffusion within the compartments, and \(\tau_{\mathrm{ex}}=\sqrt{D_{\mathrm{L}}D_{\mathrm{R}}}/w^{2}\) to the exchange between them.

Figure 3: Maps derived from the long time diffusion measurements for different specimens. The axes of the \(\Phi\) maps vary between \(-\pi\) and \(\pi\). The DI values were estimated to be 0, 0.28, 0.28, \(3.8\times 10^{-5}\), \(5.0\times 10^{-5}\), 0.41, \(3.9\times 10^{-5}\), \(4.3\times 10^{-5}\), \(1.2\times 10^{-5}\), \(1.5\times 10^{-5}\) (top to bottom).

### 'Hearing the drum'

In 1966, Kac posed the now famous question "Can one hear the shape of a drum?" [15], which pertains to recovering the geometry of an enclosing boundary from the eigenspectrum of the Laplacian operator. The pulse sequence of Laun et al. (depicted in Figure 1b) demonstrated that the shape can be recovered from the MR signal. By enabling the measurement of the diffusion propagator, our gradient waveform illustrated in Figure 1c provides access to the diffusion dynamics within the pore and indeed to the spectrum of the Laplacian. To see this, one can exploit the eigenfunction expansion of the propagator, which leads to the expression \[\int_{\Omega}p(\mathbf{x},t|\mathbf{x})\,\mathrm{d}\mathbf{x}=\int_{0}^{\infty}g(\lambda)\,e^{-\lambda t}\,\mathrm{d}\lambda. \tag{12}\] Clearly, the density of states, \(g(\lambda)\), is accessible from the propagator through an inverse Laplace transform while the propagator is obtained from the signal of the waveform in Figure 1c through Eq. (8).
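As a sketch of the relation in Eq. (12), the code below evaluates the return-to-origin integral \(\int p(x,t|x)\,\mathrm{d}x\) for diffusion between two parallel reflecting plates, where the Laplacian spectrum is known in closed form; the plate separation and diffusivity are assumed values.

```python
import numpy as np

# Diffusion between reflecting parallel plates separated by L: the Neumann
# Laplacian eigenvalues are lambda_n = D (n pi / L)^2, n = 0, 1, 2, ...
L, D = 10e-6, 1e-9          # plate separation (m) and diffusivity (m^2/s): assumed values
lams = D * (np.arange(0, 200) * np.pi / L) ** 2

# Left-hand side of Eq. (12): sum_n exp(-lambda_n t), i.e. the Laplace transform
# of the density of states g(lambda) = sum_n delta(lambda - lambda_n)
t = np.logspace(-4, 1, 50)
trace = np.array([np.exp(-lams * ti).sum() for ti in t])

# At long times only lambda_0 = 0 survives, so the trace tends to 1; a numerical
# inverse Laplace transform of `trace` would recover peaks at the eigenvalues `lams`.
print(trace[0], trace[-1])
```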
In Figure 4, we illustrate the recovery of the density of states from simulated signals for the one-dimensional scenario involving diffusion in the direction perpendicular to two parallel plates. Figure 4: How magnetic resonance ‘‘hears the drum.”’ (a) First row: MR signal profiles obtained using the diffusion encoding in Figure 1c. Second row: Propagators obtained via Eq. (8). (b) Left-hand-side of Eq. (12) plotted against time and (c) its inverse Laplace transform revealing the density of states function. The numerical Laplace inversion [16] was performed using the package at [https://github.com/caizkun/pyilt](https://github.com/caizkun/pyilt). ## Concluding remarks In conclusion, we introduced a new technique, which facilitates the characterization of the diffusion process within pores in full. This is accomplished by mapping the diffusion propagator through Fourier transforms and can be related to the density of states function making the shape of the drum "heard" by magnetic resonance. The technique allows for mapping the structure within closed pores as well as characterizing disperse specimens with unprecedented detail. ## Acknowledgments EO thanks Carl-Fredrik Westin for a stimulating conversation, and Nicolas Moutal and Denis Grebenkov for sharing their code on diffusion separated by semi-permeable membranes [17].
Investigating the transport of fluids within confined domains is important in many fields, including materials science, catalysis, food science, and cell biology. The diffusion propagator fully characterizes the diffusion process and is highly sensitive to the confining boundaries and the internal structure. Magnetic resonance is widely employed to observe various features of the diffusion process, but its complete characterization has remained a challenge. To address this challenge, the diffusion propagator is measured here using a special magnetic field gradient pulse sequence. This makes it possible to "hear the drum," determining not only the shape of the pore but also the diffusive dynamics within it.
2309.04552
Motion Compensated Unsupervised Deep Learning for 5D MRI
We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies the scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, the current reconstruction algorithms for 5D MRI take very long computational time, and their outcome is greatly dependent on the uniformity of the binning of the acquired data into different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of the deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects.
Joseph Kettelkamp, Ludovica Romanin, Davide Piccini, Sarv Priya, Mathews Jacob
2023-09-08T18:46:42
http://arxiv.org/abs/2309.04552v1
# Motion Compensated Unsupervised Deep Learning for 5D MRI+ ###### Abstract We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies the scan planning, improves patient comfort, and offers several clinical benefits over breath-held 2D exams, including isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, the current reconstruction algorithms for 5D MRI take very long computational time, and their outcome is greatly dependent on the uniformity of the binning of the acquired data into different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory bin as Fourier samples of the deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information. The deformation maps and the template are then jointly estimated from the measured data. The cardiac and respiratory phases are estimated from 1D navigators using an auto-encoder. The proposed algorithm is validated on 5D bSSFP datasets acquired from two subjects. Keywords:Free Running MRI 5D MRI Cardiac MRI. ## 1 Introduction Magnetic Resonance Imaging (MRI) is currently the gold standard for assessing cardiac function. It provides detailed images of the heart's anatomy and enables accurate measurements of parameters such as ventricular volumes, ejection fraction, and myocardial mass. Current clinical protocols, which rely on serial breath-held imaging of the different cardiac slices with different views, often require long scan times and are associated with reduced patient comfort. Compressed sensing [1], deep learning [10, 6], and motion-compensated approaches [13] were introduced to reduce the breath-hold duration in cardiac CINE MRI. Unfortunately, many subject groups, including pediatric and older subjects, cannot comply with even the shorter breath-hold durations. 5D free-breathing MRI approaches that rely on 3D radial readouts [3, 11] have been introduced to overcome the above challenges. These methods resolve the respiratory and cardiac motion from either the center of k-space or Superior-Inferior (SI) k-space navigators. The k-space data is then binned into different cardiac/respiratory phases and jointly reconstructed using compressed sensing. The main benefit of this motion-resolved strategy is the ability to acquire the whole heart with isotropic spatial resolution as high as 1mm\({}^{3}\). This approach allows the images to be reformatted into different views to visualize specific anatomical regions at different cardiac and/or respiratory phases. Despite the great potential of 5D MRI, current methods have some challenges that limit their use in routine clinical applications. Firstly, the motion-resolved compressed sensing reconstruction is very computationally intensive, and it can take several hours to have a dynamic 3D volume. And secondly, compressed sensing reconstructions require fine tuning of several regularization parameters, which greatly affect the final image quality, depending on the undersampling factor and the binning uniformity. The main focus of this work is to introduce a motion-compensated reconstruction algorithm for 5D MRI. 
The proposed approach models the images at every time instance as a deformed version of a static image template. Such an image model may not be a good approximation in 2D schemes [13], where the organs may move in and out of the slice. However, the proposed model is more accurate for the 3D case. We introduce an auto-encoder to estimate the cardiac and respiratory phases from the superior-inferior (SI) k-t space navigators. We disentangle the latent variables to cardiac and respiratory phases by using the prior information of the cardiac and respiratory rates. The latent variables allow us to bin the data into different cardiac and respiratory phases. We use an unsupervised deep learning algorithm to recover the image volumes from the clustered data. The algorithm models the deformation maps as points on a smooth low-dimensional manifold in high dimensions, which is a non-linear function of the low-dimensional latent vectors. We model the non-linear mapping by a Convolutional Neural Network (CNN). When fed with the corresponding latent vectors, this CNN outputs the deformation maps corresponding to a specific cardiac or respiratory phase. We learn the parameters of the CNN and the image template from the measured k-t space data. We note that several manifold based approaches that model the images in the time series by a CNN were introduced in the recent years. [15, 5, 9]. All of these methods rely on motion resolved reconstruction, which is conceptually different from the proposed motion compensated reconstruction. We validate the proposed scheme on cardiac MRI datasets acquired from two healthy volunteers. The results show that the approach is capable of resolving the cardiac motion, while offering similar image quality for all the different phases. In particular, the motion-compensated approach can combine the image information from all the motion states to obtain good quality images. ## 2 Methods ### Acquisition scheme In vivo acquisitions were performed on a 1.5T clinical MRI scanner (MAGNETOM Sola, Siemens Healthcare, Erlangen, Germany). The free-running research sequence used in this work is a bSSFP sequence, in which all chemically shift-selective fat saturation pulses and ramp-up RF excitations were removed, in order to reduce the specific absorption rate (SAR) and to enable a completely uninterrupted acquisition [8]. K-space data were continuously sampled using a 3D golden angle kooshball phyllotayis trajectory [7], interleaved with the acquisition of a readout oriented along the superior-inferior (SI) direction for cardiac and respiratory self-gating [11]. The main sequence parameters were: radio frequency excitation angle of 55 with an axial slab-selective sinc pulse, resolution of 1.1 mm3, FOV of 220 mm3, TE/TR of 1.87/3.78 ms, and readout bandwidth of 898 Hz/pixel. The total fixed scan time was 7:58 minutes. ### Forward model We model the measured k-space data at the time instant \(t\) as the multichannel Fourier measurements of \(\boldsymbol{\rho}_{t}=\boldsymbol{\rho}(\mathbf{r},t)\), which is the image volume at the time instance \(t\): \[\mathbf{b}_{t}=\underbrace{\mathbf{F}_{\mathbf{k}_{t}}\,\mathbf{C}\,\boldsymbol {\rho}_{t}}_{\mathcal{A}_{t}(\boldsymbol{\rho}_{t})} \tag{1}\] Here, \(\mathbf{C}\) denotes the multiplication of the images by the multi-channel coil sensitivities, while \(\mathbf{F}_{\mathbf{k}}\) denotes the multichannel Fourier operator. 
\(\mathbf{k}_{t}\) denotes the k-space trajectory at the time instant \(t\).In this work, we group 22 radial spokes corresponding to a temporal resolution of 88 ms. An important challenge associated with the bSSFP acquisition without intermittent fat saturation pulses is the relatively high-fat signal compared to the myocardium and blood pool. Traditional parallel MRI and coil combination strategies often result in significant streaking artifacts from the fat onto the myocardial regions, especially in the undersampled setting considered in this work. We used the coil combination approach introduced in [4] to obtain virtual channels that are maximally sensitive to the cardiac region. A spherical region covering the heart was manually selected as the region of interest (ROI), while its complement multiplied by the distance function to the heart was chosen as the noise mask. We chose the number of virtual coils that preserve 75% of the energy within the ROI. This approach minimizes the strong fat signals, which are distant from the myocardium. We used the JSENSE algorithm [14, 12] to compute the sensitivity maps of the virtual coils. ### Image and motion models The overview of the proposed scheme is shown in Fig. 1. The recovery of \(\rho_{t}\) from very few of their measurements \(\mathbf{b}_{t}\) is ill-posed. To constrain the recovery, we model \(\mathbf{\rho}_{t}\) as the deformed version of a static image template \(\mathbf{\eta}(\mathbf{r})\): \[\rho(\mathbf{r},t)=\mathcal{I}\left[\mathbf{\eta},\phi_{t}(\mathbf{r})\right] \tag{2}\] Here, \(\mathbf{\phi}_{t}\) is the deformation map and the operator \(\mathcal{I}\) denotes the deformation of \(\mathbf{\eta}\). We implement (2) using cubic Bspline interpolation. This approach allows us to use the k-space data from all the time points to update the template, once the motion maps \(\mathbf{\phi}_{t}\) are estimated. Classical MoCo approaches use image registration to estimate the motion maps \(\mathbf{\phi}_{t}\) from approximate (e.g. low-resolution) reconstructions of the images \(\rho(\mathbf{r},t)\). However, the quality of motion estimates depends on the quality of the reconstructed images, which are often low when we aim to recover the images at a fine temporal resolution (e.g. 88 ms). We propose to estimate the motion maps directly from the measured \(k-t\) space data. In particular, we estimate the motion maps \(\mathbf{\phi}_{t}\) such that the multi-channel measurements of \(\rho(\mathbf{r},t)\) specified by (2) match the measurements \(\mathbf{b}_{t}\). We also estimate the template \(\mathbf{\eta}\) from the k-t space data of all the time points. To constrain the recovery of the deformation maps, we model the deformation maps as the output of a convolutional neural network \[\phi_{t}=\mathcal{G}_{\theta}[\mathbf{z}_{t}],\] in response to low-dimensional latent vectors \(\mathbf{z}_{t}\). Here, \(\mathcal{G}_{\theta}\) is a convolutional neural network, parameterized by the weights \(\theta\). We note that this approach constrains the deformation maps as points on a low-dimensional manifold. They are obtained as non-linear mappings of the low-dimensional latent vectors \(\mathbf{z}_{t}\), which capture the motion attributes. The non-linear mapping itself is modeled by the CNN. ### Estimation of latent vectors from SI navigators We propose to estimate the latent vectors \(\mathbf{z}_{t}\) from the SI navigators using an auto-encoder. 
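To make the measurement and motion models concrete, the sketch below implements a single-frame version of the encoding operator in Eq. (1) and the template deformation of Eq. (2); a Cartesian FFT with a sampling mask stands in for the multichannel non-uniform Fourier operator along the radial trajectory, third-order spline interpolation from SciPy stands in for the cubic B-spline deformation, and all sizes, sensitivities, and deformations are placeholder assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def deform(template, phi):
    """I[eta, phi_t] of Eq. (2): warp the template with the displacement field phi.

    template : (nx, ny, nz) static image template eta
    phi      : (3, nx, ny, nz) displacement field (in voxels); in the proposed scheme
               this is the CNN output G_theta[z_t] for the latent code z_t
    """
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in template.shape], indexing='ij'))
    return map_coordinates(template, grid + phi, order=3, mode='nearest')

def forward(rho_t, coils, mask_t):
    """A_t(rho_t) of Eq. (1): coil weighting, Fourier transform, k-space sampling."""
    coil_images = coils * rho_t[None]                         # C rho_t
    kspace = np.fft.fftn(coil_images, axes=(-3, -2, -1), norm='ortho')
    return kspace[:, mask_t]                                   # b_t = F_{k_t} C rho_t

# Placeholder template, deformation, coil maps, and sampling mask (assumed sizes)
nx = ny = nz = 32
eta = np.random.rand(nx, ny, nz)
phi = 0.5 * np.random.randn(3, nx, ny, nz)                     # stands in for G_theta[z_t]
coils = np.random.randn(4, nx, ny, nz) + 1j * np.random.randn(4, nx, ny, nz)
mask = np.random.rand(nx, ny, nz) < 0.05                       # stands in for trajectory k_t

rho_t = deform(eta, phi)                                       # Eq. (2)
b_t = forward(rho_t, coils, mask)                              # Eq. (1)
print(rho_t.shape, b_t.shape)
```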
In this work, we applied a low pass filter with cut-off frequency of 2.8 Hz to the SI navigators to remove high-frequency oscillations. Similarly, an eighth-degree Chebyshev polynomial is fit to each navigator voxel and is subtracted from the signal to remove drifts. The auto-encoder involves an encoder that generates the latent vectors \(\mathbf{z}_{t}=\mathcal{E}_{\varphi}(\mathbf{y}_{t})\), where \(\mathbf{y}_{t}\) are the navigator signals. The decoder reconstructs the navigator signals as \(\mathbf{y}_{t}=\mathcal{D}_{\psi}(\mathbf{z}_{t})\). In this work, we restrict the dimension of the latent space to three, two corresponding to respiratory motion and one corresponding to cardiac motion. To encourage the disentangling of the latent vectors into respiratory and cardiac signals, we use the prior information on the range of cardiac and respiratory frequencies as in [2]. We solve for the auto-encoder parameters from the navigator signals of each subject as \[\{\varphi^{*},\psi^{*}\}=\arg\min_{\varphi,\psi}\left\|\mathcal{F}\left\{\mathcal{D}_{\psi}\left(\underbrace{\mathcal{E}_{\varphi}(\mathbf{Y})}_{\mathbf{Z}}\right)-\mathbf{Y}\right\}\right\|_{1}+\lambda\,\left\|\mathbf{Z}\bigotimes\mathcal{B}\right\|_{2}^{2} \tag{3}\] Here, \(\mathbf{Z}\in\mathbb{R}^{3\times T}\) and \(\mathbf{Y}\) are matrices whose columns are the latent vectors and the navigator signals at different time points. \(\mathcal{F}\) is the Fourier transformation in the time domain. \(\bigotimes\) denotes the convolution of the latent vectors with band-stop filters \(\mathcal{B}\) with appropriate stop bands. In particular, the stopband of the respiratory latent vectors was chosen to be \(0.05-0.7\) Hz, while the stopband for the cardiac latent vectors was chosen as the complement of the respiratory stopband. We observe that the median-seeking \(\ell_{1}\) loss in the Fourier domain is able to offer improved performance compared to the standard \(\ell_{2}\) loss used in conventional auto-encoders.

### Motion compensated image recovery

Once the auto-encoder parameters \(\varphi,\psi\) described in (3) are estimated from the navigator signals of the subject, we derive the latent vectors as \(\mathbf{Z}=\mathcal{E}_{\varphi^{*}}(\mathbf{Y})\). Using the latent vectors, we pose the joint recovery of the static image template \(\boldsymbol{\eta}(\mathbf{r})\) and the deformation maps as \[\{\boldsymbol{\eta}^{*},\theta^{*}\}=\arg\min_{\boldsymbol{\eta},\theta}\sum_{t=1}^{T}\|\mathcal{A}_{t}\left(\rho(\mathbf{r},t)\right)-\mathbf{b}_{t}\|^{2}\ \ \text{where}\ \ \rho(\mathbf{r},t)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[\mathbf{z}_{t}]\right) \tag{4}\] The above optimization scheme can be solved using stochastic gradient optimization. Following optimization, one can generate the real-time images shown in Fig. 3 and Fig. 4 as \(\mathcal{I}\left(\boldsymbol{\eta}^{*},\mathcal{G}_{\theta^{*}}[\mathbf{z}_{t}]\right)\). The optimization scheme described in (4) requires \(T\) non-uniform fast Fourier transform steps per epoch. When the data is recovered with a high temporal resolution, this approach translates to a high computational complexity. To reduce the computational complexity, we introduce a clustering scheme. In particular, we use k-means clustering to group the data into \(N\ll T\) clusters. This approach allows us to pool the k-space data from multiple time points, all with similar latent codes.
\[\{\boldsymbol{\eta}^{*},\theta^{*}\}=\arg\min_{\boldsymbol{\eta},\theta}\sum_ {n=1}^{N}\|\mathcal{A}_{n}\left(\rho(\mathbf{r},n)\right)-\mathbf{b}_{n}\|^{2 }\ \ \text{where}\ \ \rho(\mathbf{r},n)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[ \mathbf{z}_{n}]\right) \tag{5}\] The above approach is very similar to (4). Here, \(\mathbf{b}_{n}\), \(\mathcal{A}_{n}\), and \(\mathbf{z}_{n}\) are the grouped k-space data, the corresponding forward operator, and the centroid of the cluster, respectively. Note that once \(N\to T\), both approaches become equivalent. In this work, we used \(N=30\). Once the training is done, one can still generate the real-time images as \(\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[\mathbf{z}_{t}]\right)\). ### Motion resolved 5D reconstruction for comparison We compare the proposed approach against a compressed sensing 5D reconstruction. In particular, we used the SI navigators to bin the data into 16 bins, consisting of four cardiac and four respiratory phases as described. We use a total variation regularization similar to [2] to constrain the reconstructions. We determined the regularization parameter manually to obtain the best reconstructions. We note that the dataset with 6.15 minute acquisition is a highly undersampled setting. In addition, because this dataset was not acquired with intermittent fat saturation pulses, it suffers from streaking artifacts that corrupt the reconstructions. ## 3 Results We show the results from the two normal volunteers in Fig. 3 and 4, respectively. The images correspond to 2-D slices extracted from the 3D volume, corresponding to different cardiac and respiratory phases. We also show the time profile of the real-time reconstructions \(\rho(\mathbf{r},t)=\mathcal{I}\left(\boldsymbol{\eta},\mathcal{G}_{\theta}[ \mathbf{z}_{t}]\right)\) along the red line shown in the top row. We note that the approach can capture the cardiac and respiratory motion in the data. The different phase images shown in the figure were extracted manually from the real-time movies. Figure 1: Overview of the proposed reconstruction algorithm. In the first step shown in (a), we estimate the latent variables that capture the motion in the data using a constrained auto-encoder, as described in Fig. 3. The auto-encoder minimizes a cost function, which is the sum of an \(\ell_{1}\) data consistency term and a prior involving cardiac and frequency ranges. To reduce the computational complexity of the image reconstruction, we cluster the latent space using k-means algorithm as shown in (b). The cluster centers are fed in as inputs to the CNN denoted by \(\mathcal{G}_{\theta}\), which outputs the deformation maps \(\mathcal{G}_{\theta}[\mathbf{z}_{n}]\). We jointly optimize for both the template \(\eta\) and parameters \(\theta\) of the generator. Figure 3: Results from the first subject. (a) The top row shows a 2-D slice of the reconstructed 3D volume at diastole and systole, obtained using the proposed motion compensated approach. The bottom row shows the motion-resolved compressed sensing recovery of the same data. (b) shows the 1D projection versus time profile of the reconstructed datasets using the motion compensated (top) and motion resolved (bottom) approaches. Figure 2: Latent vectors estimated from the SI navigators (bottom curves)[11]. We note that the orange and the green curves estimated using the auto-encoder roughly follow the respiratory motion, while the blue curves capture the cardiac motion. 
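A sketch of the latent-space clustering step that leads from Eq. (4) to Eq. (5) is shown below: the latent codes are grouped with k-means and the k-space data of each cluster are pooled. The latent vectors, spoke counts, and readout lengths here are synthetic placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic latent codes z_t (3 x T in the paper; rows = time points here)
T, N = 4000, 30
Z = np.random.randn(T, 3)              # placeholder for the auto-encoder output E_phi(Y)

kmeans = KMeans(n_clusters=N, n_init=10, random_state=0).fit(Z)
labels, centroids = kmeans.labels_, kmeans.cluster_centers_   # centroids play the role of z_n

# Pool the k-space readouts (placeholders) of all time points within each cluster
kspace_per_frame = [np.random.randn(22, 512) for _ in range(T)]   # 22 spokes per frame
grouped = {n: np.concatenate([kspace_per_frame[t] for t in np.where(labels == n)[0]])
           for n in range(N)}

# The motion-compensated recovery of Eq. (5) then sums the data-consistency terms
# || A_n( I(eta, G_theta[z_n]) ) - b_n ||^2 over the N clusters instead of the T frames.
print(centroids.shape, grouped[0].shape)
```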
## 4 Discussion The comparisons in Fig. 3 and 4 show that the proposed approach is able to offer improved reconstructions, where the cardiac phases are well-resolved. We note that the motion resolved reconstruction of the different phases have different image quality, depending on the number of spokes in the specific phases. By contrast, the proposed motion compensated reconstructions are able to combine the data from different motion states; the improved data efficiency translates to reconstructions with reduced streaking artifacts. Additionally, the auto-encoder accurately characterized the SI navigator and disentangled the cardiac and respiratory latent vectors Fig. 2. We note that the comparison in this work is preliminary. The main focus of this work is to introduce the proposed motion-compensated reconstruction algorithm and the auto-encoder approach to estimate the latent vectors and to demonstrate its utility in 5D MRI. In our future work, we will focus on rigorous studies, including comparisons with 2D CINE acquisitions. Figure 4: Results from the second subject. (a) The top row shows a 2-D slice of the reconstructed 3D volume at diastole and systole, obtained using the proposed motion compensated approach. The bottom row shows the motion-resolved compressed sensing recovery of the same data. (b) shows the 1D projection versus time profile of the reconstructed datasets using the motion compensated (top) and motion resolved (bottom) approaches.
We propose an unsupervised deep learning algorithm for the motion-compensated reconstruction of 5D cardiac MRI data from 3D radial acquisitions. Ungated free-breathing 5D MRI simplifies scan planning and reduces the burden on the patient; compared with breath-held 2D exams, it also offers isotropic spatial resolution and the ability to reslice the data to arbitrary views. However, current reconstruction algorithms for 5D MRI require very long computation times, and their outcome depends strongly on how the acquired data are binned into the different physiological phases. The proposed algorithm is a more data-efficient alternative to current motion-resolved reconstructions. This motion-compensated approach models the data in each cardiac/respiratory phase as Fourier samples of a deformed version of a 3D image template. The deformation maps are modeled by a convolutional neural network driven by the physiological phase information.
2309.07073
The arions generation by magnetodipole waves of pulsars and magnetars in a constant magnetic field
The influence of the gravitational fields of pulsars and magnetars on the arion emission during the propagation of magnetodipole waves in a constant magnetic field has been evaluated. The solution of the equation was obtained and the flux of arions emitted by magnetodipole waves during their propagation in a constant magnetic field was found. It is shown that the amplitude of the born arion wave at a distance from the source of magnetodipole radiation of a pulsar or magnetar $(r\to\infty)$ in the considered case tends to a constant value. The intensity of the arion emission in the solid angle element and the amount of arion energy $\overline{I}$, emitted in all directions per unit time grow quadratically with increasing distance, traveled by the magnetodipole radiation of a pulsar or magnetar in a constant magnetic field. Such growth of the energy of the born arion wave is due to the fact that in the considered problem constant magnetic field is defined in the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in this form only in regions of space of finite dimensions, outside of which the force lines of their induction vector are curved. Therefore, it is possible to apply these results only in a region of space for which $r\leq L_{coh}<\infty$, where $L_{coh}$ is the coherence length, the distance at which the force lines of the induction vector can be considered as straight lines. An estimate for the value of the coupling constant of photons with arions is obtained.
V. I. Denisov, G. A. Dantsev, V. I. Priclonsky, I. P. Denisova, O. N. Gavrish
2023-09-09T21:25:29
http://arxiv.org/abs/2309.07073v1
# The arions generation by magnetodipole waves ###### Abstract The influence of the gravitational fields of pulsars and magnetars on the arion emission during the propagation of magnetodipole waves in a constant magnetic field has been evaluated. The solution of the equation was obtained and the flux of arions emitted by magnetodipole waves during their propagation in a constant magnetic field was found. It is shown that the amplitude of the born arion wave at a distance from the source of magnetodipole radiation of a pulsar or magnetar (\(r\rightarrow\infty\)) in the considered case tends to a constant value. The intensity of the arion emission in the solid angle element and the amount of arion energy \(\overline{I}\), emitted in all directions per unit time grow quadratically with increasing distance, traveled by the magnetodipole radiation of a pulsar or magnetar in a constant magnetic field. Such growth of the energy of the born arion wave is due to the fact that in the considered problem constant magnetic field is defined in the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in this form only in regions of space of finite dimensions, outside of which the force lines of their induction vector are curved. Therefore, it is possible to apply these results only in a region of space for which \(r\leq L_{coh}<\infty\), where \(L_{coh}\) is the coherence length, the distance at which the force lines of the induction vector can be considered as straight lines. An estimate for the value of the coupling constant of photons with arions is obtained. ## 1. Introduction In the scientific literature of recent years, the processes of photoproduction of various axion-like particles beyond the Standart Model: arions [1,2] axions [3-6], and dilatons [7-10] are actively discussed. These processes are currently regarded as the most realistic processes, with the help of which it is supposed [11-13] to carry out registration of axion-like particles in laboratory and astrophysical conditions. The arion is a strictly massless pseudoscalar Goldstone particle \(a\), which was introduced in 1982, in the papers [14-16] of Prof. A. A. Anselm and his co-authors. The density of the Lagrangian function for the arion field, which is interacting with the electromagnetic field, is usually written in the canonical form: \[L=\frac{\sqrt{-g}}{2}g^{nm}\frac{\partial a}{\partial x^{n}}\frac{\partial a} {\partial x^{m}}-\frac{\sqrt{-g}}{16\pi}F_{nm}F^{nm}-\frac{g_{a\gamma}\sqrt{- g}}{4}F_{nm}\tilde{F}^{nm}a, \tag{1}\] where \(a\) is the pseudoscalar field of the arion, \(g_{a\gamma}\)- coupling constant of the arion with the electromagnetic field, \(F_{nm}\) - electromagnetic field tensor, \(g\) is the determinant of the metric tensor, \(\tilde{F}^{nm}=E^{mnik}F_{ik}/2\) and \(E^{nmik}=e^{nmik}/\sqrt{-g}\) - the axial absolutely antisymmetric Levi-Civita tensor, and \(e^{nmik}\) is the axial absolutely antisymmetric Levi-Civita symbol, and \(e^{0123}=+1\). When studying the processes of arion generation under astrophysical conditions the influence of the gravitational field, in general, cannot be neglected. Therefore, first of all, let us estimate the magnitude of this influence and the size of the region of space in which this influence can be significant. Since the distribution of matter in neutron stars is close to spherically symmetric, then we will use the Schwarzschild solution as the metric tensor of the pseudo-Riemannian space. 
In the paper [18] it is shown that the Schwarzschild solution can be the external solution for a non-spherically symmetric distribution of matter. The most convenient coordinates for writing down this solution are isotropic coordinates [17]. The nonzero components of the metric tensor in these coordinates have the form: \[g_{00}=\frac{(4r-r_{g})^{2}}{(4r+r_{g})^{2}},\qquad g_{xx}=g_{yy}=g_{zz}=-(1+\frac{r_{g}}{4r})^{4}, \tag{2}\] where \(r_{g}\) is the Schwarzschild radius and the notation \(r=\sqrt{x^{2}+y^{2}+z^{2}}\) is used for convenience of reference. Let us estimate the value of the ratio \(r_{g}/r\), included in expressions (2), on the surface of a neutron star. The radius of a neutron star in recent times [19] is taken to be \(R_{s}=10\) km, and the mass \(M\) in the interval from 0.1 to 1.0 solar masses. Thus, in our problem, the ratio \(r_{g}/R_{s}\sim 0.01\). Since at \(r>R_{s}\) the ratio \(r_{g}/r\) takes smaller values than the ratio \(r_{g}/R_{s}\), the influence of the gravitational field is small and limited to a small neighborhood \(r\leq 100R_{s}\) of the neutron star. Therefore, as a first approximation in the small parameter \(r_{g}/r\), we will use in our problem the metric tensor of the Minkowski space: \(g_{00}=1,\ g_{11}=g_{22}=g_{33}=-1\). The equations of the arion and electromagnetic fields derived from the Lagrangian density (1), in the Minkowski space, have the form: \[\Box\,a=-g_{a\gamma}\,({\bf E}\cdot{\bf B}),\qquad\frac{\partial F^{nm}}{\partial x^{m}}=-4\pi g_{a\gamma}\,\tilde{F}^{nm}\,\frac{\partial a}{\partial x^{m}}, \tag{3}\] where \({\bf B}\) is the magnetic field induction vector and \({\bf E}\) is the electric field strength vector. According to the first equation of (3), the sources of arions are electromagnetic fields and waves in which the first invariant of the electromagnetic field tensor is different from zero. In nature there exist such configurations of electromagnetic fields and waves for which this invariant is different from zero in large, by earthly standards, volumes of space. These are, for example, magnetodipole radiation of pulsars and magnetars, propagating in the constant galactic or intergalactic magnetic field. And although the induction of galactic and intergalactic magnetic fields is relatively small, at \(B\sim 10^{-6}\) Gs, the volumes occupied by these fields are significant. Therefore, it is of undoubted interest to study this process. Let us consider it in detail.

## 2. Calculation of arion emission arising from the propagation of magnetodipole waves of a pulsar or a magnetar in a constant magnetic field

Suppose that at a point with radius vector \({\bf r}={\bf r}_{0}=\{x_{0},y_{0},z_{0}\}\) there is a pulsar or magnetar with a magnetic dipole moment \({\bf m}\), which rotates at a frequency of \(\omega\) around an axis that makes an angle of \(\psi\) with the vector \({\bf m}\).
Then this source emits an electromagnetic wave of frequency \(\omega\), whose components [8] have the form: \[{\bf B}({\bf R},\tau)=\frac{3({\bf m}(\tau)\cdot{\bf R}){\bf R}-R^{2}{\bf m}(\tau)}{R^{5}}-\frac{\dot{\bf m}(\tau)}{cR^{2}}+ \tag{4}\] \[+\frac{3(\dot{\bf m}(\tau)\cdot{\bf R}){\bf R}}{cR^{4}}+\frac{(\ddot{\bf m}(\tau)\cdot{\bf R}){\bf R}-R^{2}\ddot{\bf m}(\tau)}{c^{2}R^{3}},\] \[{\bf E}({\bf R},\tau)=\frac{({\bf R}\times\dot{\bf m}(\tau))}{cR^{3}}+\frac{({\bf R}\times\ddot{\bf m}(\tau))}{c^{2}R^{2}},\] where \({\bf R}={\bf r}-{\bf r}_{0}\), the dot above the vector means the derivative on retarded time \(\tau=t-R/c\), and the pulsar or the magnetar magnetic dipole moment in the following task has the components: \[{\bf m}(\tau)=|{\bf m}|\{\cos(\omega\tau)\sin\psi,\ \sin(\omega\tau)\sin\psi,\ \cos\psi\}. \tag{5}\] In the coordinate system, the origin of which is placed in the point \({\bf r}={\bf r}_{0}\), the directional diagram of electromagnetic radiation (4) has the form \[\overline{\left(\frac{dI}{d\Omega}\right)}_{EMW}=\frac{[{\bf r}\ddot{\bf m}]^{2}}{4\pi c^{3}r^{2}}.\] Integrating this expression over the angles \(\theta\) and \(\varphi\), we obtain the total intensity of radiation of a pulsar or a magnetar: \[\overline{I}_{EMW}=\frac{2\ddot{\bf m}^{2}}{3c^{3}}=\frac{2ck^{4}{\bf m}^{2}\sin^{2}\psi}{3}. \tag{6}\] As shown in [2,8], the electromagnetic wave (4) can emit arions and dilatons with frequencies \(\omega\) and \(2\omega\). Let there be a constant magnetic field in the considered region of space \[{\bf B}_{0}=\{B_{0x},B_{0y},B_{0z}\}. \tag{7}\] Substituting expressions (4) and (7) into equation (3) and considering that \[{\bf E}({\bf r},\tau)=\frac{({\bf R}\times\dot{\bf m}(\tau))}{cR^{3}}+\frac{({\bf R}\times\ddot{\bf m}(\tau))}{c^{2}R^{2}}=rot_{{\bf r}_{0}}\frac{\dot{\bf m}(\tau)}{cR},\] where \(rot_{{\bf r}_{0}}\) is the curl operator taken with respect to the coordinates of the vector \({\bf r}_{0}=\{x_{0},y_{0},z_{0}\}\), we can write the equation for the arion field in the form \[\Box\,a=-g_{a\gamma}\left({\bf B}_{0}\cdot rot_{{\bf r}_{0}}\frac{\dot{\bf m}(\tau)}{cR}\right). \tag{8}\] Solving this equation, one obtains the arion field \(a({\bf r},t)\) generated by the magnetodipole wave in the constant magnetic field (7); this solution is denoted below as expression (12).
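A short numerical sketch of the rotating-dipole fields of Eqs. (4)-(5) is given below; the dipole moment, rotation frequency, inclination angle, and observation point are arbitrary assumed values used only for illustration.

```python
import numpy as np

c = 2.998e10                      # speed of light, cm/s (Gaussian units)

def dipole_moment(tau, m0, omega, psi):
    """m(tau) of Eq. (5) and its first and second time derivatives."""
    s, co = np.sin(omega * tau), np.cos(omega * tau)
    m   = m0 * np.array([co * np.sin(psi), s * np.sin(psi), np.cos(psi)])
    md  = m0 * omega    * np.array([-s * np.sin(psi),  co * np.sin(psi), 0.0])
    mdd = m0 * omega**2 * np.array([-co * np.sin(psi), -s * np.sin(psi), 0.0])
    return m, md, mdd

def fields(r, r0, t, m0, omega, psi):
    """B and E of the magnetodipole wave, Eq. (4), at the point r and time t."""
    R = np.asarray(r, float) - np.asarray(r0, float)
    Rn = np.linalg.norm(R)
    tau = t - Rn / c                                   # retarded time
    m, md, mdd = dipole_moment(tau, m0, omega, psi)
    B = ((3 * np.dot(m, R) * R - Rn**2 * m) / Rn**5
         - md / (c * Rn**2)
         + 3 * np.dot(md, R) * R / (c * Rn**4)
         + (np.dot(mdd, R) * R - Rn**2 * mdd) / (c**2 * Rn**3))
    E = np.cross(R, md) / (c * Rn**3) + np.cross(R, mdd) / (c**2 * Rn**2)
    return B, E

# Example with assumed magnetar-like parameters (illustrative only)
B_vec, E_vec = fields(r=[1e15, 0, 0], r0=[0, 0, 0], t=0.0,
                      m0=1e33, omega=2 * np.pi, psi=np.pi / 4)
print(B_vec, E_vec)
```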
## 3. Angular distribution of arion radiation

Following [8], we write the angular distribution of the arion emission, arising from the propagation of the electromagnetic wave (4) of a pulsar or magnetar through the constant magnetic field (7), in the form: \[\frac{dI}{d\Omega}=r\,({\bf r}\cdot{\bf W}), \tag{13}\] where \({\bf W}\) is the energy flux density vector, related to the components of the energy-momentum tensor \(T^{nk}\) by \(W^{\beta}=cT^{0\beta}\).
Using the expression for the energy-momentum tensor of the free arion field \[T^{nk}=g^{np}g^{km}\{\frac{\partial a}{\partial x^{p}}\frac{\partial a}{ \partial x^{m}}-\frac{1}{2}g^{nk}\frac{\partial a}{\partial x^{p}}\frac{ \partial a}{\partial x^{m}}g^{pm}\},\] from expression (13) we obtain: \[\frac{dI}{d\Omega}=-cr^{2}\frac{\partial a}{\partial r}\frac{\partial a}{ \partial x^{0}}.\] Substituting into this relation the expression (12) for the arion field and leaving asymptotically the main part, after time averaging, we reduce it to the form: \[\frac{\overline{dI}}{d\Omega}=\frac{g_{a\gamma}^{2}c|{\bf m}|^{2}k^{4}r^{2} \sin^{2}\psi}{8}\Big{\{}(B_{0x}^{2}+B_{0y}^{2})\cos^{2}\theta+B_{0z}^{2}\sin ^{2}\theta- \tag{14}\] \[-2B_{0z}\sin\theta\cos\theta[B_{0x}\cos\varphi+B_{0y}\sin\varphi]\Big{\}}.\] Let us now find the amount of arion energy \(\overline{I}\) emitted in all directions per unit time: \[\overline{I}_{AR}=\int\limits_{0}^{\pi}\sin\theta d\theta\int\limits_{0}^{2 \pi}d\varphi\frac{\overline{dI}}{d\Omega}=\frac{\pi g_{a\gamma}^{2}c|{\bf m}| ^{2}k^{4}r^{2}\sin^{2}\psi}{12}\Big{\{}B_{0x}^{2}+B_{0y}^{2}+2B_{0z}^{2}\Big{\}}. \tag{15}\] The different dependence of this result on the components of the constant magnetic field (7) is due to the fact that the directivity diagram of the magnetodipole radiation of a pulsar or a magnetar is not spherically symmetric. ## 4. Conclusion It follows from expression (12) that the amplitude of the born arion wave at a distance from the magnetodipole source of the pulsar or magnetar (\(r\rightarrow\infty\)) in the considered case tends to a constant value. The intensity of the arion emission in the solid angle element and the amount of arion energy \(\overline{I}\), emitted in all directions per unit time grow quadratically with increasing distance, traveled by the magnetodipole radiation of a pulsar or magnetar in a constant magnetic field (7). Such growth of the energy of the born arion wave is due to the fact that in our problem the constant magnetic field (7) is set in the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in the form (7) only in regions of finite dimensions, outside of which the force lines of their induction vectors are curved. Therefore, one can apply the results (12), (14) and (15) only in the region of space, for which \(r\leq L_{coh}<\infty\), where \(L_{coh}\) is the coherence length - the distance at which the lines of force of the induction vector can be considered as straight lines. Let us define the conversion factor of electromagnetic radiation energy into arion energy \(\beta\) as the ratio of the intensity of radiation of arions \(\overline{I}_{ar}\) to the intensity of the electromagnetic field of a pulsar or magnetar \(\overline{I}_{EMW}\): \[\beta=\frac{\overline{I}_{AR}}{\overline{I}_{EMW}}=\frac{\pi L_{cog}^{2}g_{a \gamma}^{2}}{8}\Big{\{}B_{0x}^{2}+B_{0y}^{2}+2B_{0z}^{2}\Big{\}}.\] This coefficient reflects the properties of the converter, i.e., the electromagnetic field. It should be noted, that according to the second equation of the system (3), along with the conversion of the energy of electromagnetic waves into the energy of arions, the opposite process takes place. If \(\beta<<1\), then the reverse process can be ignored; if the coefficient of \(\beta\) is close to unity, it is necessary to consider both processes together. At present the value of the photon-arion coupling constant \(g_{a\gamma}\) is unknown. 
Let us roughly estimate the value of this constant using the example of the generation of arions by the electric field of pulsar radiation as it propagates through the magnetic field of our Galaxy. According to modern data [20,21], the radius of the Galaxy is 16 kiloparsecs. It is accepted to distinguish the large-scale component of the magnetic field of the Galaxy (with a homogeneity scale of the order of hundreds and thousands of parsecs) and a fluctuation component with a wide range of scales (from fractions of a parsec to hundreds of parsecs). The induction of the large-scale magnetic field of the Galaxy is estimated to be 2-3 \(\mu\)Gs. Based on these data, it is reasonable to put \(L_{coh}\sim 10^{3}\) pc \(=3\cdot 10^{21}\) cm and \(B_{0}\sim 10^{-6}\) Gs. Then from the condition \(\beta\ll 1\) we obtain: \(g_{a\gamma}^{2}<8\cdot 10^{-31}\)\(\frac{cm}{erg}\). Equation (15) was derived by us in the Gaussian system of units of measurement. Let us rewrite this estimate in the natural system of units. If we take into account that 1 erg \(=624\) GeV and 1 cm \(=0.5\cdot 10^{14}\) GeV\({}^{-1}\), then we get: \(g_{a\gamma}<0.9\cdot 10^{-10}\) GeV\({}^{-1}\). This estimate coincides with the estimates of the coupling constant \(g_{a\gamma}\) obtained in [2,22-25].

## Acknowledgements

This study was conducted within the scientific program of the National Center for Physics and Mathematics, section #5 «Particle Physics and Cosmology». Stage 2023-2025.
The influence of the gravitational fields of pulsars and magnetars on the arion emission during the propagation of magnetodipole waves in a constant magnetic field has been evaluated. The solution of the equation was obtained, and the flux of arions emitted by the magnetodipole waves during their propagation in a constant magnetic field was found. It is shown that the amplitude of the generated arion wave tends to a constant value as the distance from the source of the magnetodipole radiation of the pulsar or magnetar increases. The intensity of the arion emission and the arion energy I emitted per unit time grow quadratically with the distance traveled by the magnetodipole radiation in the constant magnetic field. This growth of the energy is due to the assumption that the constant magnetic field is defined over the whole space. In reality, the galactic and intergalactic magnetic fields can be represented in this form only in regions of space of finite dimensions. Therefore, these results can be applied only in a region of space for which r ≤ L_coh < ∞, where L_coh is the coherence length.
2301.13624
A Kubernetes-Based Edge Architecture for Controlling the Trajectory of a Resource-Constrained Aerial Robot by Enabling Model Predictive Control
In recent years, cloud and edge architectures have gained tremendous focus for offloading computationally heavy applications. From machine learning and Internet of Thing (IOT) to industrial procedures and robotics, cloud computing have been used extensively for data processing and storage purposes, thanks to its "infinite" resources. On the other hand, cloud computing is characterized by long time delays due to the long distance between the cloud servers and the machine requesting the resources. In contrast, edge computing provides almost real-time services since edge servers are located significantly closer to the source of data. This capability sets edge computing as an ideal option for real-time applications, like high level control, for resource-constrained platforms. In order to utilize the edge resources, several technologies, with basic ones as containers and orchestrators like Kubernetes, have been developed to provide an environment with many features, based on each application's requirements. In this context, this works presents the implementation and evaluation of a novel edge architecture based on Kubernetes orchestration for controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle (UAV) by enabling Model Predictive Control (MPC).
Achilleas Santi Seisa, Sumeet Gajanan Satpute, George Nikolakopoulos
2023-01-31T13:32:36
http://arxiv.org/abs/2301.13624v1
A Kubernetes-Based Edge Architecture for Controlling the Trajectory of a Resource-Constrained Aerial Robot by Enabling Model Predictive Control ###### Abstract In recent years, cloud and edge architectures have gained tremendous focus for offloading computationally heavy applications. From machine learning and Internet of Thing (IOT) to industrial procedures and robotics, cloud computing have been used extensively for data processing and storage purposes, thanks to its "infinite" resources. On the other hand, cloud computing is characterized by long time delays due to the long distance between the cloud servers and the machine requesting the resources. In contrast, edge computing provides almost real-time services since edge servers are located significantly closer to the source of data. This capability sets edge computing as an ideal option for real-time applications, like high level control, for resource-constrained platforms. In order to utilize the edge resources, several technologies, with basic ones as containers and orchestrators like Kubernetes, have been developed to provide an environment with many features, based on each application's requirements. In this context, this works presents the implementation and evaluation of a novel edge architecture based on Kubernetes orchestration for controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle (UAV) by enabling Model Predictive Control (MPC). Robotics; Edge Computing; Kubernetes; UAV; MPC. ## I Introduction Nowadays, as technology is progressing and the need for computational resources is continuously increasing, different computation layers have been evolved. We can differ these layers into four distinct categories, cloud, fog, edge and devices, while each one of them has its own different characteristics and utilization. At the same time, all of them can be combined with each other to create an ecosystem for the utilization of external computational resources. As these technologies mature, researchers and engineers use them more and more to offload their applications, due to the capabilities and features they provide [1]. Additionally, since the mentioned computation layers have attracted tremendous focus, several state-of-the-art technologies have been developed and are promising to revolutionize many technological fields, like containerized applications and containers' orchestrators. In this framework, robotics can take a huge advantage of the external resources and many resource-constrained platforms can make the most out of them, since they will be able to run algorithms that they can not run on their onboard processors. In this context, edge is emerging since it can provide tremendous resources for enhancing the performance, and the overall efficiency of autonomous operations, and at the same time minimize the travel time delays when transmitting data from UAVs, like in [2] and [3]. Thus, edge can be established as a promising solution for time critical operations, like offloading computationally costly controllers, of resource-constrained platforms. In this article, we propose an architecture where we offload the Model Predictive Control method, which is a relatively heavy controller, to the edge, as a containerized application, and we use Kubernetes for managing the containers. Researchers seek to utilize the advantages of edge computing for the benefit of robots. However, researchers have to overcome some limitations and challenges, in order to use these technologies universally. 
In [4] an architecture consisting of all four computation layers is used to offload the localization and mapping problem from robots. In this case, the edge operates as a layer between sensor devices, gateways, and cloud servers for enhancing the quality of services, while in [5] edge is used to design a search planner algorithm using deep learning for UAVs. In [6] edge and cloud were utilized in terms of storage and computational resources for deep robot learning, including object recognition, grasp planning and localization. Some works that utilized edge for robotic applications by implementing Kubernetes or container based architectures can be summarized as follows. In [7] researchers tried to automate, by using Kubernetes orchestration, the decision on where to place the expected workload among edge, fog and cloud for robotic applications. In [8], a methodology based on docker and Kubernetes for ROS-based robotic applications is presented. In that case, the architecture was evaluated by experimental results obtained by a mobile robot interacting with an industrial agile production chain. In the works mentioned above, the approach regarding edge computing is mainly towards non-time-critical tasks. High level controllers, however, must operate almost in real time. In [9] an architecture was proposed where the control method consists of the combination of an LQR running on the device and an MPC running the optimization both on the edge and the cloud, while in [10] two complementary MPCs are running, one on a local edge and one on the cloud. In comparison to our proposed work, these articles partly offload the MPC method to the edge and are focused on evaluating the system in terms of the related latency and the related uncertainty for several cases. The motivation behind this work is to fill the gap regarding edge-enabled robotics. Even though edge computing has proven to be a promising technology to expand the autonomous capabilities of resource-constrained robotic platforms, especially when combined with 5G networks, the research that has been done around this area is relatively limited. Despite the fact that the great advantage of edge computing is the ability of enabling almost real-time operation by offloading the computing process to the edge, most researchers have focused on utilizing edge for offline procedures. Thus, the contribution of this article is to present a novel edge architecture for enabling the time-sensitive operation of controlling the trajectory of a resource-constrained UAV in real time through MPC. Control is one of the basic components of autonomy, thus performance and efficiency are the main criteria when choosing a controller. Model predictive controllers are widely used on UAVs due to their characteristics and optimal behavior, but they are computationally costly; thus some UAVs, deployed with light processors, like the Raspberry Pi, cannot handle them. By utilizing the proposed architecture, we will be able to use edge resources in order to offload the MPC and control resource-constrained platforms by closing the loop over the edge. Additionally, we are using Kubernetes orchestration, which provides best practices for cloud and edge applications but introduces some challenges that we have to overcome. The rest of the article unfolds in the following Sections. In Section II, we describe the Kubernetes-based edge architecture, while in Section III, we give a brief overview of the UAV and MPC model.
In Section IV, we present the simulation results of the edge architecture in terms of time delays and performance. Finally, in Section V, we conclude the article by highlighting the main points of this work, and we propose future directions.

## II Kubernetes Edge Architecture

The proposed architecture is based on Kubernetes. Kubernetes is a container orchestrator, developed by Google. Before we start analyzing the Kubernetes-based architecture, we have to describe the containers developed for this work. Afterward, we are going to present the system's architecture and the Robotic Operating System (ROS) framework that was utilized for the UAV-MPC system. Finally, we will describe the communication layer and network. Containers are based on software that creates an operating environment and are deployed only with the necessary and chosen packages, libraries and dependencies, in order to run a specific application. The application running in this form is called a containerized application. Containers are based on images that are the nominal state of containers before they get deployed. An image can be used to deploy many containers. For our system, we deployed two docker containers. One container is responsible for running the controller and all the necessary libraries and dependencies for its smooth and reliable operation, and the other is responsible for running the ROS master, which takes care of the communication between the ROS nodes. To deploy the two docker containers, we had to develop two different docker images. For both images, we used a ROS Noetic on Ubuntu 20.04 entrypoint, and we built on top of them. For the first image, we included several ROS packages and libraries, as well as an optimization engine for the MPC containerized application, while for the second image we just needed to run the ROS master. For a more complex application, we could split it into more containers, each of which would be assigned a specific task. Once we had developed the docker images, we were able to deploy the docker containers inside the Kubernetes cluster. We decided to use Kubernetes due to the features it provides for our containers. Kubernetes gives us the capability to manage our containers and automates the whole process of deploying the containers, assigning them resources, and checking their health. The services and features that Kubernetes provides can be extremely helpful for our application, since they give us the chance to manage and monitor our system in an optimal way, and they become even more helpful when we have to deploy more containers and the system becomes more and more complex. The Kubernetes architecture is depicted in Fig. 1. The top part of the Kubernetes cluster consists of four components that make up the master node. These are the kube-apiserver that exposes the Kubernetes Application Programming Interface (API), the etcd that is used as the backing store for all cluster data, the kube-scheduler that watches for newly created pods with no assigned node and selects a node for them to run on, and finally the kube-controller that runs the control processes. Besides the master node, we have the worker nodes. In our case, we have only one worker node, inside which we have deployed our containers in the form of pods. A pod is the basic operational unit of Kubernetes and consists of a set of containers that share storage, network resources, and specifications on how to run the containers.

Fig. 1: Diagram of the Kubernetes-based edge architecture for the UAV-MPC system.
The two pods we have deployed are related to the ROS master and the MPC, respectively. Apart from the pods, the worker node contains the kubelet, which makes sure that containers are running in a pod, and the kube-proxy, which makes sure that network rules are met. From Fig. 1 we can describe the block diagram of the closed-loop system. Let us assume that at time step \(k\), the UAV dynamics node generates a signal \(x(k)\) that describes the states of the UAV. These states are the position, velocity, and orientation of the UAV. This signal will arrive at the MPC ROS node, running on the edge, with a delay, due to the time the signal needs to travel from the UAV to the edge. Thus, the signal carrying the information of \(x(k)\) will arrive at the MPC ROS node as \(x(k-d_{1})\), while at the same time, another signal regarding the desired states for the UAV will arrive at the MPC ROS node as a reference signal \(r(k)\). The controller has to process this information and generate the command signal \(u(k-d_{2})\). Given that \(u(k-d_{2})\) corresponds to the signals \(x(k-d_{1})\) and \(r(k)\), the variable \(d_{2}\) is related to \(d_{1}\), as well as to the execution time of the MPC. This command signal has to travel from the edge to the UAV in order to close the loop of the system. Thus, the signal arriving at the UAV is denoted as \(u(k-d_{3})\), where \(d_{3}\) is related to \(d_{1}\), \(d_{2}\), as well as to the time the command signal needs to travel from the edge to the UAV. Finally, the output of the system is denoted as \(y(k)\). The communication between the UAV model simulation and the controller is handled by ROS. There should be only one ROS master, and every ROS node has to register to that ROS master to be able to run and communicate with other ROS nodes. When two ROS nodes want to exchange data by subscribing and publishing to the same ROS topic, the ROS master opens a random port and allows the two ROS nodes to communicate through that port. Since ROS assigns random ports, different every time, the nodes running on the edge and the nodes running on the robot try to communicate with each other through these ports. Since the containers are deployed on the Kubernetes cluster of the edge machine (host), we have to specify which ports the containers should expose for communication purposes. The challenge occurs because the ROS master does not assign specific ports for communication, but assigns them randomly. To overcome this issue, we used the host network option when we deployed the containers on the Kubernetes cluster, in order to expose all the host ports to the containers and vice versa. That way, the containers can access all the traffic at the host machine's ports and the host machine can access the traffic at the containers' ports. Now, the data coming from the UAV to the edge machine can be forwarded inside the containers, and the data from the containerized applications can be exposed to the edge machine and then sent to the UAV. In this paper, both the edge machine and the UAV are on the same network, thus we were able to use Wi-Fi. Wi-Fi can be an efficient network option for the communication between the UAV and the edge machine and has been used widely, but it is not the optimal solution. 5G is a promising technology that will provide essential features for secure, robust and reliable networking, and can be the field of study for future works.
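As an illustration of the kind of deployment just described, the following is a minimal sketch using the official Kubernetes Python client to create one of the two pods with host networking enabled; the pod name, image tag, and launch command are hypothetical placeholders, not the actual artifacts of this work.

```python
# Minimal sketch: deploy an MPC pod with host networking on a Kubernetes cluster.
# Image name, pod name, and command are illustrative placeholders.
from kubernetes import client, config

def deploy_mpc_pod():
    # Load credentials from the local kubeconfig (e.g., on the edge machine).
    config.load_kube_config()
    api = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="mpc-controller", labels={"app": "uav-mpc"}),
        spec=client.V1PodSpec(
            # hostNetwork exposes the host's ports to the container and vice versa,
            # which is the workaround used for ROS's randomly assigned ports.
            host_network=True,
            containers=[
                client.V1Container(
                    name="mpc",
                    image="example/mpc-ros:noetic",                      # hypothetical image tag
                    command=["roslaunch", "mpc_pkg", "mpc.launch"],      # hypothetical launch file
                )
            ],
        ),
    )
    api.create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    deploy_mpc_pod()
```

A second, analogous pod running only the ROS master would be created the same way, so that both pods and the UAV share the host machine's network namespace.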
## III Model Predictive Control Model predictive control is a standard method for high-level control of UAVs, and there are many works describing in detail the behavior of the controller and the kinematics of the UAV, like [11], where the authors proposed a UAV model that can withstand disturbances by stabilizing its position in space. The preference for MPC over other common controllers, like PID or LQR, is explained by its predictive behavior and performance. Based on these characteristics, we were prompted to use this controller for controlling the trajectory of a UAV, and we were motivated to offload it to the edge so that resource-constrained UAVs, and robots in general, that cannot afford to run this controller onboard would be able to take advantage of the benefits of MPC. The UAV model and the implementation of the MPC for this work are based on [12]. ### _UAV Model_ In order to develop the MPC methodology, the first step is to describe the UAV kinematics model, which is presented in Eq. 1. \[\dot{p}(t)=v(t) \tag{1}\] \[\dot{v}(t)=R_{x,y}(\theta,\phi)\begin{bmatrix}0\\ 0\\ T\end{bmatrix}+\begin{bmatrix}0\\ 0\\ -g\end{bmatrix}-\begin{bmatrix}A_{x}&0&0\\ 0&A_{y}&0\\ 0&0&A_{z}\end{bmatrix}v(t)\] \[\dot{\phi}(t)=\frac{1}{\tau_{\phi}}(K_{\phi}\phi_{d}(t)-\phi(t))\] \[\dot{\theta}(t)=\frac{1}{\tau_{\theta}}(K_{\theta}\theta_{d}(t)-\theta(t)),\] where \(p=[p_{x},p_{y},p_{z}]^{T}\) and \(v=[v_{x},v_{y},v_{z}]^{T}\) are the position and the linear velocity, respectively, expressed in the world frame (\(\mathbb{W}\)), as depicted in Fig. 2 (Coordinate frames, where \(\mathbb{W}\) and \(\mathbb{B}\) represent the world and body coordinate frames, respectively, in the Gazebo simulation environment). We denote by \(R(\phi(t),\theta(t))\in SO(3)\) the rotation matrix that represents the attitude. \(\phi\) and \(\theta\in[-\pi,\pi]\) are the roll and pitch angles, while \(T\geq 0\) describes the total thrust. The acceleration depends on the magnitude and angle of the thrust vector, the gravity \(g\), and the linear damping terms \(A_{x},A_{y},A_{z}\in\mathbb{R}\). \(\phi_{d}\) and \(\theta_{d}\in\mathbb{R}\) are the desired roll and pitch inputs with gains \(K_{\phi}\) and \(K_{\theta}\in\mathbb{R}\), and time constants \(\tau_{\phi}\) and \(\tau_{\theta}\in\mathbb{R}\). ### _Cost Function_ The next step in the MPC methodology is to present the cost function. \(x=[p,v,\phi,\theta]^{T}\) and \(u=[T,\phi_{d},\theta_{d}]^{T}\) represent the UAV's state vector and the control input, respectively. The sampling time of the system is \(\delta_{t}\in\mathbb{Z}^{+}\), while the forward Euler method is used for each time instance \((k+1|k)\). The predictive behavior of the MPC is based on the prediction horizon, which considers a specified number of steps into the future and is represented as \(N\). In order to minimize the cost function, an optimizer is assigned to find the optimal set of control actions. The cost function assigns a cost to the configuration of states and inputs at the current time and along the prediction horizon. \(x_{k+j|k}\) represents the predicted states at the time step \(k+j\), produced at the time step \(k\), while \(u_{k+j|k}\) represents the corresponding control actions. Furthermore, \(x_{k}\) represents the predicted states and \(u_{k}\) represents the corresponding control inputs along the prediction horizon. The equation describing the cost function is presented in Eq. 2.
\[J=\sum_{j=1}^{N}\underbrace{(x_{d}-x_{k+j|k})^{T}Q_{x}(x_{d}-x_{k+j|k})}_{\text{state cost}}+\cdots \tag{2}\] The blue line represents the real trajectory of the UAV, while the other line represents the reference points for the desired trajectory of the UAV. From these figures, we can notice that the UAV simulation model can successfully follow the desired trajectory. The time delays seem to not have a significant effect on the performance of the controller. In the next figures, we investigate these time delays in more detail. Fig. 4 depicts the Euclidean error between the UAV position and the reference point at each time step of the Kubernetes-based architectures, for the circular, spiral, and helical trajectories. The blue line represents the error and the red line represents the error tolerance. The controller is responsible for keeping the error below the tolerance value. If the error goes above the tolerance, the controller will correct it and the UAV will continue following the desired trajectory. The tolerance was set at 0.4 meters for each axis, thus \(\sqrt{0.68}\) meters in total. In Fig. 5, the deviations of the different types of time delays for the spiral trajectory are presented. The left panel depicts the deviation of the travel time of a signal from the UAV to the edge, the middle panel the deviation of the execution time of the MPC, and the right panel the deviation of the travel time of a signal from the edge to the UAV. The average measured travel time from the UAV to the edge is 0.0089 seconds, and the maximum is 0.1700 seconds. For the execution time, the average measured time is 0.0141 seconds and the maximum is 0.2200 seconds. Finally, for the travel time from the edge to the UAV, the average measured travel time is 0.0161 seconds and the maximum is 0.2600 seconds. To end the evaluation of the system, we measured the resource usage for the execution of the MPC on the edge, and the data are depicted in Fig. 6. The red bars represent the time the CPU spends executing processes in user-space (us). Similarly, the blue bars represent the time spent on running system kernel-space (sy) processes. From the figure we can observe that by utilizing the edge machine, the edge does not get overloaded, and the maximum reached value is 84.50%, which occurs when the values of us and sy are 46.70% and 37.80% respectively. The maximum values that us and sy reach independently are 54.40% and 37.80% respectively, and their average values are 20.225% for the us and 4.582% for the sy. From these measurements and figures, we can notice that the relatively large assigned edge resources are adequate to run the computationally demanding controller, but even in this case, during the \(35^{th}\) second of the trajectory, the usage of resources was almost at \(90\%\). Fig. 4: Euclidean error between UAV position and reference point for each time step of the Kubernetes-based architectures, for A) the circular, B) spiral, and C) helical trajectory; the blue line represents the error and the red line represents the error tolerance. Fig. 5: Deviation of the different types of time delays for the spiral trajectory: A) deviation for the travel time of a signal from the UAV to the edge, B) deviation for the execution time of the MPC, C) deviation for the travel time of a signal from the edge to the UAV. Fig. 6: Edge resources usage during the spiral trajectory; the red bar represents the user space and the blue bar represents the system kernel-space.
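The execution times reported above correspond to solving, at every control step, a receding-horizon optimization of the kind sketched below. This is a minimal sketch, not the controller of [12]: it assumes a simplified, linearized double-integrator position model and the cvxpy solver, and the weights, bounds, and horizon are illustrative assumptions.

```python
# Minimal receding-horizon (MPC) sketch in the spirit of Eq. 2, using a
# simplified double-integrator position model; all parameters are illustrative.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                                            # sampling time and horizon
A = np.block([[np.eye(3), dt * np.eye(3)], [np.zeros((3, 3)), np.eye(3)]])
B = np.block([[0.5 * dt**2 * np.eye(3)], [dt * np.eye(3)]])
Qx, Qu = np.eye(6), 0.1 * np.eye(3)                        # state and input weights

def mpc_step(x0, x_ref):
    """Solve one horizon given the current state x0 and a constant reference x_ref."""
    x = cp.Variable((6, N + 1))
    u = cp.Variable((3, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for j in range(N):
        cost += cp.quad_form(x_ref - x[:, j + 1], Qx)      # state cost
        cost += cp.quad_form(u[:, j], Qu)                  # input cost
        constraints += [x[:, j + 1] == A @ x[:, j] + B @ u[:, j],
                        cp.norm(u[:, j], "inf") <= 5.0]    # crude input bound
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u[:, 0].value                                   # apply only the first input

if __name__ == "__main__":
    x0 = np.zeros(6)
    x_ref = np.array([1.0, 1.0, 1.5, 0.0, 0.0, 0.0])       # hover at a waypoint
    print(mpc_step(x0, x_ref))
```

Even a small quadratic program like this must be re-solved at every time step within the control period, which is why offloading it from a lightweight onboard processor to the edge matters.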
These measurements suggest that computationally light units, like UAVs' onboard processors, might not be able to execute this controller smoothly. ## V Conclusions and Future Work In this work, we presented a novel edge architecture to control the trajectory of a UAV through the edge by enabling an MPC methodology. This architecture can be beneficial for expanding the computational capabilities of resource-constrained platforms like aerial robots, which in many cases are deployed with light onboard microprocessors, like a Raspberry Pi, and cannot afford to run computationally expensive processes onboard. By utilizing the edge, we were able to offload the controller and control the trajectory of the UAV in real-time by closing the loop of the system through the edge. Furthermore, we evaluated the proposed architecture through a series of experiments, through which we examined the performance of the system, as well as the overall time delays. Edge computing is a promising technology for the field of robotics. In the current article, we offloaded the computationally costly MPC, while future works can move towards offloading other time-sensitive robotic applications, like sensor fusion for online perception, or offloading applications that require many resources in order to operate in real-time, like map merging from multiple agents. The end goal would be to create an ecosystem through which multiple agents will be able not only to use edge resources to expand their autonomy capacity, but also to communicate and collaborate through the edge.
In recent years, cloud and edge architectures have attracted considerable attention for offloading heavy computational workloads. From machine learning and IoT to industrial processes and robotics, cloud computing has been widely used for data processing and storage thanks to its "infinite" resources. On the other hand, cloud computing exhibits long delays caused by the large distance between the cloud servers and the machines requesting resources. In contrast, edge computing provides near real-time services, since edge servers are located close to the data sources. This capability makes it an ideal choice for real-time applications, such as high-level control, and for resource-constrained platforms. To utilize edge resources, technologies such as containers and Kubernetes have been developed, which help provide environments with many features that can be tailored to the needs of an application. In this paper, based on Kubernetes orchestration, a novel edge
2309.08340
Formalizing the $\infty$-Categorical Yoneda Lemma
Formalized $1$-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of $1$-dimensional categories, and $\infty$-category theory has thusfar proved unamenable to computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl-Shulman's simplicial extension of homotopy type theory for synthetic $\infty$-category theory, we provide the first formalizations of results from $\infty$-category theory. This includes in particular a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory, a theorem which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from $\infty$-category theory, such as the theory of limits and colimits and adjunctions.
Nikolai Kudasov, Emily Riehl, Jonathan Weinberger
2023-09-15T11:51:40
http://arxiv.org/abs/2309.08340v3
# Formalizing the \(\infty\)-categorical Yoneda lemma ###### Abstract. The field of category theory seeks to unify and generalize concepts and constructions across different areas of mathematics, from algebra to geometry to topology and also to logic and theoretical computer science. Formalized 1-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of 1-dimensional categories, and \(\infty\)-category theory has thusfar proved unamenable to computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl-Shulman's simplicial extension of homotopy type theory for synthetic \(\infty\)-category theory, we provide the first formalizations of results from \(\infty\)-category theory. This includes in particular a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory, a theorem which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from \(\infty\)-category theory, such as the theory of limits and colimits and adjunctions. Keywords: category theory, homotopy type theory, formalization, directed type theory, \(\infty\)-category theory, Yoneda lemma.
Models -- such as _quasi-categories_ [18; 48], _complete Segal spaces_ [85], and _Segal categories_ [46; 79] -- are used at various places in the literature, and theorems are often proven "analytically," in reference to the "coordinates" of a particular model. A computer formalizer is thus faced with an unattractive choice of either * picking one model, which must then be used for the entire library of subsequent results, or * formalizing multiple models and the comparisons between them [50], which significantly increases the workload.1 Footnote 1: Experts in the field often prefer to work “model-independently” [9; 58], which can be done either by using \(\infty\)-category theory itself as the ambient metatheory, or by deploying the formalism of \(\infty\)_-cosmoi_ (_i.e._, \(\infty\)-categories of \(\infty\)-categories) [92], but either approach would require some initial formalization in a specific model of \(\infty\)-categories. ### Reimagining the foundations of \(\infty\)-category theory A radical-sounding alternative, which we argue is worth taking seriously, is to change the foundation system. The article "Could \(\infty\)-category theory be taught to undergraduates?" [86] argues that it is possible to narrow the gap between \(\infty\)-category theory and ordinary \(1\)-category theory by replacing the traditional foundations with a directed extension of _homotopy type theory_ [93; 106]. The basis for this claim is the paper [89] (and the follow-up work of [11; 25; 133]), which develops the basic theory of \(\infty\)-categories in an alternative foundational framework established there. The _simplicial type theory_ is a formal framework that permits one to make the following intuitive definitions rigorous: * A type is a _pre-\(\infty\)-category_ (aka a _Segal type_) if every composable pair of arrows has a unique composite. * A pre-\(\infty\)-category is an _\(\infty\)-category_ (aka a _Rezk type_) if equalities are equivalent to isomorphisms. * A type is an _\(\infty\)-groupoid_ (aka a _discrete type_) if equalities are equivalent to arrows. * A type family is a _left fibration_ (aka a _covariant type family_) if every arrow in the base type has a unique lift with specified domain.
The intended model of this formal system is Rezk's complete Segal spaces [85], in which pre-\(\infty\)-categories correspond to Segal spaces [85; 99], \(\infty\)-categories correspond to complete Segal spaces [85], and left fibrations correspond to left fibrations [31; 52]. This extends Shulman's model of homotopy theory, in which types are interpreted as Reedy fibrant simplicial spaces [100]. The phrases "for all...there exists...unique" are meant in the standard sense of homotopy type theory [93; 106]. In particular, following the homotopical extension of the Curry-Howard correspondence [7; 47; 108], _uniqueness_ means _contractibility_ -- which is precisely what is true semantically for the composition operation in an \(\infty\)-category.2 Footnote 2: Those familiar with the Segal space model of \(\infty\)-categories may be surprised that the definition of a pre-\(\infty\)-category refers to binary sequences of composable arrows and not also composable triples and quadruples and so on. Here the binary statement subsumes the \(n\)-ary ones for \(n\geq 0\) because it is interpreted _intramally_ in the model as the assertion that the internal mapping types mapping out of the \(2\)-simplex and out of its inner horn are equivalent [89, Section 5]. More generally, Shulman has proven that homotopy type theory has semantics in any \(\infty\)-topos [88; 102] and Weinberger [115] has shown that the simplicial type theory of [89] can be interpreted in simplicial objects in any \(\infty\)-topos [64; 82; 104]. ### Formalizing \(\infty\)-category theory It is relatively standard practice in homotopy type theory to formalize results while writing the corresponding paper proofs.3 At the time of the writing of [89] it was not possible to formalize any of its results because the work is done in an _extension_ of traditional homotopy type theory, with multi-level contexts and a new type-forming operation providing _extension types_. But this is possible now thanks to the proof assistant Rzk developed by Kudasov [54]. Thus, finally one can formally test the claims made in the article [86]. This is the content of our project. Footnote 3: While homotopy type theory cannot be formalized in Lean or Idris [22] because their kernels assume that all types are sets, contradicting Voevodsky’s _univalence axiom_, it can be done in Agda, Coq, and a growing variety of experimental proof assistants. In SS2, we describe the simplicial type theory, and in SS3 we introduce synthetic \(\infty\)-category theory. In SS4, we describe our formalization of the \(\infty\)-categorical Yoneda lemma in Rzk. In SS6, we compare this formalization with parallel formalizations of the \(1\)-categorical Yoneda lemma in the agda-unimath [94] and mathlib libraries. In SS7, we offer a few takeaways from this formalization project and describe related future work. ### Related work A roughly parallel synthetic framework for \(\infty\)-category theory has been proposed by Weaver and Licata using bicubical sets as an intended model [112]. An alternate approach to formalizing higher category is within the framework of _two-level type theory_, using extensional type theory as a meta-theory, see e.g. [6; 53; 109]. A conceptual discussion of the approach behind simplicial type theory with comparisons is done by Buchholtz in [23]. A self-contained overview of both syntactic and semantic aspects of simplicial type theory is given in the master's thesis of Bakke [10]. 
Furthermore, there has been extensive work on directed type theories [55; 76; 77; 111], though most of this was not created to describe \(\infty\)-category theory. Other work includes domain-specific languages for two-dimensional categories [3; 39] and virtual equipments [73]. There also exist other type theories capturing infinite-dimensional categorical structures. Notable developments include [34; 36; 37; 38; 4; 19; 34]. However, these systems differ from the one that we are using in two major aspects: their setup and their purposes. Our framework features a synthetic and homotopical theory of \(\infty\)-categories with the aim of developing a range of classical \(\infty\)-categorical results. The other frameworks tend to involve a specific model of either strict or weak infinite-dimensional categories. Aside from direct applications to category theory, new kinds of type theories have been devised for the purpose of doing differential topology and stable homotopy theory synthetically, making heavy use of type-theoretic _modalities_[28; 68; 69; 70; 95; 96; 101]. ## 2. The simplicial type theory In [89], Riehl-Shulman develop a type theory to reason synthetically about \(\infty\)-categories. The key features of their theory is that \(\infty\)-categories can be described in relatively simple terms, and all the results are invariant under _homotopy equivalence_--the right notion of equivalence of \(\infty\)-categories. This is in stark contrast to the more traditional and familiar developments of \(\infty\)-category theory in set theory, cf. e.g. [49; 60]. We will give an overview of the structure and features of the simplicial type theory, with an emphasis on its use for synthetic \(\infty\)-category theory. The theory builds on Martin-Lof intensional type theory (MLTT) [63] whose _intensional identity types_ have homotopically well-behaved _path objects_ as models [105; 7; 51; 87]. This homotopical interpretation, paired with Voevodsky's _univalence axiom_, which allows one to treat homotopy equivalent types as (intensionally) equal, goes by the name _homotopy type theory_ (HoTT) or _univalent foundations_ cf. [106; 108; 7]. Homotopy type theory may be thought of as a synthetic theory for \(\infty\)-groupoids (aka homotopy types) and thus provides a fertile basis for the simplicial type theory. ### Base theory: Martin-Lof intensional type theory Overview. The base theory is intensional Martin-Lof type theory [63] with \(\Sigma\)-, \(\Pi\)-, and identity types. Though Rzx works with a universe types to implement dependent types, this assumption is not necessary [89, Remark 2.5].4 To stay in line with the notation of [89], we also rotate a dependent type \(x:A\vdash C(x)\) as a _type family_\(C:A\to\mathcal{U}\), pretending \(\mathcal{U}\) is a universe type (without being explicit about universe hierarchy or different levels of size). Footnote 4: In particular, though convenient for certain applications, univalence is not necessary for our development. \(\Sigma\)**-types (05-sigma).** The type formers \(\Sigma\) and \(\Pi\), resp., generalize existential and universal quantification, resp., as follows. For \(C:A\to\mathcal{U}\), the _dependent sum_\(\sum_{x:A}C(x)\) is the type consisting of dependent pairs \((a,c)\) with \(a:A\) and \(c:C(a)\). This is also referred to as the _total type_ of the family \(C\). The \(\Sigma\)-type comes with the usual set of rules for formation, introduction (by forming dependent pairs), and elimination (by projecting to the factors). 
We also assume the \(\beta\)- and \(\eta\)-computation rules to be satisfied, meaning that introduction and elimination are inverse to each other in the strictest possible way, _i.e., up to judgmental equality._ The family \(C:A\to\mathcal{U}\) can alternatively be encoded as a map \(p_{C}:\widetilde{C}\to A\), with the total type \(\widetilde{C}:=\sum_{x:A}C(x)\), and the projection \(p_{C}(a,c):=a\). The total type is then the "sum" of all the fibers of \(C\), canonically indexed by \(A\). If \(C\) is a constant family, _i.e._, \(C(a)\equiv B\) for all \(a:A\) and some type \(B\), the \(\Sigma\)-type becomes the _cartesian product_\(A\times B\). \(\Pi\)**-types.** Of particular interest is the notion of _dependent function_ or _section_ of a family \(C:A\to\mathcal{U}\), which is an assignment \(\sigma\) to each element \(x:A\) of some element \(\sigma(x):C(x)\) in the corresponding fiber. This is reified as the _dependent product_ type \(\prod_{x:A}C(x)\), with introduction rule given by \(\lambda\)-abstraction and elimination rule by function application. Likewise, we require the \(\beta\)- and \(\eta\)-rules to hold judgmentally. When the type family \(C\) is constant with value some type \(B\), the dependent function type reduces to an ordinary function type, denoted by \(A\to B\) or \(B^{A}\). **Identity types (01-paths).** The _Martin-Lof identity types_ (\(a=_{A}b\)) for a type \(A\) and elements \(a,b:A\) capture the idea that equality between terms of a type is witnessed proof-relevantly by a term \(p:a=_{A}b\). In the homotopical models, identity types get interpreted as _path objects_ in the sense of homotopical algebra [7], so elements \(p:(a=_{A}b)\) can be seen as paths from \(a\) to \(b\) in \(A\). The introduction rule is given by the canonical _reflexivity terms_\(\text{refl}_{a}:(a=_{A}a)\) witnessing self-identity. Elimination is given by the _path induction principle_. Intuitively, this says the following. First, for a type \(A\), fix \(a:A\). Then, for a family \(C:(\sum_{x:A}(a=_{A}x))\to\mathcal{U}\) the type of sections \(\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\) is equivalent to \(C(a,\text{refl}_{a})\) via the map \[\text{evrefl}_{C,a}:\left(\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\right)\to C (a,\text{refl}_{a}).\] In particular, given \(d:C(a,\text{refl}_{a})\) we obtain a section \[\text{ind}_{=}(d):\prod_{(y,p):\sum_{x:A}(a=_{A}x)}C(y,p)\] (ind-path) such that \(\text{ind}_{=}(d)(a,\text{refl}_{a})\equiv d\). Thus, for type families over (based) path types, to produce a section of the whole family it suffices to produce a section only at the reflexivity loop. **The homotopy theory of types.** The following notions are due to Voevodsky [108], cf. also [7; 51; 87; 105]. According to the idea that terms \(p:(a=_{A}b)\) encode paths in a type we want to express when a type is homotopically trivial _aka contractible_. This is witnessed by the type \[\operatorname{isContr}(A)\coloneqq\sum_{x:A}\ \prod_{y:A}(x=_{A}y).\] (is-contr) A contractible type \(A\) comes equipped with a canonical inhabitant, the _center of contraction_\(c_{A}:A\)(contraction-center) and a homotopy \(H_{A}:\prod_{y:A}(c_{A}=_{A}y)\)(contracting-htpy). Contractible types are equivalent to the point or _terminal type_\(\mathbf{1}\), see (contr-iff-terminal-map-is-equiv). 
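As a point of comparison for readers who prefer to see these notions in a proof assistant, the following is a minimal sketch in plain Lean 4 (not Rzk, and not part of the formalization described in this paper) of the contractibility data just defined; all names are illustrative.

```lean
-- A sketch of the contractibility structure above: a center of contraction
-- together with a contracting homotopy. Not part of the Rzk library.
universe u

structure IsContr (A : Type u) : Type u where
  center      : A                     -- the center of contraction c_A
  contraction : ∀ y : A, center = y   -- the contracting homotopy H_A

-- Example: the unit type is contractible.
def isContrUnit : IsContr Unit :=
  { center := (), contraction := fun _ => rfl }

-- Reading off the two components, as in (contraction-center) and (contracting-htpy).
def contractionCenter {A : Type u} (h : IsContr A) : A := h.center
def contractingHtpy {A : Type u} (h : IsContr A) : ∀ y : A, h.center = y := h.contraction
```

In the homotopical reading of the simplicial type theory, the field `contraction` is data rather than a mere proposition, which is why it is packaged as a structure rather than an existential statement.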
Traditional homotopy theory involves constructions on topological spaces that are invariant under _homotopy equivalence_, which is a pair of maps between two spaces in opposite directions whose composites are homotopic to the identity. Translating this into type theory, a map \(f:A\to B\) between types is a (homotopy) equivalence when there is a term inhabiting the type \[\operatorname{isEquiv}(f)\coloneqq\sum_{g:B\to A}(g\circ f=_{A\to A} \operatorname{id}_{A})\times\sum_{h:B\to A}(f\circ h=_{B\to B} \operatorname{id}_{B}).\] (is-equiv) This type is a _proposition_ in the sense that it is contractible whenever if it is inhabited. By (93, 12.1.3), this can equivalently be captured by the type \[\operatorname{isProp}(A)\coloneqq\prod_{x,y:A}(x=_{A}y).\] (is-prop) When a type \(A\) is a proposition (_i.e._, \(\operatorname{isProp}(A)\) is inhabited), then it can be treated as a _mere property_ (up to homotopy, _i.e._, a contractible and thus trivial choice of data) rather than additional _structure_. The fact that \(\operatorname{isEquiv}(f)\) is always a proposition hence means that being an equivalence is, in fact, a _property_ of a map, much in line with the expected intuition. It turns out there is a further equivalent characterization of when a map is an equivalence in that sense, namely if and only if all its _fibers_\(\operatorname{fib}(f,b)\coloneqq\sum_{x:A}(f(x)=_{A}b)\) are contractible, _i.e._, \[\operatorname{isEquiv}(f)\simeq\prod_{b:B}\ \operatorname{isContr}( \operatorname{fib}(f,b)).\] If type families are understood as _fibrations_\(p:\sum_{x:A}C(x)\to A\), then equivalences in this sense behave like _trivial fibrations_ (\(\operatorname{10-trivial-fibrations}\)) whose fibers are all contractible. These homotopical interpretations of Martin Lof's dependent type theory open up a whole area of research doing homotopy theory synthetically, cf. (93; 106). **Function extensionality (FunExt).** While we do not require the univalence axiom in our formalization, we do make use of _function extensionality_, which is one of its consequences (93, Theorem 17.3.2): we will postulate the map \[\operatorname{htpy-eq}:\prod_{X:\mathcal{U}}\prod_{A:X\to\mathcal{U}}\prod_{f \not\in\mathbb{I}_{X}A}(f=g)\to\prod_{x:X}(fx=gx)\] defined via path induction by \[\operatorname{htpy-eq}(X,A,f,f,\operatorname{refl}_{f},x)\coloneqq \operatorname{refl}_{f^{X}}\] (htpy-eq) is an equivalence, _i.e._, there exists a term \[\operatorname{funext}:\prod_{X:\mathcal{U}}\prod_{A:X\to\mathcal{U}}\prod_{f :g:\mathbb{I}_{X}A}\operatorname{isEquiv}(\operatorname{htpy-eq}_{X,A,f,g}).\] **The \(\infty\)-groupoid structure on a type.** By (iterated) path induction one can prove the existence of functions \[\prod_{x,y:A}(x=_{A}y)\to(y=_{A}x),\] (rev) \[\prod_{x,y:x:A}(x=_{A}y)\to(y=_{A}z)\to(x=_{A}z)\] (concat) serving to reverse paths as well as concatenating them. One can show that these satisfy the expected groupoid laws, but only up to propositional equality, endowing every type canonically with the structure of a (weak) \(\infty\)-groupoid, cf. (47; 107). While \(\infty\)-groupoids are special cases of \(\infty\)-categories, in a general \(\infty\)-category we require directed "arrows" that are not necessarily reversible. This suggests the following extensions of the underlying type theory. ### Extension 1: Cube and tope layers Intuitively, a synthetic \(\infty\)-category is a type where directed arrows can be composed up to homotopy. 
To reason about directed arrows, their composites, and other shapes arising from this the idea is to introduce an appropriate shape theory to the type theory. The shapes will be part of the contexts so that type families and sections can depend on them. Each shape is viewed as a _subshape_ embedded inside a _higher dimensional_ (_directed_) _cube_. This is reminiscent of the basic setup of _cubical type theory_(5; 16; 27; 30; 78). For the _cube layer_, consider a new pretype \(2\), equipped with two distinct elements \(0,1:2\), and a binary relation \(\leq\) making \(2\) into a strict partial order with bottom element \(0\) and top element \(1\). The Lawvere theory generated by \(2\) constitutes the cube layer, _i.e._, the cubes are exactly the finite powers \(2^{n}\), with \(2^{0}\equiv 1\). The partial order is captured by a new judgment form, called a tope: \[x,y:2+x\leq y\operatorname{\text{\rm\,{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rmrm{\rm{ \rm{ }}}}}}}}}}}}}\] The _tope layer_ is a finitary intuitionistic logic over the cube layer. The intention is to carve out _subshapes_\(\Phi\subseteq I\) of a cube \(I\) by describing it via a formula on the cube variables. In general: if \(I\) is a cube and \(\varphi\) is a tope in context \(t:I\), written as a judgment \(t:I\vdash\varphi\) tope, then \(\Phi\coloneqq\{t:I\ |\ \ \varphi\}\) is the _shape_ corresponding to \(\varphi\). This way, one can define important shapes such as the \(n\)-simplex \(\Delta^{n}\), for \(n\in\mathbb{N}\), its boundaries \(\partial\Delta^{n}\), the _\((n,k)\)-horns_\(\Lambda^{n}_{k}\) for \(k\leq n\), and more. E.g., we have the following formulas, cf. also Figure 1: \[\Delta^{1} \coloneqq\{t:2\mid\top\}\subseteq 2\] \[\partial\Delta^{1} \coloneqq\{t:2\mid(t\equiv 0)\vee(t\equiv 1)\}\subseteq 2\] \[\Delta^{2} \coloneqq\{(t,s):2^{2}\mid\top\}\subseteq 2^{2}\] \[\partial\Delta^{2} \coloneqq\{(t,s):2^{2}\mid(s\equiv 0)\vee(s\equiv t)\vee(t\equiv 1 )\}\subseteq 2^{2}\] \[\Lambda^{2}_{1} \coloneqq\{(t,s):2^{2}\mid(s\equiv 0)\vee(t\equiv 1)\}\subseteq 2^{2}\] (03-simplicial-type-theory) Like in cubical type theory, we connect the standard type layer with the cube and tope layer through a three-part context, which allows type families \(A\) to depend on a cube context \(\Xi\), a tope context \(\Phi\), and a type context \(\Gamma\), written as \(\Xi\mid\Phi\mid\Gamma\vdash A\). The directed arrows in a type are now defined using our interval shape \(\Delta^{1}\) and another feature to be introduced, the _extension types_. ### Extension 2: Extension types (04-extension-types) Let \(\Phi\subseteq\Psi\) be an inclusion of subshapes, in cube context \(I\). An _extension type_ as introduced in (Kurz, 2017), originally due to unpublished work by Lumsdaine and Shulman, captures the strict extension of a section defined on the smaller shape \(\Phi\) to the larger shape \(\Psi\). Concretely, assume given a type family \(I\mid\Psi\mid\Gamma\vdash A\) together with a partial section \(t:I\mid\Phi\mid\Gamma\vdash a(t):A(t)\) over the subshape \(\Phi\subseteq\Psi\). Then, the corresponding _extension type_ has as elements the strict extensions \(t:I\mid\Psi\mid\Gamma\vdash b(t):A(t)\) such that \(a|_{\Phi}\equiv b\). We denote the extension type by \(\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\). In case \(A\) is a constant type, the ensuing extension type will be written as \(\big{\langle}\Psi\to A\big{|}_{a}^{\Phi}\big{\rangle}\). 
In analogy to ordinary type-to-type function types, we can emulate shape-to-type function types by instantiating extension types by the "empty tope" \(\varphi\coloneqq\bot\) and the canonical term \(\mathsf{rec}_{\bot}\), allowing us to define the functions of shape \(\Psi\) into type \(A\) as \(\Psi\to A\coloneqq\big{\langle}\Psi\to A\big{|}_{\mathsf{rec}_{\bot}}^{\bot} \big{\rangle}\), and similarly for the dependent case. **Extension extensionality (ExtExt).** Just as in (Kurz, 2017, Section 4), to make the extension types homotopically well-behaved, we also assume a version of function extensionality for extension types. In Rzk, we postulate an axiom that allows us to extend relative homotopies between extensions of a given partial section. Namely, let \(I\) be a cube and \(\Phi\subseteq\Psi\subseteq I\) be a shape inclusion. Consider a type family \(A:\Psi\to\mathcal{U}\) with a partial section \(a:\prod_{t:\Phi}A(t)\). As in the case of dependent functions, we may use path induction to define a map for any \(f,g:\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\) of the form \[\mathsf{exthtpyeq}_{A,a,f,g}:(f=g)\to\big{\langle}\prod_{t:\Psi}f(t)=g(t) \big{|}_{\mathsf{ref}}^{\Phi}\big{\rangle}.\] (ext-htpy-eq) As we did for function extensionality, we assert an extension extensionality axiom of the following form. **Axiom 2.1** (ExtExt).: _For any \(A\), \(a\), \(f\), and \(g\) as above, the map (ext-htpy-eq) is an equivalence, i.e., there exists a term_ \[\mathsf{extext}:\prod_{A,a,f,g}\mathsf{isEquiv}(\mathsf{exthtpyeq}_{A,a,f,g})\] In the original paper, Axiom 2.1 is derived from another version of the extension extensionality axiom (Kurz, 2017, Axiom 4.6). This version is analogous to the version of function extensionality that states that, given a family \(B:A\to\mathcal{U}\), then if every fiber \(Bx\) is contractible, then so is the type \(\prod_{xA}Bx\). In the case of function extensionality, this is known to be equivalent to the version of function extensionality (FunExt). However, it is not known whether this equivalence also holds for extension types. Therefore, Riehl-Shulman assume the version appearing as (Kurz, 2017, Axiom 4.6) since they show that the other desired versions, such as (ExtExt), can be derived from it. The axiom (Kurz, 2017, Axiom 4.6) is called _relative function extensionality_ (or _extension extensionality_), and it reads as follows. Let \(\Phi\subseteq\Psi\subseteq I\) be a shape inclusion and let \(A:\Psi\to\mathcal{U}\) be a family such that each \(A(t)\) is contractible. Then, given \(a:\prod_{t:\Phi}A(t)\), the type \(\big{\langle}\prod_{t:\Psi}A(t)\big{|}_{a}^{\Phi}\big{\rangle}\) is contractible. Our version (ExtExt) then follow as one of the consequences established in (Kurz, 2017, Proposition 4.8). ## 3. Synthetic \(\infty\)-categories Simplicial type theory is a combination of the homotopical interpretation of Martin-Lof type theory with strict shape and extension types. As demonstrated in (Kurz, 2017, 2018, 2019, 2020), this framework is powerful enough to develop \(\infty\)-category theory synthetically, within a genuinely homotopical framework. ### Pre-\(\infty\)-categories and \(\infty\)-categories **Hom types (hom).** Let \(A\) be a type with elements \(a,b:A\). The \(1\)-simplex serves as the shape for our directed arrows. 
Thus, the type of _(directed) arrows_ or _homomorphisms_ from \(a\) to \(b\) is the extension type \[\hom_{A}(a,b)\coloneqq\left\langle\Delta^{1}\to A\big{|}_{[a,b]}^{\partial\Delta^{1}}\right\rangle,\] (hom) where \(t:\partial\Delta^{1}\vdash[a,b](t):A\) is the term with \([a,b](0)\equiv a\) and \([a,b](1)\equiv b\). Figure 1. Some important shapes **Identity arrows (id-arr).** By the introduction rule for extension types, any element \(x:A\) induces an arrow \(\mathsf{id}_{x}:\hom_{A}(x,x)\), \(\mathsf{id}_{x}:\equiv\lambda s.x\). **Segal types (05-segal-types).** We can now impose a synthetic version of the Segal condition [42, 99], witnessing that a type admits unique composition of arrows up to contractibility. **Definition 3.1** (Segal types; is-segal).: A type is a _Segal type_ or a _pre-\(\infty\)-category_ if any composable pair of arrows has a unique composite, _i.e._, given a pair of arrows \(f:\hom_{A}(x,y)\) and \(g:\hom_{A}(y,z)\) the type of fillers \[\sum_{h:\hom_{A}(x,z)}\hom_{A}^{2}(f,g;h)\] is contractible, where \[\hom_{A}^{2}(f,g;h):=\left\langle\Delta^{2}\to A\big{|}_{[f,g;h]}^{\partial\Delta^{2}}\right\rangle\] (hom2) is the type of 2-simplices bounded by a fixed choice of 1-simplices: \[\mathsf{isSegal}(A):=\prod_{\begin{subarray}{c}x,y,z:A\\ f:\hom_{A}(x,y)\\ g:\hom_{A}(y,z)\end{subarray}}\mathsf{isContr}\left(\sum_{h:\hom_{A}(x,z)}\hom_{A}^{2}(f,g;h)\right)\] This means, up to homotopy there exists a unique arrow \(g\circ f:\hom_{A}(x,z)\) acting as a composite of \(g\) and \(f\), together with a 2-cell \(\mathsf{comp}_{A,f,g}:\hom_{A}^{2}(f,g;g\circ f)\) that witnesses that the 2-simplex bounded by \(f\), \(g\), and \(g\circ f\) is filled, cf. Figure 2. One can show that the Segal condition of Definition 3.1 can be reexpressed by saying that the type \(A\) is local with respect to an inner horn inclusion. **Theorem 3.2** (is-segal-iff-is-local-horn-inclusion).: _A type \(A\) is Segal if and only if restriction along the shape inclusion \(\Lambda_{1}^{2}\subseteq\Delta^{2}\) is an equivalence_ \[\mathsf{isEquiv}\left(\mathsf{res}_{A}:A^{\Delta^{2}}\to A^{\Lambda_{1}^{2}}\right).\] Natural transformations between functions \(f,g:A\to B\) are simply arrows \(\varphi:\hom_{A\to B}(f,g)\) (nat-trans). This definition automatically yields the expected naturality squares without having to specify them, (Kurz, 2017, Proposition 6.6). Further instances of automatic naturality appear in §3.2 and §5.
### Covariant families of discrete types **Discrete types (07-discrete).** We are also interested in synthetic \(\infty\)-groupoids, meaning \(\infty\)-categories where every arrow is invertible.7 E.g., one can show that for any Segal type \(A\), the hom types \(\hom_{A}(x,y)\) are necessarily discrete. This matches up with the traditional theory and the intuition that \(\infty\)-categories are (weakly enriched) in _spaces_ as modeled by \(\infty\)-groupoids (Kurz, 2017). Footnote 7: As shown in (Kurz, 2017, Section 7), one can drop the Rezkness assumption. The groupoidal condition can be understood as a kind of _discreteness condition_. To make it precise, we need a comparison of paths with arrows, similarly to our treatment of Rezk completeness, cf. Section 3.1. Namely, for a type \(A\) we define \[\operatorname{arreg}_{A}:\prod_{x,yA}(x=_{A}y)\to\hom_{A}(x,y)\] (ar-eq) via path induction by \[\operatorname{arreg}_{A}(x,x,\operatorname{ref}_{x}):\rightleftharpoons \operatorname{id}_{x}.\] **Definition 3.4** (Discrete types; is-discrete).: A type \(A\) is _discrete_ or an \(\infty\)-_groupoid_ if \[\operatorname{isDiscrete}(A):=\prod_{x,y:A}\operatorname{isEquiv}( \operatorname{arreg}_{A,x,y}).\] This definition also yields the desired notion in the Segal object models (Kurz, 2017). **Covariant families (08-covariant).** The \(\infty\)-categorical Yoneda lemma deals with families or _fibrations_ of discrete types indexed by a Segal type. These families \(C:A\to\mathcal{U}\) are supposed to be _functorial_ in the sense that an arrow \(f:\hom_{A}(x,y)\) in the base \(A\) should give a functor \(f_{*}:\rightleftharpoons_{C,f}:C(x)\to C(y)\) between the fibers. This is achieved by the notion of _covariant family_, corresponding to what semantically is often called _left fibration_, after (Kurz, 2017, SS8) and (Kurz, 2017, Section 2.1), see also (Kurz, 2017, 2017, 2017, 2017, 2017, 2017, 2017, 2017). To define it, we have to introduce a _dependent_ version of the hom type, capturing arrows in the total type \(\sum_{x:A}C(x)\) that get mapped to a prescribed arrow in the base. This can, once again, conveniently be formulated using extension types. **Definition 3.5** (Dependent hom; dhom).: Let \(C:A\to\mathcal{U}\) be a type family. For elements \(x,y:A\), let \(f:\hom_{A}(x,y)\) be an arrow. For elements in the fibers \(u:C(x)\) and \(v:C(y)\), the corresponding _dependent hom type_ from \(u\) to \(v\) is given by the extension type \[\operatorname{dhom}_{C(f)}(u,v):=\left\langle\prod_{t:\Lambda^{1}}C(f(t)) \right\rangle_{[u,\operatorname{gl}]}^{\Delta_{A}^{t}}.\] The defining property for a covariant family \(C:A\to\mathcal{U}\) says that we can lift an arrow \(f:\hom_{A}(x,y)\) in the base, given a point \(u:C(x)\) in the fiber over its source, to a dependent arrow \[\operatorname{lift}_{C,f,u}:\operatorname{dhom}_{C(f)}(u,f_{*}u)\] (covariant-transport) lying over \(f\), and more so, uniquely up to homotopy, cf. Figure 3. **Definition 3.6** (Covariant family; is-covariant).: Let \(C:A\to\mathcal{U}\) be a type family. We say \(C\) is _covariant_ if the following proposition is inhabited: \[\prod_{x,yA}\prod_{f\hom_{A}(x,y)}\prod_{uC(x)}\operatorname{isContr}\left( \sum_{uC(y)}\operatorname{dhom}_{C(f)}(u,v)\right)\] As shown in (Kurz, 2017, Section 8), it turns out that, over a Segal type \(A\), covariant families \(C:A\to\mathcal{U}\) behave in the expected ways. 
Namely, the fibers are all discrete (Kurz, 2017, Proposition 8.18), and they are _functorial_ in the following sense: for elements \(x,y,z:A\), morphisms \(f:\hom_{A}(x,y)\), \(g:\hom_{A}(y,z)\), and an element in the fiber \(u:C(x)\), we get identifications \[\begin{array}{l}g_{*}(f_{*}u)=(g\circ f)_{*}u\text{ and }(\operatorname{id}_{x})_{ *}u=u,\\ (\text{id-arr-covariant-transport})\end{array}\] see (Kurz, 2017, Proposition 8.16). A fundamental example are the _representable_ covariant families of the form \(\hom_{A}(x,-):A\to\mathcal{U}\), for \(x:A\), when \(A\) is Segal (is-segal-representable-is-covariant). Furthermore, between covariant families \(C,D:A\to\mathcal{U}\), a fiberwise map \(\varphi:\prod_{x:A}C(x)\to D(x)\) is automatically _natural_: for any arrow \(f:\hom_{A}(x,y)\) and element \(u:C(x)\) we have an identification \[f_{*}(\varphi_{x}(u))=\varphi_{y}(f_{*}u).\] (naturality-covariant-fiberwise-transformation) ## 4. The Rzk proof assistant Kudasov has implemented Rzk, the first proof assistant to support simplicial type theory. In our work since the spring of 2023, we have been developing a library for Rzk, formalizing a range of result from Riehl-Sublman's work (Kurz, 2017), and Figure 3. A covariant family \(C:A\to\mathcal{U}\) in addition to that also the required results from standard homotopy type theory (Steintein, 1977; Steintein, 1978). The formalizations in this paper have been written for and checked with Rzx version 0.5.4. Syntax of the formalized code in Rzx is very close to the underlying theory, allowing for easy correspondence between statements in the code and on paper. However, proofs in Rzx may appear too detailed sometimes, since, being experimental, Rzx has not yet evolved enough syntactic sugar or tools like implicit parameters, tactics, or type classes to simplify proof construction. ### Key features of Rzx The kernel of Rzx provides the following primitive notions and capabilities. **The universes.** There are three fixed universes: \(\mathtt{CUBE}\) of cubes, \(\mathtt{TOPE}\) of topes, and \(\mathtt{U}\) of types. In Rzx, \(\mathtt{U}\) contains \(\mathtt{CUBE}\), \(\mathtt{TOPE}\), and itself, implying an unsound "type in type." We consider such simplification acceptable for the time being and hope that Rzx will evolve proper universes in the future. **Type logic.** This includes both cubes and topes. Rzx has built-in unit cube 1 and directed interval cube 2 (with points \(\ast_{1}:\ 1\) and \(\emptyset_{2}:\ 2\) and \(1_{2}:\ 2\) correspondingly), standard topes (Steintein, 1977, Figure 2), and the inequality topes \(\ast\leq\mathtt{t}\) required for simplicial type theory. When done on paper, proofs in the tope logic are usually omitted as trivial, and we find that in our formalization project, only fairly small problems have been required for coherence checks. In fact, the most complicated checks we have are involved in the formalization of the general result for currying for extension types ((Steintein, 1977, Theorem 4.1); \(\mathtt{curry\text{-}uncurry}\)). Rzx offers full automation of the tope layer which helps keep the Rzx syntax and proofs simpler and automatically locate coherence issues in proof terms. **Dependent types.** Rzx offers basic support for dependent functions (x : A) \(\rightarrow\) B x, dependent pairs \(\Sigma\) (x : A), B x, and identity types x =_{A} y. 
While at the moment of writing there is no support for user-defined implicit arguments, identity types allow the indices to be implicit with terms x = y and refl instead of x =_{A} y and refl_{x : A}, resp. Absence of implicit arguments and full type inference in Rzx induces more explicit and verbose proof terms. **Extension types.** Rzx offers two separate concepts that result in support for extension types. First, Rzx allows dependent functions to have a cube or a shape (a cube restricted with a tope) argument. These correspond to extension types restricted to \(\mathtt{rec}_{\bot}\) at the empty tope \(\bot\). Second, any type is allowed to have a "refinement," specifying values for arbitrary tope constraints. For example, a type A \([\phi\mapsto\mathtt{x},\ \psi\mapsto\mathtt{y}]\) is a refinement of type A such that values of this type are _computationally_ equal to x when \(\phi\) holds and to y when \(\psi\) holds. Of course, x and y must agree when (\(\phi\wedge\psi\)) holds. Refinements of a type are its subtypes, and A is considered equivalent to A \([\bot\mapsto\mathtt{recBOT}]\). The subtyping is handled by Rzx, removing the need for explicit type coercions. Combining functions depending on shapes with such refinements yields extension types. For instance, \(\hom_{A}(a,b):=\left\langle\Delta^{1}\to A\middle|\middle|\middle|\middle|a \middle|\middle|\middle|\middle|\middle|\middle|\middle|\middle|\right\rangle\) (hom) is defined as follows: #def hom (A : U) (x y : A) : U := (t : \(\Delta^{1}\)) \(\rightarrow\) A [t \(\equiv\emptyset_{2}\mapsto\mathtt{x}\), t \(\equiv\)1\({}_{2}\mapsto\mathtt{y}\)] **Sections and variables.** Rzx supports Coq-style sections,9 allowing for locally defined assumptions (variables) which are automatically added as parameters to definitions that use them. Importantly, Rzx features a mechanism for detecting implicitly used assumptions to avoid accidental circular reasoning in definitions. To ensure that such an implicit assumption is not accidental, Rzx has the uses syntax. For example, the Yoneda lemma (yoneda-lemma) itself is specified in a way that makes explicit the use of function extensionality (funext). #def yoneda-lemma uses (funext) ( A : U) ( is-segal-A : is-segal A) ( a : A) ( C : A \(\rightarrow\) U) ( is-covariant-C : is-covariant A C) : is-equiv ((z : A) \(\rightarrow\) hom A a z \(\rightarrow\) C z) (C a) (evid A a C) :=... Footnote 9: [https://rzk-lang.github.io/rzk/v0.5.4/reference/sections.rzk/](https://rzk-lang.github.io/rzk/v0.5.4/reference/sections.rzk/) We find this particularly useful for readability, highlighting the use of axioms or other assumptions (e.g. that a certain type is Segal). ## 5. The \(\infty\)-categorical Yoneda lemma in Rzx **The statement.** In 1-category theory, the Yoneda lemma says the following. Given a category \(\mathbb{A}\) and a _copresheaf_10_on \(\mathbb{A}\), _i.e._, a functor \(C:\mathbb{A}\rightarrow\mathsf{Set}\), for any \(a\in\operatorname{ob}(\mathbb{A})\) there is a bijection Footnote 10: Our formalization considers the covariant case as well as the dual contravariant case. \[\hom_{[\mathbb{A},\operatorname{Set}]}(\hom_{\mathbb{A}}(a,-),C)\cong C(a)\] mapping a natural transformation \(\alpha\) to \(\alpha(a,\operatorname{id}_{a})\in C(a)\), naturally in both \(C\) and \(a\). In the \(\infty\)-categorical setting, sets get replaced by \(\infty\)-groupoids. Copresheaves are modeled by left fibrations _aka_ covariant families. 
Accordingly, the synthetic \(\infty\)-categorical Yoneda lemma reads as follows. **Theorem 5.1** (yoneda-lemma).: _Let \(C:A\to\mathcal{U}\) be a covariant family over a Segal type \(A\). Then, for any \(a:A\) the map_ \[\mathsf{evid}_{A,a,C}:\left(\prod_{z:A}\hom_{A}(a,z)\to C(z)\right)\to C(a)\] _defined by_ \[\mathsf{evid}_{A,a,C}(\varphi):=\varphi(a,\mathrm{id}_{a})\] (evid) _is an equivalence._ Note this result holds for pre-\(\infty\)-categories, not just \(\infty\)-categories. For semantical accounts of the \(\infty\)-categorical Yoneda lemma see e.g. [52, 64, 84, 91], and [92, Chapter 5]. **The proof.** An inverse map is constructed using the covariant transport of \(C\). Namely, we define \[\mathsf{yon}_{A,a,C}:C(a)\to\left(\prod_{z:A}\hom_{A}(a,z)\to C(z)\right)\] by \[\mathsf{yon}_{A,a,C}(u):=\lambda z.\lambda f.f_{*}u.\] (yon) In the \(1\)-categorical Yoneda lemma, a crucial part of the work is to show that the terms \(\mathsf{yon}_{A,a,C}(u)\) defined by the inverse map are actually morphisms of presheaves, _i.e._, natural transformations. In our setting, this is, in fact, an automatic consequence of both \(C\) and \(\hom_{A}(a,-):A\to\mathcal{U}\) being covariant. In the formalization, considerable work goes into showing that a type \(A\) is Segal if and only if the type families \(\hom_{A}(a,-)\) are covariant; for the implication relevant here, see (is-segal-representable-is-covariant). In more detail, if \(A\) is a Segal type, \(a:A\), and \(C:A\to\mathcal{U}\) is a covariant family, let \(\varphi:\prod_{z:A}\hom_{A}(a,z)\to C(z)\) be a family of maps. Then for any \(x,y:A\) and arrows \(f:\hom_{A}(a,x)\) and \(g:\hom_{A}(x,y)\), we have \[g_{*}(\varphi(x,f))=\varphi(y,g\circ f) \tag{1}\] as a special case of (naturality-covariant-fiberwise-transformation).
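For readers more familiar with the 1-categorical setting, the following self-contained Lean 4 sketch (not the Rzk formalization, nor any of the library formalizations compared below) records the covariant, set-level shape of the two maps above; the names are illustrative and the structure laws (unit, associativity, functoriality, naturality) are deliberately omitted.

```lean
-- A sketch of the 1-categorical, covariant evid/yon maps in plain Lean 4.
universe u

structure SmallCat : Type (u + 1) where
  Obj  : Type u
  Hom  : Obj → Obj → Type u
  id   : (a : Obj) → Hom a a
  comp : {a b c : Obj} → Hom a b → Hom b c → Hom a c
  -- (unit and associativity laws omitted in this sketch)

structure Copresheaf (C : SmallCat) : Type (u + 1) where
  obj : C.Obj → Type u
  map : {a b : C.Obj} → C.Hom a b → obj a → obj b
  -- (functoriality laws omitted in this sketch)

-- Families of maps hom_C(a, -) → F, with naturality omitted in this sketch.
def RepTrans (C : SmallCat) (a : C.Obj) (F : Copresheaf C) : Type u :=
  (z : C.Obj) → C.Hom a z → F.obj z

-- Evaluation at the identity, the analogue of (evid).
def evid (C : SmallCat) (a : C.Obj) (F : Copresheaf C)
    (φ : RepTrans C a F) : F.obj a :=
  φ a (C.id a)

-- Transport of a point along an arrow, the analogue of (yon).
def yon (C : SmallCat) (a : C.Obj) (F : Copresheaf C)
    (u : F.obj a) : RepTrans C a F :=
  fun _z f => F.map f u
```

In the synthetic setting the naturality and functoriality data omitted here come for free, which is precisely the feature emphasized above.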
## 6. Comparison with formalizations of the 1-categorical Yoneda lemma The agda-unimath library contains a formalization of the Yoneda lemma in 1-category theory. Both proofs follow the same outline, proving that (evid) is an equivalence by constructing a two-sided inverse. A point of difference in the agda-unimath proof is that the data of the inverse involves both the function (yon) and a proof of its naturality. As with our proof in Rzk, one of the composites is directly identifiable with the identity, while the other requires a calculation together with two instances of function extensionality. Other differences arise from the varying ways that categorical data is encoded in Rzk vs agda-unimath. There, precategories are types with additional structure, while here pre-\(\infty\)-categories are types satisfying a property. There, representables are encoded as functors valued in the precategory of sets, while here representables are encoded as covariant type families. These differences have more of an effect on the syntax of the proof than on its structural content. 
At our request, Sina Hazratpour wrote a Lean formalization of the 1-categorical Yoneda lemma, first as a self-contained formalization in Lean 3,14 with the proof of the Yoneda lemma later updated to Lean 4.15 Formal proofs in Lean are quite different from formal proofs in Rzk or in Agda because of the use of automation tactics in the interactive theorem proving mode, allowing the user to rewrite along known identifications or "simplify" the goal using known lemmas. In addition, Lean's use of type classes and automatic instance inference simplifies the syntax in the statement of the Yoneda lemma, as compared with the agda-unimath proof. Footnote 14: [https://github.com/simp/CovariantYonedaLean3](https://github.com/simp/CovariantYonedaLean3) Footnote 15: [https://github.com/simp/CovariantYonedaLean4](https://github.com/simp/CovariantYonedaLean4) In the Lean 3 proof, the naturality of (yon) must again be checked explicitly via a proof that involves unfolding the definition of the representable functor and using the fact that functors preserve composition. The remainder of the proof proceeds as before. Interestingly, in the Lean 4 proof, Hazratpour proves a lemma -- (1) in the case where \(f\) is \(\operatorname{id}_{a}\) -- and then feeds it to the tactic aesop_cat,16 which then automatically verifies the naturality of (yon) and checks that the Yoneda maps are inverses. Footnote 16: Aesop (Automated Extensible Search for Obvious Proofs) is a proof search for Lean 4; see [https://github.com/JLimpery/aesop](https://github.com/JLimpery/aesop) Other formalizations of the 1-categorical Yoneda lemma appear in UniMath17 and mathlib.18 Footnote 17: [https://github.com/UniMath/UniMath/blob/7d7f6997dbe84b0d0107d4c963281c6efb97ff60/UniMath/](https://github.com/UniMath/UniMath/blob/7d7f6997dbe84b0d0107d4c963281c6efb97ff60/UniMath/) ## 7. Conclusions and future work The formalization process led us to discover a mistake in the paper (Riehl and Shulman, 2017): the published proof of the "only if" direction of Proposition 8.13 employed circular reasoning.19 Fortunately, the stated result remains true. Our new formalized proof (is-segal-is-covariant-representable) now appears in (Riehl and Shulman, 2017). 
Footnote 18: [https://leanprover-community.github.com/mathlib/docs/Mathlib/CategoryTheory/Yoneda.html](https://leanprover-community.github.com/mathlib/docs/Mathlib/CategoryTheory/Yoneda.html)
A natural companion result, the dependent Yoneda lemma, can be read as a directed _arrow induction_ principle, analogous to the well-known path induction principle for the identity types in standard Martin-Löf type theory. Efforts in the direction of formalizing Buchholtz-Weinberger's proof of the cocartesian Yoneda lemma from (Buchholtz and Weinberger, Section 7) in Rzk are under way, but will require formalizing if not all then at least some of the preliminary structural properties and operations for cocartesian families from (Buchholtz and Weinberger, Section 5). ### Limits and colimits In recent work, Bardomiano Martinez introduces limits and colimits of diagrams valued in Segal types and proves that right adjoints between Segal types preserve limits. We would like to formalize his results and explore further developments of the theory of limits and colimits. 
### Improvements to Rzk We note a few improvements for Rzk that would positively affect this and future formalization projects. First, supporting term inference and implicit arguments would help reduce the size of formalizations and, consequently, assist with readability. Second, the current implementation lacks incremental typechecking and proper module support, which makes the feedback on changes less immediate. Finally, while a minimal integration with an IDE exists,20 it still has to acquire proper language server support. We note also that Rzk's experimental diagram rendering feature21 (which is useful on small examples) could be extended further to assist with visualizations (or even interactive capabilities) for statements and constructions in simplicial type theory. Footnote 21: there is a VS Code extension for Rzk at [https://github.com/rzk-lang/vscode-rzk](https://github.com/rzk-lang/vscode-rzk) ### Extensions of simplicial type theory The simplicial type theory is not sufficiently powerful to prove all results of \(\infty\)-category theory contained for instance in (Rzx, 2016). A longer range goal would be to further extend this synthetic framework by including directed higher inductive types to freely generate \(\infty\)-categories, universes to classify covariant fibrations and cocartesian fibrations, and modalities for opposite \(\infty\)-categories and the \(\infty\)-groupoid core as outlined in (Buchholtz and Weinberger, 2016); see also (Buchholtz and Weinberger, 2016; Weinberger, 2016). If such theoretical developments were paired with experimental extensions to Rzk, that would greatly aid the process of exploring the expanded formal system. ## Acknowledgments The authors are very grateful to Benedikt Ahrens, who first suggested the project of creating a proof assistant for the simplicial type theory. Fredrik Bakke contributed formalizations concerning the 2-category of Segal types and made invaluable improvements to the professionalization of the repository, drafting a style guide, overseeing its implementation, and suggesting improvements to our GitHub workflow. Sina Hazratpour produced a formalized proof of the 1-categorical Yoneda lemma in Lean to provide a useful direct comparison. Abdelrahman Abounegm has contributed an Rzk plugin22 for MkDocs allowing for hyperlinks to the syntax-highlighted code used in this paper. Footnote 22: [https://github.com/rzk-lang/mkdocs-plugin-rzk](https://github.com/rzk-lang/mkdocs-plugin-rzk)
Formalized 1-category theory forms part of many libraries of mathematical proofs. However, more advanced results in fields ranging from algebraic topology to theoretical physics, where objects carry "higher structure," rely on infinite-dimensional categories in place of 1-dimensional categories, and $\infty$-category theory has so far not been amenable to computer formalization. Building on Riehl-Shulman's simplicial extension of homotopy type theory, we developed a new proof assistant, Rzk, which supports synthetic $\infty$-category theory and has enabled the first formalizations of results from $\infty$-category theory. These include the Yoneda lemma, often called the fundamental theorem of category theory, which states that an object of a category is determined by how it relates to the other objects.
2306.00199
Tip of the Quantum Entropy Cone
Relations among von Neumann entropies of different parts of an $N$-partite quantum system have direct impact on our understanding of diverse situations ranging from spin systems to quantum coding theory and black holes. Best formulated in terms of the set $\Sigma^*_N$ of possible vectors comprising the entropies of the whole and its parts, the famous strong subadditivity inequality constrains its closure $\overline\Sigma^*_N$, which is a convex cone. Further homogeneous constrained inequalities are also known. In this work we provide (non-homogeneous) inequalities that constrain $\Sigma_N^*$ near the apex (the vector of zero entropies) of $\overline\Sigma^*_N$, in particular showing that $\Sigma_N^*$ is not a cone for $N\geq 3$. Our inequalities apply to vectors with certain entropy constraints saturated and, in particular, they show that while it is always possible to up-scale an entropy vector to arbitrary integer multiples it is not always possible to down-scale it to arbitrarily small size, thus answering a question posed by A. Winter. Relations of our work to topological materials, entanglement theory, and quantum cryptography are discussed.
Matthias Christandl, Bergfinnur Durhuus, Lasse Harboe Wolff
2023-05-31T21:37:24
http://arxiv.org/abs/2306.00199v2
# The Apex of the Quantum Entropy Cone ###### Abstract Relations among von Neumann entropies of different parts of an \(N\)-partite quantum system have direct impact on our understanding of diverse situations ranging from spin systems to quantum coding theory and black holes. Best formulated in terms of the set \(\Sigma_{N}^{*}\) of possible vectors comprising the entropies of the whole and its parts, the famous strong subadditivity inequality constrains its closure \(\overline{\Sigma}_{N}^{*}\), which is a convex cone. Further homogeneous constrained inequalities are also known. In this work we provide (non-homogeneous) inequalities that constrain \(\Sigma_{N}^{*}\) near the apex (the vector of zero entropies) of \(\overline{\Sigma}_{N}^{*}\), in particular showing that \(\Sigma_{N}^{*}\) is not a cone for \(N\geq 3\). Our inequalities apply to vectors with certain entropy constraints saturated and, in particular, they show that while it is always possible to up-scale an entropy vector to arbitrary integer multiples it is not always possible to down-scale it to arbitrarily small size, thus answering a question posed by A. Winter. Relations of our work to topological materials, entanglement theory, and quantum cryptography are discussed. ## I Introduction Given a quantum system \(X\) in a state described by a density operator \(\rho\), i.e. a non-negative operator of trace \(1\) on a (finite dimensional) Hilbert space \(\mathcal{H}_{X}\), its von Neumann entropy is given by \[H_{\rho}=-\mathrm{Tr}[\rho\log(\rho)]=-\sum_{i}\lambda_{i}\log(\lambda_{i})\,, \tag{1}\] where \(\lambda_{i}\) are the eigenvalues of \(\rho\), and \(\log\) denotes the binary logarithm. It is well known that \(H_{\rho}=0\) if and only if \(\rho\) is a pure state, i.e. \(\rho=|V\rangle\langle V|\), where \(|V\rangle\) is a unit vector in \(\mathcal{H}_{X}\), and also that \(H\) is additive in the sense that \(H_{\rho\otimes\sigma}=H_{\rho}+H_{\sigma}\) for arbitrary states \(\rho\) and \(\sigma\) on two different systems. We shall be concerned with multipartite systems \(\mathcal{N}\) consisting of \(N\) constituent systems \(X_{1},...,X_{N}\) with associated Hilbert spaces \(\mathcal{H}_{X_{1}},...,\mathcal{H}_{X_{N}}\), such that the state of \(\mathcal{N}\) is given by a density operator \(\rho\) on \(\mathcal{H}_{X_{1}}\otimes...\otimes\mathcal{H}_{X_{N}}\). The reduced state of a subsystem \(\mathcal{X}\subseteq\mathcal{N}\) is then given by \[\rho_{\mathcal{X}}:=\mathrm{Tr}_{\mathcal{N}\setminus\mathcal{X}}[\rho]\,,\] where \(\mathrm{Tr}_{\mathcal{N}\setminus\mathcal{X}}[\cdot]\) denotes the partial trace over \(\otimes_{X_{i}\notin\mathcal{X}}\mathcal{H}_{X_{i}}\) (and in particular \(\rho=\rho_{\mathcal{N}}\)). The entropy of the reduced state \(H_{\rho_{\mathcal{X}}}\) will also be denoted by \(H(\mathcal{X})_{\rho}\) or by \(H(X_{i_{1}}\ldots X_{i_{k}})_{\rho}\), if \(\mathcal{X}=\{X_{i_{1}},\ldots,X_{i_{k}}\}\). These marginal entropies define a vector \(\vec{H}_{\rho}\in\mathbb{R}^{2^{N}-1}\), called the _entropy vector_ of \(\rho\), whose coordinates are labelled by the non-empty subsystems of \(\mathcal{N}\). 
E.g., for \(N=2\) and \(\mathcal{N}=\{A,B\}\) we have \(\vec{H}_{\rho}=(H(A),H(B),H(AB))_{\rho}\in\mathbb{R}^{3}\), while for \(N=3\) and \(\mathcal{N}=\{A,B,C\}\) we write \[\vec{H}_{\rho}=(H(A),H(B),H(C),H(BC),H(AC),H(AB),H(ABC))_{\rho}\in\mathbb{R}^{7}\,. \tag{2}\] It follows from additivity of the entropy for composite systems in product states that \[\vec{H}_{\rho\otimes\sigma}=\vec{H}_{\rho}+\vec{H}_{\sigma} \tag{3}\] if \(\rho\) and \(\sigma\) are states of the same system and the constituent systems of the product state are taken to be the products of the respective constituent systems. For later reference we also recall, see e.g. [1], that if \(\rho=|V\rangle\langle V|\) is a pure state, then the entropy of any subsystem equals that of its complement, \[H(\mathcal{X})_{\rho}=H(\mathcal{N}\setminus\mathcal{X})_{\rho}\,, \tag{4}\] thus imposing constraints among the components of \(\vec{H}_{\rho}\). Of obvious interest in physics as well as in (quantum) information science [1; 2] is a proper understanding of the structure of the set \(\Sigma_{N}^{*}\) of all possible entropy vectors associated to \(N\)-party systems, \[\Sigma_{N}^{*}=\{\vec{H}_{\rho}\mid\rho\text{ is a density operator of }\mathcal{N}\}\,.\] It is a fundamental result of Pippenger [3] that the topological closure \(\overline{\Sigma}_{N}^{*}\) of \(\Sigma_{N}^{*}\) in \(\mathbb{R}^{2^{N}-1}\) is a convex cone, called the _quantum entropy cone_ of \(N\)-party systems, i.e. \(\overline{\Sigma}_{N}^{*}\) is closed under addition and under multiplication by positive scalars. It is also known, and easy to demonstrate, that \(\Sigma_{N}^{*}\) has full dimension, i.e. it spans all of \(\mathbb{R}^{2^{N}-1}\) as a vector space, and that \(\overline{\Sigma}_{N}^{*}\) and \(\Sigma_{N}^{*}\) have identical interiors and hence also identical boundaries. For \(N=2\) it is even true that \(\overline{\Sigma}_{2}^{*}=\Sigma_{2}^{*}\), as will be commented on further below. But for general \(N\geq 3\) an appropriate characterisation of the boundary entropy vectors is missing. A related but different long-standing problem is to determine whether or not \(\overline{\Sigma}_{N}^{*}\) is a polyhedral cone, i.e. if it can be specified in terms of a finite number of linear inequalities. The known general inequalities of this sort are of two types: \[H(\mathcal{X})+H(\mathcal{Y})\geq H(\mathcal{X}\cap\mathcal{Y})+H(\mathcal{X}\cup\mathcal{Y}) \tag{5}\] \[H(\mathcal{X})+H(\mathcal{Y})\geq H(\mathcal{X}\setminus\mathcal{Y})+H(\mathcal{Y}\setminus\mathcal{X})\,, \tag{6}\] called _strong subadditivity_ and _weak monotonicity_, respectively. Here, \(\mathcal{X}\) and \(\mathcal{Y}\) are arbitrary subsystems, and by convention \(H(\emptyset)=0\) in case an empty set occurs on the right-hand side. Strong subadditivity was first established in [4], but a variety of proofs exist in the literature, see e.g. [1; 2; 4; 5; 6; 7]. To obtain weak monotonicity one makes use of the fact, referred to as _purification_ [1], that given a state \(\rho\) it is possible to extend \(\mathcal{N}\) by a system \(Y\) and to define a pure state \(\eta=\left|V\right\rangle\left\langle V\right|\) of \(\mathcal{N}\cup Y\) such that \(\rho=\eta_{\mathcal{N}}\), i.e. \(\rho\) is obtained by reducing \(\left|V\right\rangle\left\langle V\right|\) to \(\mathcal{N}\). Applying (5) to the extended system and making use of (4), the inequalities (6) follow, see [1] for more details. 
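As a concrete numerical illustration of these definitions (a minimal sketch added here, not part of the original text; it assumes NumPy, the helper function names are our own, and the three-qubit GHZ-type example state is chosen purely for illustration), the entropy vector of Eq. (2) can be assembled from partial traces, and relations (4) and (5) checked directly:

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr[rho log2 rho], Eq. (1), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                 # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def reduced_state(rho, dims, keep):
    """Reduced density matrix of the subsystems listed in `keep`
    (partial trace over the complement), for local dimensions `dims`."""
    n = len(dims)
    keep = sorted(keep)
    rho_t = rho.reshape(dims + dims)             # row axes 0..n-1, column axes n..2n-1
    row = list(range(n))
    col = [i + n if i in keep else i for i in range(n)]   # traced axes reuse row labels
    out = keep + [i + n for i in keep]
    reduced = np.einsum(rho_t, row + col, out)
    d = int(np.prod([dims[i] for i in keep]))
    return reduced.reshape(d, d)

# Three-qubit GHZ-type pure state (|000> + |111>)/sqrt(2) on N = {A, B, C}.
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]

labels = {"A": [0], "B": [1], "C": [2], "BC": [1, 2],
          "AC": [0, 2], "AB": [0, 1], "ABC": [0, 1, 2]}
H = {k: von_neumann_entropy(reduced_state(rho, dims, v)) for k, v in labels.items()}
print(H)   # for this state: H(A)=H(B)=H(C)=H(AB)=H(AC)=H(BC)=1 and H(ABC)=0

# Pure-state complementarity, Eq. (4): H(A) = H(BC), etc.
assert abs(H["A"] - H["BC"]) < 1e-9
# Strong subadditivity, Eq. (5), with X = {A,B} and Y = {B,C}:
assert H["AB"] + H["BC"] + 1e-9 >= H["B"] + H["ABC"]
```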
We emphasize that the inequalities (5) and (6) are not all independent as shown in [3], where a subset of basic independent inequalities is singled out for each \(N\). The polyhedral cone defined by (5) and (6) is a closed convex cone, and will here be denoted by \(\Sigma_{N}\). The question whether \(\Sigma_{N}=\overline{\Sigma}_{N}^{*}\), or if there exist further independent linear inequalities beyond (5) and (6), remains open for \(N\geq 4\). For \(N\leq 3\) the two closed cones coincide as shown in [3]. While it is quite easy to see that \(\Sigma_{N}=\Sigma_{N}^{*}=\overline{\Sigma}_{N}^{*}\) hold for \(N\leq 2\), the case \(N\geq 3\) is different. Here, it has been shown that for \(N\geq 4\) there exist further constrained homogeneous linear inequalities [8; 9; 10]. This motivates the work presented here, whose main contribution is to identify a part of the boundary close to its apex as being outside the set \(\Sigma_{N}^{*}\) of realisable entropy vectors (Theorem 1 below). As a consequence, we show that \(\Sigma_{N}^{*}\) is not a cone for \(N\geq 3\) and, in particular, it is not closed for \(N\geq 3\). With notation as above, the basic inequalities for \(N=3\) are \[I_{XY}:=H(X)+H(Y)-H(XY)\geq 0\] \[II_{XY}:=H(XZ)+H(YZ)-H(Z)-H(XYZ)\geq 0\] \[III_{XY}:=H(Z)+H(XYZ)-H(XY)\geq 0\] \[IV_{XY}:=H(XZ)+H(YZ)-H(X)-H(Y)\geq 0\,,\] where the relations hold for \(\{X,Y\}\) equaling \(\{A,B\},\{A,C\}\) or \(\{B,C\}\) and \(Z\neq X,Y\). This makes a total of twelve inequalities, three of each type. A key observation is that \[M:=I_{XY}-II_{XY}=III_{XY}-IV_{XY} \tag{7}\] is independent of the choice of \(\{X,Y\}\). It follows that \(\Sigma_{3}\) is a union of two cones \[\Sigma_{3}^{+}: II_{XY}\geq 0\,,\quad IV_{XY}\geq 0\,,\quad M\geq 0 \tag{8}\] \[\Sigma_{3}^{-}: I_{XY}\geq 0\,,\quad III_{XY}\geq 0\,,\quad M\leq 0\,, \tag{9}\] each of which has seven facets, corresponding to the seven defining inequalities. By a slight elaboration of Pippenger's approach [3] it can be shown that \(\Sigma_{3}^{+}\subset\Sigma_{3}^{*}\), while \(\Sigma_{3}^{-}\) behaves differently. For any \(\vec{H}\in\Sigma_{3}^{-}\) one finds that there exists a state \(\rho\) and a vector \(\vec{l}\) belonging to the 1-dimensional face (half-line) \(\ell\) of \(\Sigma_{3}^{-}\) defined by the six equations \[(\ell):\qquad I_{XY}=0\,,\qquad III_{XY}=0\,, \tag{10}\] such that \[\vec{H}=\vec{l}+\vec{H}_{\rho}\,. \tag{11}\] If it so happened that \(\ell\subset\Sigma_{3}^{*}\), it would follow by additivity of entropy vectors (3) that \(\Sigma_{3}^{-}\subset\Sigma_{3}^{*}\) and hence that \(\Sigma_{3}=\Sigma_{3}^{*}\). However, as a consequence of Theorem 1 below there is an open line segment of \(\ell\) ending at the apex which is not contained in \(\Sigma_{3}^{*}\), and so \(\Sigma_{3}\neq\Sigma_{3}^{*}\). On the other hand, Pippenger identifies a state \(\rho^{l}\) such that \(\vec{H}_{\rho^{l}}\in\ell\), which by the cone property implies that \(\ell\subset\overline{\Sigma}_{3}^{*}\). Using (11) one then obtains that \(\Sigma_{3}^{-}\subset\overline{\Sigma}_{3}^{*}\) and consequently \(\Sigma_{3}=\overline{\Sigma}_{3}^{*}\), which is the already mentioned main result of [3]. 
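The \(N=3\) quantities above are straightforward to evaluate for any given entropy vector. The following small sketch (not from the paper; it assumes the entropy vector is supplied as a Python dict keyed as in Eq. (2), the function names are our own, and the GHZ-type example values come from the snippet above) computes \(I_{XY}\), \(II_{XY}\), \(III_{XY}\), \(IV_{XY}\) and \(M\), and tests the defining inequalities of the subcones (8)-(9):

```python
def n3_quantities(H):
    """Evaluate the N=3 basic-inequality combinations and M of Eq. (7)
    for an entropy vector given as a dict with keys
    'A','B','C','BC','AC','AB','ABC' (ordering as in Eq. (2))."""
    vals = {"I": {}, "II": {}, "III": {}, "IV": {}}
    for X, Y in [("A", "B"), ("A", "C"), ("B", "C")]:
        Z = ({"A", "B", "C"} - {X, Y}).pop()
        XY, XZ, YZ = ("".join(sorted(p)) for p in (X + Y, X + Z, Y + Z))
        vals["I"][X + Y] = H[X] + H[Y] - H[XY]
        vals["II"][X + Y] = H[XZ] + H[YZ] - H[Z] - H["ABC"]
        vals["III"][X + Y] = H[Z] + H["ABC"] - H[XY]
        vals["IV"][X + Y] = H[XZ] + H[YZ] - H[X] - H[Y]
    # M = I - II = III - IV is independent of the chosen pair, Eq. (7).
    M = vals["I"]["AB"] - vals["II"]["AB"]
    return vals, M

def sigma3_membership(H, tol=1e-9):
    """Check the defining inequalities of the subcones (8) and (9)."""
    vals, M = n3_quantities(H)
    in_plus = (all(v >= -tol for v in vals["II"].values())
               and all(v >= -tol for v in vals["IV"].values()) and M >= -tol)
    in_minus = (all(v >= -tol for v in vals["I"].values())
                and all(v >= -tol for v in vals["III"].values()) and M <= tol)
    return {"M": M, "in_Sigma3_plus": in_plus, "in_Sigma3_minus": in_minus}

# Example: the GHZ-type entropy vector from above lies on the common face M = 0.
ghz = {"A": 1, "B": 1, "C": 1, "BC": 1, "AC": 1, "AB": 1, "ABC": 0}
print(sigma3_membership(ghz))
```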
In order to satisfy (10), the entropy vector \(\vec{H}_{\rho^{l}}\) must satisfy \[I(X:Y)_{\rho^{l}}=0\,,\,H(X)_{\rho^{l}}+H(XYZ)_{\rho^{l}}=H(YZ)_{\rho^{l}} \tag{12}\] for any pair \(\{X,Y\}\) in \(\mathcal{N}=\{A,B,C\}\) with \(Z\neq X,Y\), where the more standard notation \(I(X:Y)\) has been used instead of \(I_{XY}\) for the _quantum mutual information_. By purification we can instead consider a state \(\eta=\left|V\right\rangle\left\langle V\right|\) on a 4-party system \(\{A,B,C,D\}\) such that \(\eta_{\mathcal{N}}=\rho^{l}\) and, using (4), the equations (12) take on the more symmetric form \[I(X_{i}:X_{j})_{\eta}=0 \tag{13}\] for all pairs \(X_{i},X_{j}\) in \(\{A,B,C,D\}\). Indeed, the state \(\rho^{l}\) is obtained in [3] by first constructing such a pure state \(\eta\). The main result proven below establishes, for general values of \(N\geq 4\), that inside certain faces of \(\Sigma_{N}\), defined by requiring one constituent system, say \(X_{1}\), to have vanishing mutual information with all others, there is a strictly positive lower bound on the distance from the apex to any entropy vector corresponding to a pure state with \(H(X_{1})\neq 0\). For the case \(N=3\) this for example implies, via Corollary 1, that there is a positive lower bound on the distance from the apex to any realizable entropy vector on any given ray within the 4-dimensional face of \(\Sigma_{3}\) defined by \[III_{XY}=0\,,\,\,\,\text{and such that}\,\,H(ABC)\neq 0. \tag{14}\] See Figure 1 for a visualization. This answers, in particular, a question posed by A. Winter [11] concerning the possibility of down-scaling certain realizable entropy vectors. For the sake of completeness we exhibit in Appendix C, for arbitrary \(N\geq 4\), states which fulfill the stated conditions, and thus generalising the pure state \(\eta\) mentioned above. ## II Main results The goal of this section is to establish the following entropy bound. **Theorem 1**.: _Let \(\rho\) be a pure state of the \(N\)-partite system \(\mathcal{N}=\{X_{1},...,X_{N}\}\) such that \(H(X_{1})_{\rho}\neq 0\). Suppose further that_ \[I(X_{1}:X_{i})_{\rho}=0\quad\text{for all}\quad i=2,\dots,N\,.\] _Then the following bound holds:_ \[\sum_{i=1}^{N}H(X_{i})_{\rho}\ >\ 1\,. \tag{15}\] To establish Theorem 1, we first list three lemmas below which are the main ingredients in the subsequent proof. Their demonstrations are provided in Appendix A. We will use the following notation. Given a state \(\rho\) of \(\mathcal{N}\), we denote by \(\lambda_{1}^{i}\geq\lambda_{2}^{i},\dots\) the eigenvalues of \(\rho_{X_{i}}\) in decreasing order and by \(\ket{e_{1}^{i}},\ket{e_{2}^{i}},\dots\) a corresponding orthonormal eigenstate basis such that \[(\rho_{X_{i}})_{ab}:=\bra{e_{a}^{i}}\rho_{X_{i}}\ket{e_{b}^{i}}=\lambda_{a}^{ i}\delta_{ab}\,. \tag{16}\] Moreover, we define \[\epsilon_{i}:=1-||\rho_{X_{i}}||_{\infty}=1-\lambda_{1}^{i}\quad\text{and} \quad\varepsilon:=\sum_{i=1}^{N}\epsilon_{i}\,. \tag{17}\] Clearly, \(\sum_{x_{i}>1}\lambda_{x_{i}}^{i}=\epsilon_{i}\) and one easily verifies that \[H(X_{i})\geq\max\{h(\epsilon_{i}),\ -\log(1-\epsilon_{i})\}\geq 2\epsilon_{i}\,, \tag{18}\] where \(h\) denotes the _binary entropy function_, \[h(x)=-x\log x-(1-x)\log(1-x)\,.\] Assuming \(\rho\) to be pure, i.e. 
\(\rho=\ket{V}\bra{V}\) where \(\langle V|V\rangle=1\), we represent \(\ket{V}\) with respect to the basis for \(\mathcal{H}_{\mathcal{N}}\) consisting of tensor products of eigenstates \(\ket{e_{a}^{i}}\) for the single-party density matrices, that is \[\ket{V}=\sum_{x_{1},\dots,x_{N}}V_{x_{1}\dots x_{N}}\ket{e_{x_{1}}^{1}\dots e_{x_{N}}^{N}}, \tag{19}\] \[\text{where}\quad\sum_{x_{1},\dots,x_{N}}|V_{x_{1}\dots x_{N}}|^{2}\,=\,1\,. \tag{20}\] A sum over dummy-indices \(x_{i}\in\mathbb{N}\) will here always run up to \(\dim(\mathcal{H}_{X_{i}})\). The matrix elements of \(\rho_{\mathcal{N}}\) and the reduced states are quadratic expressions of the components of \(\ket{V}\); e.g., \[(\rho_{X_{1}})_{a\,b}=\sum_{x_{2},\dots,x_{N}}V_{ax_{2}\dots x_{N}}V_{bx_{2}\dots x_{N}}^{*} \tag{21}\] \[(\rho_{X_{1}X_{2}})_{a_{1}a_{2}\,b_{1}b_{2}}=\sum_{x_{3},\dots,x_{N}}V_{a_{1}a_{2}x_{3}\dots x_{N}}V_{b_{1}b_{2}x_{3}\dots x_{N}}^{*}\,. \tag{22}\] Extensive use will be made of the fact that \(I(X_{i}:X_{j})=0\) implies that \(\rho_{X_{i}X_{j}}\) is a product state, which in our notation and choice of basis means that \[(\rho_{X_{i}X_{j}})_{a_{1}a_{2}\,b_{1}b_{2}}=\lambda_{a_{1}}^{i}\lambda_{a_{2}}^{j}\delta_{a_{1}b_{1}}\delta_{a_{2}b_{2}}\,. \tag{23}\] The announced lemmas relate the \(\epsilon_{i}\)'s to the components of \(V\) as follows. **Lemma 1**.: _For any pure state \(\rho\) it holds that_ \[|V_{1\dots 1}|^{2}\geq 1-\varepsilon\,. \tag{24}\] **Lemma 2**.: _For any pure state \(\rho\) such that \(I(X_{1}:X_{j})=0\) for all \(j\neq 1\) we have_ \[\sum_{x_{1}>1}|V_{x_{1}1\dots 1}|^{2}\geq\epsilon_{1}(1+\epsilon_{1}-\varepsilon)\,. \tag{25}\] **Lemma 3**.: _For any pure state \(\rho\) it holds that_ \[(1-\epsilon_{1})\sum_{x_{1}>1}|V_{x_{1}1\dots 1}|^{2}\leq\epsilon_{1}(\varepsilon-\epsilon_{1})\,. \tag{26}\] We remark that Lemma 1 is used for the proof of Lemma 3, while only Lemma 2 and Lemma 3 are used in the proof of Theorem 1. Figure 1: The solid figure represents the set of permissible values for \((H(A),H(C),H(ABC))\) satisfying (14), given the basic inequalities and Corollary 1. We have further made the projection \(H(A)=H(B)\) to get a 3-dimensional surface. The dashed lines span a part of \(\Sigma_{3}\) ruled out by Corollary 1, and \(O\) denotes the apex of \(\Sigma_{3}\). The red line represents the ray \(\ell\). Figure 2: The conditions of Theorem 1 are here represented with each circle denoting a constituent system \(X_{i}\). The double lines indicate that the mutual information between the two systems is \(0\), and it is assumed that the total state is pure. Proof of Theorem 1. Combining Lemma 2 and Lemma 3 we get \[(1-\epsilon_{1})\epsilon_{1}(1+\epsilon_{1}-\varepsilon)\,\leq\,\epsilon_{1}(\varepsilon-\epsilon_{1})\,.\] Since \(\epsilon_{1}>0\) as a consequence of the assumption \(H(X_{1})\neq 0\), this is equivalent to \[1+(1+\varepsilon-\epsilon_{1})\epsilon_{1}\leq 2\varepsilon\,.\] Since the left-hand side of this inequality is larger than \(1\), it follows that \(\varepsilon>\frac{1}{2}\), which in turn implies (15) by use of (18) and the definition of \(\varepsilon\). This completes the proof of Theorem 1. In case the given state \(\rho\) is not pure, we can apply Theorem 1 to its purification and obtain by use of (4) the following useful consequence (see Appendix B for more details). 
**Corollary 1**.: _Let \(\vec{H}\) be a realizable entropy-vector for a system \(\mathcal{N}=\{X_{1},...,X_{N}\}\) which fulfills_ \[H(\mathcal{N})>0\;\;\text{and}\;\;H(X_{i})+H(\mathcal{N})=H(\mathcal{N} \backslash X_{i})\] _for all \(i\in\{1,...,N\}\). Then the following bound holds:_ \[H(\mathcal{N})+\sum_{i=1}^{N}H(X_{i})\;>\;1\,. \tag{27}\] Note that for \(N=3\) the conditions of the corollary coincide with (14). For an \(N\)-partite state this result excludes a range of vectors in \(\Sigma_{N}\) from \(\Sigma_{N}^{*}\) that satisfy \(N\) linear constraints and hence can be labeled by \(2^{N}-N-1\) parameters. In Appendix C we provide a \(4\)-parameter family of realizable entropy vectors on the boundary of \(\Sigma_{N}\) satisfying the conditions of the corollary. Since the bound (27) is non-homogeneous, and rules out adequately down-scaled versions of realisable entropy vectors (such as those presented in Appendix C), it follows that \(\Sigma_{N}^{*}\) is not a cone for \(N\geq 3\). On the other hand, the closure of \(\Sigma_{N}^{*}\)_is_ a cone [3], so it likewise follows that \(\Sigma_{N}^{*}\) is not closed for \(N\geq 3\). ## III Conclusion We have presented new general restrictions on the joint values that the von Neumann entropy of subsystems of a multipartite quantum system can assume. We conclude by highlighting the potential impact in macroscopic systems, quantum information theory, and quantum cryptography - and pointing to the importance of a further investigation of the shape of \(\Sigma_{N}^{*}\) beyond its closure. The entropy concept itself originally arose from thermodynamical considerations of macroscopic systems consisting of many particles, such as gases. Quantum correlations of such systems can be quantified in terms of the scaling of the _entanglement entropy_, that is the entropy of a subregion \(A\). It has been found for many systems that this entropy is roughly proportional to the size of the boundary \(\partial A\) and not to the volume, a statement known as the _area law_[12]. For topologically ordered systems it is expected that \[H(A)=\alpha\,|\partial A|-\gamma\] up to terms vanishing as the "area" \(|\partial A|\) gets large. Moreover, the constant additive term \(-\gamma\) is expected to be universal and is dubbed the _topological entanglement entropy_. Actually, \(-\gamma\) equals an alternating sum of entropies, called \(M\), encountered above in (7). As shown in [13; 14] the value of \(\gamma\) in a class of systems is always positive, and \(M\) is thus negative. This is precisely the regime in which we identified restrictions on entropy vectors and they may therefore have implications for the attainable values of the topological entropy. We point out, however, that the entropy vectors of the particular finite systems calculated in [13; 14] in terms of their total quantum dimension do not satisfy the conditions of our theorem. Also, as the constraints we obtained are not balanced [9], our results have no direct bearing on the usual situation when a large system size is considered. Many functions in quantum information theory are defined in terms of optimizations of von Neumann entropies [15]. An example from entanglement theory is the squashed entanglement [16] \[E_{sq}(\rho_{AB})=\inf\frac{1}{2}I(A:B|E)_{\rho}\,,\] where the minimization is over extensions \(\rho_{ABE}\) of \(\rho_{AB}\). 
The results of the present work directly constrain such optimizations and it remains to be explored whether they could lead to simplified computations in specific cases. Finally, let us consider a cryptographic situation, known as quantum secret sharing [17; 18; 19]: Alice (A) wishes to distribute information to \(N-1\) parties (\(N\geq 4\)) * purely, in the sense that the overall state of her and the constituent systems is pure * secretly, in the sense that every share is in product with hers * non-trivially, in the sense that \(H(A)>0\,.\) Note that these are precisely the conditions of Theorem 1 and thus it follows from our work that she cannot do so unless the average share carries a minimum entropy, equal to \(1/N\), putting a lower bound on the communication required. ###### Acknowledgements. We acknowledge financial support from the VILLUM FONDEN via the QMATH Centre of Excellence (Grant No.10059), the European Research Council (ERC Grant Agreement No. 81876) and the Novo Nordisk Foundation (grant NNF20OC0059939 'Quantum for Life').
Relations among the von Neumann entropies of different parts of an N-partite quantum system have a direct impact on our understanding of diverse situations ranging from spin systems to quantum coding theory and black holes. Expressed in terms of the set $\Sigma^*_N$ of possible vectors comprising the entropies of the whole and its parts, the famous strong subadditivity inequality constrains its closure $\overline\Sigma^*_N$, which is a convex cone. Further homogeneous constrained inequalities are also known. In this work we provide non-homogeneous inequalities that constrain $\Sigma^*_N$ near the vector of zero entropies, in particular showing that $\Sigma^*_N$ is not a cone for $N\geq 3$. These inequalities apply to vectors with certain entropy constraints saturated.
2309.17260
PlaceNav: Topological Navigation through Place Recognition
Recent results suggest that splitting topological navigation into robot-independent and robot-specific components improves navigation performance by enabling the robot-independent part to be trained with data collected by robots of different types. However, the navigation methods' performance is still limited by the scarcity of suitable training data and they suffer from poor computational scaling. In this work, we present PlaceNav, subdividing the robot-independent part into navigation-specific and generic computer vision components. We utilize visual place recognition for the subgoal selection of the topological navigation pipeline. This makes subgoal selection more efficient and enables leveraging large-scale datasets from non-robotics sources, increasing training data availability. Bayesian filtering, enabled by place recognition, further improves navigation performance by increasing the temporal consistency of subgoals. Our experimental results verify the design and the new method obtains a 76% higher success rate in indoor and 23% higher in outdoor navigation tasks with higher computational efficiency.
Lauri Suomela, Jussi Kalliola, Harry Edelman, Joni-Kristian Kämäräinen
2023-09-29T14:12:54
http://arxiv.org/abs/2309.17260v4
# PlaceNav: Topological Navigation through Place Recognition ###### Abstract Recent results suggest that splitting topological navigation into robot-independent and robot-specific components improves navigation performance by enabling the robot-independent part to be trained with data collected by different robot types. However, the navigation methods are still limited by the scarcity of suitable training data and suffer from poor computational scaling. In this work, we present PlaceNav, subdividing the robot-independent part into navigation-specific and generic computer vision components. We utilize visual place recognition for the subgoal selection of the topological navigation pipeline. This makes subgoal selection more efficient and enables leveraging large-scale datasets from non-robotics sources, increasing training data availability. Bayes filtering, enabled by place recognition, further improves navigation performance by increasing the temporal consistency of subgoals. Our experimental results verify the design and the new model obtains a 76 % higher success rate in indoor and 23 % higher in outdoor navigation tasks with higher computational efficiency. ## I Introduction Autonomous visual navigation is a well-studied problem in the field of robotics [1, 2]. One line of research frames navigation in known environments as _topological navigation_[3, 4, 5], meaning purely vision-based navigation between nodes of a topological map that are represented by images. The advantage of this approach is that it does not require building a geometric reconstruction of the operating environment [6] or training environment-specific control policies [7]. Topological navigation systems have two parts: _subgoal selection_ and a _goal-reaching policy_. First, the subgoal selection module chooses the map node to reach next as a subgoal. Then, the goal-reaching policy produces control commands to take the robot to the selected subgoal. A popular approach to subgoal selection is _temporal distance prediction_, which means predicting the number of time steps between the robot's current camera observation and the subgoal candidates [8, 9, 10, 11, 12, 13]. These learning-based models are trained with offline datasets of robot trajectories. Previous works have demonstrated impressive real-world navigation [10, 12, 13], but the temporal distance prediction approach has two significant shortcomings. First, it utilizes neural architectures that iteratively take the current observation and each subgoal candidate as input, resulting in computational complexity scaling at \(\mathcal{O}(n)\) with the number of candidate images. This requires heuristics to limit the candidates and constrains the methods available for ensuring subgoal temporal consistency. Second, the fact that temporal distance prediction requires training data that originates from robots, actual or simulated, introduces an unnecessary data bottleneck. High-quality robotics datasets are very scarce compared to general web-scale data, and models trained with simulated data suffer from generalization issues [14]. We claim that subgoal selection is not a unique problem but an instance of the broader concept of image retrieval. To address this, we present _PlaceNav_, which frames the selection as a _place recognition_[15] task. This design provides three advantages. First, the large-scale and high-diversity datasets available for training place recognition models enhance subgoal selection robustness against changes in viewpoint and appearance. 
Second, subgoal selection is performed by a fast nearest-neighbor search over image embeddings. This, as shown in Fig. 1, provides superior scalability and removes the need for heuristics. Finally, place recognition readily integrates with methods for ensuring temporal consistency. In summary, our contributions are * Navigation approach that decouples training of subgoal selection models from robotics datasets by treating the selection as a generic place recognition task. * Integration of learning-based subgoal selection with a Bayesian filter that improves temporal consistency. * Demonstrations with real robots to experimentally validate our design. Code and videos are available on the project page\({}^{\dagger}\). Fig. 1: Visual place recognition finds which map image \(I_{s}\) was captured closest to the robot’s observation \(I_{t}\) by efficient matching of image embeddings \(\mathbf{z}\). ## II Related work **Vision-based topological navigation.** In a recent trend, topological maps have been used to divide long-horizon tasks into short-horizon segments suitable for learned goal-reaching policies [8, 16]. An essential part of such a hierarchical structure is choosing which subgoal to reach next. One approach is to learn to predict the reachability between two images from simulated rollouts [17]. Savinov _et al_. [8] proposed to use the _temporal distance_, or the number of time steps \(\Delta t\), between the current observation and a subgoal as a proxy for reachability. Its key strength is that it can be learned from offline data. While alternative approaches exist [18], temporal distance is popular and has been adopted in several recent works [9, 10, 11, 12, 13]. However, the diversity and size of datasets suitable for temporal distance learning are modest. RECON [19] with 25 h, SACSoSN [20] with 75 h, and TartanDrive [21] with 5 h of navigation trajectories from single location each are notable examples. Furthermore, because of the model architectures utilized, the computational complexity of temporal distance prediction scales at O(n) with the number of subgoal candidates considered. **Place recognition.** Place recognition involves recognizing places from images, often framed as image retrieval across images captured from different viewpoints and varying appearances [15]. It naturally integrates with topological navigation, as subgoal selection can be viewed as an image retrieval problem. Traditional methods for place recognition rely on aggregating handcrafted local features [22, 23, 24], but newer methods utilize deep learning to extract embeddings that can be compared efficiently using nearest-neighbor search [25, 26, 27]. The methods are trained to produce embeddings that are similar for images from the same place and dissimilar for images from different places, typically by classification loss [27] or ranking losses such as contrastive [28], triplet [25] or listwise [26] loss. **Temporal consistency.** While subgoal temporal consistency has been studied in non-learned topological navigation literature [4], it has received limited attention in the context of learning-based methods. A robot moves along a route continually, so the transitions between the subgoals should be smooth. As a heuristic solution, SPTM [8] and GNM [12] adopted the approach of only considering subgoals within a sliding window centered on the previous subgoal. Meng _et al_. [17] utilize a similar approach but resort to global search when the window deviates from the robot's actual location. 
In place recognition literature, the topic has received more attention. Early approaches [29, 30, 31, 32] utilized feature similarity matrices to find the best-matching image sequences. A newer line of work [33, 34, 35] considers descriptors that represent sequences instead of individual images. As an alternative, Xu _et al_. [36, 37] added a Bayesian filter to the matching process. In this work, we show that learning-based topological navigation also benefits from such methods. ## III System overview In this section, we describe _PlaceNav_, our proposed navigation pipeline. First, we discuss the basic components and definitions of topological navigation. Then, we elaborate on our contributions related to subgoal selection via place recognition and subgoal temporal consistency. ### _Topological navigation fundamentals_ Autonomous navigation using topological maps generally consists of two stages. Before navigation, an operator has to perform a manual'reference run' to capture the desired route. The robot saves images along the route that compose the topological map \(\mathcal{M}\) for navigation. During navigation, the robot-agnostic topological navigation algorithm is combined with a robot-specific controller. At each inference step \(t\), the current robot observation \(I_{t}\) is compared to the different subgoal candidate images \(I_{s}\in\mathcal{M}\) at nodes \(s=[0,1,\ldots,S]\) of the topological map. One of the nodes is selected as the next subgoal, and an image-based goal-reaching policy produces the motion plan to reach it. In this work, we experiment with the different subgoal selection methods and adopt the waypoint estimation approach proposed by Shah _et al_. [12] as the goal-reaching policy. This approach defines the motion plan as a sequence of \(\tau\) waypoints \(\{p_{i},\psi_{i}\}_{i},\ i=[0,1,\ldots,\tau]\) that guide the robot to the subgoal. The waypoints, defined as metric coordinates \(p_{i}\) and heading angle \(\psi_{i}\) in the robot's local coordinate frame, are tracked by a robot-specific controller. ### _Subgoal selection via place recognition_ PlaceNav introduces the following modifications to the subgoal selection procedure. Instead of computing temporal distances \(\Delta t\) between each observation and subgoal candidate pair, we use a place recognition model \(f_{enc}\) to process the observation and map images separately. The model produces image embeddings \(\mathbf{z}_{t}\) and \(\mathbf{z}_{s}\) that can be compared by Euclidean distance, enabling efficient subgoal search. Figure. 1 visualizes the concept. **Training data availability.** The temporal distance prediction models are trained to predict the \(\Delta t\) between two image frames sampled from a trajectory driven by a robot. This limits the amount and diversity of potential training data. Place recognition methods can be trained with data from more generic sources. Images of different places, preferably captured at various points in time, from different viewpoints, and under different environmental conditions. The images' rough position and orientation information provide annotations. Google StreetView images, for example, are well-suited as training data. The sizes of place recognition datasets are in the order of millions of images [38, 39, 40], the SF-XL [27] alone consisting of 41M images. **Computational complexity.** The computational complexity of temporal distance prediction scales linearly with the number of subgoal candidates considered at each inference step. 
Because of this, the number of subgoal candidates must be limited, which is commonly implemented as a _sliding window_ over the map nodes. The window is centered on the subgoal from the previous step, and only the nodes within the window are considered potential subgoals. Place recognition enables computation and storage of the descriptors for the map images offline before robot operation. Thus, the inference computational complexity of subgoal selection is decoupled from the number of subgoal candidates being considered. The descriptors can be matched in milliseconds by nearest neighbor search [41]. Consequently, heuristics to limit the number of subgoal candidates are not needed from the perspective of computational budget. ### _Temporal consistency_ Limiting the number of subgoal candidates also enhances temporal coherence between inference steps, preventing erratic subgoal selection _e.g_. due to visually similar content. A sliding window over the map achieves this to some extent. However, the window may drift away from the robot's location, making correct subgoal selection impossible. Bayesian filtering, enabled by efficient matching of image embeddings, is an alternative strategy for enforcing temporal consistency. Xu _et al_. [36] propose one such approach for use with place recognition methods, which we adapt for our problem. The idea, illustrated in Fig. 2, is to formulate place recognition as the measurement step of a discrete Bayesian state estimator. This filter maintains a belief of the robot's location over the map nodes by recursively updating its posterior distribution. We present the key equations here for completeness but refer the reader to the original paper for details. Given an initial posterior belief distribution, a motion model propagates the belief into the future in a prediction step. If the robot's local movement (_i.e_. odometry) is not being tracked, as is the case with our system, the motion model is simply \[p(s_{i}|s_{j})\propto\begin{cases}1&w_{l}\leq i-j\leq w_{u}\\ 0&\text{otherwise}\end{cases}\enspace. \tag{1}\] From each node, the robot has an equal probability of moving up to \(w_{u}\) steps toward the goal, staying put, or moving \(w_{l}\) steps back. Other transitions have zero probability. The prediction step is followed by a measurement step, where a place recognition query between the observation and the map nodes produces a measurement belief by the measurement function \[p(\mathbf{z}_{t}|s,\mathcal{M})\propto g(\mathbf{z}_{t},s,\mathcal{M})=\exp(- \lambda_{1}\|\mathbf{z}_{t}-\mathbf{z}_{s}\|_{2})\enspace, \tag{2}\] where \(\mathbf{z}_{t}\) is the observation embedding, \(\mathbf{z}_{s}\) is a map node embedding at the state \(s\) being considered, and \(\mathcal{M}\) is the map. \(\lambda_{1}\) scales the effect of each measurement on the posterior. Its value is automatically determined at the beginning of each navigation session as proposed by the authors [36]. The measurement belief is multiplied by the belief from the prediction step to acquire the posterior belief distribution \(p(s)\). The map node with the highest posterior probability is considered the closest to the latest observation. This filter significantly improves the stability of subgoal selection. Unlike the sliding window approach, the filter maintains full posterior belief over all map nodes, so it cannot get lost. It can solve the 'kidnapped robot' problem [42], whereas the sliding window requires the start node of the robot to be specified manually. 
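A minimal sketch of this filter is given below (our own illustrative implementation, not the authors' code; the class and parameter names such as `DiscreteBayesFilter` and `lambda1` are hypothetical, the per-node normalization of the motion model is a simplification, and in the real system \(\lambda_{1}\) is calibrated automatically at the start of each navigation session):

```python
import numpy as np

class DiscreteBayesFilter:
    """Discrete Bayes filter over topological map nodes, following
    Eqs. (1)-(2). Map image embeddings z_s are computed offline."""

    def __init__(self, map_embeddings, w_l=-1, w_u=2, lambda1=1.0):
        self.map_z = np.asarray(map_embeddings, dtype=float)   # (S, D)
        self.num_nodes = self.map_z.shape[0]
        self.w_l, self.w_u = w_l, w_u
        self.lambda1 = lambda1          # measurement scaling of Eq. (2)
        self.belief = None              # posterior p(s) over map nodes

    def _predict(self):
        """Propagate the belief with the uniform motion model of Eq. (1)."""
        pred = np.zeros(self.num_nodes)
        for j, p in enumerate(self.belief):
            lo = max(0, j + self.w_l)
            hi = min(self.num_nodes - 1, j + self.w_u)
            pred[lo:hi + 1] += p / (hi - lo + 1)
        return pred

    def _measure(self, z_t):
        """Measurement belief of Eq. (2) from embedding distances."""
        dists = np.linalg.norm(self.map_z - z_t, axis=1)
        lik = np.exp(-self.lambda1 * dists)
        return lik / lik.sum()

    def update(self, z_t):
        """One filter step; returns the index of the best-matching node."""
        meas = self._measure(np.asarray(z_t, dtype=float))
        if self.belief is None:
            self.belief = meas          # 'kidnapped robot' initialization
        else:
            post = self._predict() * meas
            self.belief = post / post.sum()
        return int(np.argmax(self.belief))

# Usage: the node after the best match is then taken as the subgoal.
# filt = DiscreteBayesFilter(map_embeddings)
# subgoal_idx = min(filt.update(z_t) + 1, len(map_embeddings) - 1)
```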
### _Implementation details_ **Architecture & Training.** For the place recognition part of the PlaceNav, we use a CosPlace network [27] because of its high performance and simple training process. The model architecture comprises a convolutional encoder and a generalized mean pooling (GeM) layer. The model is trained via classification loss, enabled by dividing the training data into distinct spatial groups. As the original CosPlace model was trained with high-resolution images (\(512\times 512\)), the training checkpoints provided by the authors do not work well with the \(85\times 64\) images we need to use for comparison with the baseline GNM model. For our experiments, we train a CosPlace model from scratch using an EfficientNet-B0 [43] backbone and the 41.2M images of the San Francisco eXtra Large (SF-XL) dataset [27], resized to \(85\times 85\) during training. The model was configured to extract 512-dimensional descriptors. Otherwise, we followed the training procedure outlined in [27]. We will refer to this low-resolution model as _CosPlace-LR_. The Shah _et al_. [12] waypoint estimation was chosen as the goal-reaching policy. We do not retrain the model and use the pre-trained weights provided by the authors. **Deployment.** During inference, the robot uses place recognition to identify the map node that best matches the current observation. The next node along the route after the best-matching node is selected as the subgoal. We implemented two distinct temporal consistency methods in the subgoal selection. The first is the sliding window used in prior works [8, 12, 17], and the second is our implementation of the discrete Bayesian filter proposed by Xu _et al_. [36]. At the beginning of a route, the sliding window is initialized to the first node. The initial belief distribution of the discrete filter is acquired from the first place recognition query, meaning that it operates in a 'kidnapped robot' mode. We set the discrete filter motion model transition boundaries to \(w_{u}=2\) and \(w_{l}=-1\) based on calibration experiments. Fig. 2: The discrete Bayesian filter alternates place recognition measurement and motion model prediction. \(p(\mathbf{s})\), the posterior belief, determines the best matching node. After choosing the subgoal to reach next, the goal-reaching policy predicts a series of 5 waypoints \(\{p_{i},\psi_{i}\}\). The robot follows these waypoints in an open-loop fashion until the next prediction using the robot's low-level controller. The prediction loop of subgoal selection and waypoint prediction runs at 5 Hz. The place recognition and waypoint estimation models receive \(85\times 64\) resolution images as input. ## IV Experimental setup We performed navigation experiments with real robots in diverse environments to enable an informed comparison of PlaceNav and the prior work. We conducted 360 repetitions of different routes with different robots, subgoal selection methods, and temporal consistency approaches, adding up to a total of 19 km of navigation testing. We also evaluated the subgoal selection methods offline using a place recognition benchmark. With our experiments, we aim to answer the following research questions: * **Q1.** Does subgoal selection require models trained with data originating from robots, or could more efficient place recognition models replace them? * **Q2.** How robust are temporal distance prediction models to viewpoint and appearance changes? * **Q3.** How does subgoal temporal consistency affect navigation performance? 
### _Robot navigation experiments_ **The robots.** We experimented with two different robots, a Turtlebot2, and a Robotnik Summit XL Steel, shown in Fig. 3. Turtlebot is a small research platform intended for indoor use. Summit is a 4-wheeled skid-steer ground vehicle with a weight of 90 kg, load capacity of 130 kg, and a maximum velocity of 3 m/s. Both robots carry a front-facing 175\({}^{\circ}\) field-of-view fish-eye camera, which is used for navigation, and a laptop computer that runs the navigation algorithms. The laptop has a Nvidia Geforce GTX 1070 GPU and an Intel i7-7700 CPU. The deep learning components of the navigation stack run on the GPU. **Baseline.** We use the temporal distance prediction from GNM [12] by Shah _et al_. as the baseline. We utilize the GNM for waypoint prediction in all experiments and only experiment with the subgoal selection. The model takes the observation and subgoal images concatenated along the channel dimension as inputs. It was trained with \(85\times 64\) images from 54 hours of navigation trajectories. The inference loop runs at 5 Hz. At each step, the node with the smallest temporal distance \(\Delta t\) above 3 is selected as the subgoal. As the discrete filter requires the distance between the current observation and all the map nodes at each step, using it with the GNM is not computationally feasible. Thus, with GNM, we only utilize the sliding window. **Indoor navigation.** The indoor experiments were conducted with the Turtlebot robot. We tested the methods along 20 different routes, performing 3 repetitions of each route with each method. Fig. 4 shows examples along the routes. Routes where all the methods fail were excluded from the quantitative analysis. The experiments took place at a university campus, with buildings from various eras with diverse appearances. The lengths of the test routes were uniformly distributed in the range from 15 m to 35 m, and they contained various amounts of difficult content, such as turning maneuvers and passing narrow gaps. We chose routes that do not contain content that causes navigation failures because of errors in waypoint estimation. Such content, _e.g_. very narrow gaps, and turning maneuvers in open areas with few salient features can cause the robot to veer off course even though the subgoal selection algorithm is not failing. **Outdoor navigation.** The outdoor experiments were conducted with the Summit XL robot. We experimented on 20 test routes in an urban environment, ranging from 50 m to 150 m in length. Like the indoor experiments, each test route was repeated 3 times with each method. In indoor calibration tests before the actual experiments, correct subgoal selection led to accurate waypoint estimation. In outdoor tests, we observed more variability. This is likely due to increased environmental noise and larger appearance changes between the map images and robot operation. _e.g_. changes in ambient illumination led to complete waypoint estimation failure despite a good subgoal choice. Therefore, in the actual experiments, we maintained test run illuminance within a 10 000 lx range of the reference run. A Konica Minolta CL-70F CRI illuminance meter was used for this purpose. **Evaluation criteria.** We follow the evaluation guidelines for image-goal navigation in [44] and measure the navigation performance by the _success rate_ (SR). Success rate describes the ratio of successful and failed repetitions of a test route. Fig. 4: Examples along the test routes. 
The top row is from outdoor tests, bottom row is from indoors. Fig. 3: The robots: A Turtlebot2 (left) and a Robotnik Summit XL Steel (right) Repetition is considered successful when the robot reaches a pose where its camera view corresponds to the final node of the topological map. The subgoal selection must also localize to this node, triggering a navigation stop signal. We do not use distance-based success criteria, which are more common but less aligned with the robot's goal determination. While last-mile geometric navigation could enhance goal-reaching precision, as suggested by Wasserman _et al_. [45], it is unnecessary for the scope of this work. Repetition is considered a failure when the robot takes actions that prevent it from reaching the goal, _i.e_. sending the'reached goal' signal without the goal view in sight, risking immediate collisions, or deviating significantly from the test route without recovery prospects. ### _Offline evaluation_ To enable a reproducible comparison of temporal distance prediction and place recognition, we tested GNM on standard place recognition benchmarks. The VPR-Benchmark by Bertoli _et al_. [46] was used to facilitate the experiments. We modified the benchmark to enable the evaluation of GNM that simultaneously takes the query and reference images as inputs. A subset of the VPR-Benchmark datasets where the size of the test database is small enough that GNM can process it in a reasonable time was picked for evaluation. We compared the performance of our low-resolution CosPlace-LR model and the GNM model. The test images were resized to \(85\times 64\). For reference, we additionally evaluate the standard CosPlace model with full-resolution images. Place recognition performance is assessed using the Recall@N score, which measures how often one of the top N images retrieved is within 25 m of the query image location. In the case of temporal distance prediction, the top N images are those with the smallest predicted temporal distances. ## V Results ### _Robot navigation experiments_ Tables I and II display the results of the navigation experiments with the Turtlebot and Summit XL. **Indoors.** The GNM baseline has a notably lower success rate than the proposed place recognition based approaches. PlaceNav shows a 56% SR increase, which rises to 77% with the Bayesian filter. This observation is interesting given that GNM training includes indoor images from the GS4 [47] dataset, while CosPlace models are trained on outdoor Google Streetview images. Place recognition models exhibit broad generalization capabilities, learning features that generalize across domains. Similar to Shah _et al_. [12], we split the test routes into 'Easy' and 'Hard' categories in posterior analysis. The categories are based on the number of narrow turns and tight passages along the routes. We chose a threshold that splits the routes into evenly sized groups. For indoor routes, this threshold was set to 4. 'Easy' routes had fewer than 4 such features, while 'Hard' routes had 4 or more. The categorization provides further insight into the method performances. While the differences in SR are minimal for 'Easy' routes, the advantage of place recognition is evident in the 'Hard' category. GNM's SR decreases by 50% from 'Easy' to 'Hard,' but PlaceNav maintains the same SR, even improving when the Bayesian filter is employed. Introducing the Bayesian filter yields clear advantages over the sliding window method. 
Visual inspection confirms improved stability and performance during turns, especially in narrow gaps. The filter mitigates errors such as mid-turn direction changes that can arise because of the erratic behavior of the sliding window approach. **Outdoors.** Outdoors, the success rate gap between methods is narrower compared to indoors, despite CosPlace-LR's outdoor training data. One possible explanation is the higher variation in waypoint estimation performance observed in the calibration tests. Consequently, the waypoint estimation module contributes more significantly to the final SR's, diminishing the effect of subgoal selection performance. The magnitude of the performance increase brought by the Bayesian filter is consistent with the indoor experiments, the improvement being around 10 percentage points in both cases. The differences in SR's between the 'Easy' and 'Hard' categories are similar to the indoor experiments. GNM's SR drops by half from 'Easy' to 'Hard,' whereas PlaceNav exhibits minimal to no performance decrease, emphasizing place recognition's effectiveness in maneuvers where having a correct subgoal is crucial. In the analysis of outdoor experiments, an even distribution of routes between the 'Easy' and 'Hard' categories was achieved by a threshold of 3 turns or narrow passages per route. The occurrence of visually bursty content [49], characterized by repetitive geometric patterns that heavily influence image embeddings, poses a challenge for place recognition methods [15, p.12] on certain test routes (see Fig. 5). This issue, causing the relatively low SR of PlaceNav with the sliding window in the 'Easy' category can lead \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & Temporal & Easy & Hard & Total \\ & & filter & \(n=24\) & 36 & 60 \\ \hline GNM [12] & \multirow{2}{*}{\(T\)} & Window & \(\mathbf{0.67}\) & 0.33 & 0.47 \\ \hline \multirow{2}{*}{PlaceNav} & \multirow{2}{*}{P} & Window & 0.46 & 0.44 & 0.45 \\ & & Bayesian & \(\mathbf{0.67}\) & \(\mathbf{0.53}\) & \(\mathbf{0.58}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: **Outdoors experiment** success rates over 60 repetitions, driven with the Summit XL. \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Type} & Temporal & Easy & Hard & Total \\ & & filter & \(n=33\) & \(27\) & 60 \\ \hline GNM [12] & \multirow{2}{*}{\(T\)} & Window & 0.52 & 0.26 & 0.39 \\ \hline \multirow{2}{*}{PlaceNav} & \multirow{2}{*}{\(P\)} & Window & 0.62 & 0.60 & 0.61 \\ & & Bayesian & \(\mathbf{0.65}\) & \(\mathbf{0.77}\) & \(\mathbf{0.69}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: **Indoors experiment** success rates over 60 repetitions, driven with the Turtlebot. to the sliding window method becoming trapped within the bursty map region, resulting in navigation failure. In contrast, the Bayesian filter handles bursty content more effectively by maintaining a full posterior belief across all map nodes and avoiding such entrapment. If the robot traverses the bursty segment successfully, the filter accurately localizes and completes the test route without issues. ### _Offline evaluation_ Here, we present the results of evaluating GNM and CosPlace on the VPR-Benchmark. We also discuss the impact of input resolution and domain shift on recall rates. **Performance Comparison.** Table III shows a comparison of the retrieval performances of the GNM and CosPlace models across several benchmark datasets. 
Notably, temporal distance prediction performs worse than place recognition across all datasets. GNM's recall follows a similar trend as CosPlace-LR, with GNM achieving higher recall wherever CosPlace-LR excels. However, GNM's recall values are consistently an order of magnitude lower. For instance, on Tokyo24/7 and St. Lucia, where CosPlace-LR attains over 70% recall, GNM only reaches approximately 9%. On other datasets, GNM's performance is significantly lower. **Impact of Input Resolution.** Decreasing image resolution has a substantial impact on CosPlace's performance. Reducing the resolution to \(85\times 64\) pixels decreases recall rates up to 58 percentage points on datasets like Tokyo24/7, MSLS, and SVOX. Interestingly, GNM's temporal distance prediction performs best on datasets where the performance differences between full and low-resolution CosPlace models are minimal, namely Pitts30k and St. Lucia. This suggests that GNM performance, too, would be improved by training the model with higher-resolution images. **Viewpoint and Appearance Change.** Pittsburgh30k and Tokyo24/7 datasets capture images from various angles, while the rest feature images from front-facing vehicle cameras. Despite the similarity in viewpoint variation between GNM's training data and front-facing datasets, this is not reflected in recall rates. GNM performs well on Pittsburgh30k but poorly on SVOX. This discrepancy may stem from other factors contributing to domain shift between query and reference images. Besides viewpoint changes, Pittsburgh30k and St. Lucia exhibit limited variation. The other datasets contain shifts in illumination, weather, and camera which GNM struggles to handle, not having been explicitly trained for such invariance. ### _Runtime analysis_ Table IV shows average runtimes for PlaceNav and the baseline. Replacing temporal distance prediction with place recognition significantly reduces runtime, even without optimization. The runtime of PlaceNav is not affected by the window size. Introducing the discrete Bayesian filter aligns the PlaceNav's runtime with the original temporal distance baseline, allowing resource-performance trade-offs based on computational capacity and navigation needs. ## VI Conclusion In conclusion, our findings show that place recognition enables more accurate subgoal selection than the temporal distance prediction methods at a lower computational cost. The offline evaluation suggests that the presence of appearance change between the reference run and robot operation would further amplify the difference. Our future work will focus on appearance invariant subgoal selection models and goal-reaching policies. ## Acknowledgment The authors thank Jani Kiypyla, Olli Suominen, and Jussi Rantala from CIVIT for access to the Summit XL. We would also like to thank Maria Andrea Cruz Blandon and German F. Torres for their valuable comments on an earlier version of the manuscript.
Recent results have shown that navigation performance improves when topological navigation is split into a robot-independent and a robot-specific part, with the robot-independent part trained on data collected from different types of robots. However, the performance of these navigation methods is limited by the scarcity of suitable training data and by high computational cost. In this work, we propose PlaceNav, which subdivides the robot-independent functionality into navigation-specific and generic computer-vision components. The method uses place recognition for subgoal selection in the topological navigation pipeline, making subgoal selection more efficient and increasing training data availability by enabling the use of large-scale datasets from non-robotics sources. Bayesian filtering, enabled by place recognition, further improves navigation performance by increasing the temporal consistency of subgoals. The experimental results support this design, showing that place recognition enables more accurate subgoal selection than temporal distance prediction at a lower computational cost.
2309.11611
Hate speech detection in algerian dialect using deep learning
With the proliferation of hate speech on social networks under different formats, such as abusive language, cyberbullying, and violence, etc., people have experienced a significant increase in violence, putting them in uncomfortable situations and threats. Plenty of efforts have been dedicated in the last few years to overcome this phenomenon to detect hate speech in different structured languages like English, French, Arabic, and others. However, a reduced number of works deal with Arabic dialects like Tunisian, Egyptian, and Gulf, mainly the Algerian ones. To fill in the gap, we propose in this work a complete approach for detecting hate speech on online Algerian messages. Many deep learning architectures have been evaluated on the corpus we created from some Algerian social networks (Facebook, YouTube, and Twitter). This corpus contains more than 13.5K documents in Algerian dialect written in Arabic, labeled as hateful or non-hateful. Promising results are obtained, which show the efficiency of our approach.
Dihia Lanasri, Juan Olano, Sifal Klioui, Sin Liang Lee, Lamia Sekkai
2023-09-20T19:54:48
http://arxiv.org/abs/2309.11611v1
# Hate speech detection in Algerian dialect using deep learning ###### Abstract With the proliferation of hate speech on social networks under different formats, such as abusive language, cyberbullying, and violence, etc., people have experienced a significant increase in violence, putting them in uncomfortable situations and threats. Plenty of efforts have been dedicated in the last few years to overcome this phenomenon to detect hate speech in different structured languages like English, French, Arabic, and others. However, a reduced number of works deal with Arabic dialects like Tunisian, Egyptian, and Gulf, mainly the Algerian ones. To fill in the gap, we propose in this work a complete approach for detecting hate speech on online Algerian messages. Many deep learning architectures have been evaluated on the corpus we created from some Algerian social networks (Facebook, YouTube, and Twitter). This corpus contains more than 13.5K documents in Algerian dialect written in Arabic, labeled as hateful or non-hateful. Promising results are obtained, which show the efficiency of our approach. Hate Speech Algerian dialect Deep Learning DziriBERT FastText ## 1 Introduction Hate speech detection, or detection of offensive messages in social networks, communication forums, and websites, is an exciting and hot research topic. Many hate crimes and attacks in our current life started from social network posts and comments MacAvaney et al. (2019). Studying this phenomenon is imperative for online communities to keep a safe environment for their users. It also has a significant benefit for security authorities and states to ensure the safety of citizens and prevent crimes and attacks. A universally accepted definition of hate speech is currently unavailable Bogdani et al. (2021) because of the variation of cultures, societies, and local languages. Other difficulties include the diversity of national laws, the variety of online communities, and forms of online hate speech. Various definitions are proposed. According to the Encyclopedia of the American Constitution: "Hate speech is speech that attacks a person or group based on attributes such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity." Nockleby (2000). Today, many authors largely use this definition Guellili et al. (2022). Facebook considers hate speech as "a direct attack on people based on protected characteristics--race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disease or disability. We also provide some protections for immigration status." 1. Davidson et al., who define hate speech as "language that is used to express hatred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group", propose one of the most accepted definitions Davidson et al. (2017). Alternatively, there is the one proposed by Fortuna et al.: "Hate speech is a language that attacks or diminishes, that incites violence or hate against groups, based on specific characteristics such as physical appearance, religion, descent, national or ethnic origin, sexual orientation, gender identity or other, and it can occur with different linguistic styles, even in subtle forms or when humor is used." Fortuna and Nunes (2018). 
Footnote 1: Community Standards; Available on:[https://www.facebook.com/communitystandards/objectionable_content](https://www.facebook.com/communitystandards/objectionable_content) The literature review shows that the term _Hate speech_ (which is the most commonly used) has various synonym terms such as abusive speech, offensive language, cyberbullying, or sexism detection Schmidt and Wiegand (2017). Many works have been published in the context of hate speech detection for different standard and structured languages, like French Battistelli et al. (2020), English Alkomah and Ma (2022), Spanish Plaza-del Arco et al. (2021), and Arabic Albadi et al. (2018). These languages are known for their standardization with well-known grammar and structure, which make the language processing well mastered. However, detecting hate speech in dialects, mainly Arabic ones such as Libyan, Egyptian, and Iraqi, etc. is still challenging and complex work Mulki et al. (2019). Even if they are derived from the literal Arabic language, each country's specific vocabulary and semantics are added or defined. In this work, we are interested in detecting hate speech in the Algerian dialect. This latter is one of the complex dialects Mezzoudj et al. (2019) characterized by the variety of its sub-dialects according to each region within the country. Algeria is a country with 58 regions; each one has a specificity in its spoken language with different words and meanings. The same word may have various meanings for each region; for example, '_Flouka_' in the east means '_earrings_.' In the north, it means '_small boat_'. Moreover, new '_odd_' words are continually added to the Algerian vocabulary. The Algerian dialect is known for its morphological and orthographic richness. Facing this situation, treating and understanding the Algerian dialect for hate speech detection is a complex work. The importance of this project for the Algerian context encourages us to work on this problem. To the best of our knowledge, only few works have been proposed for hate speech detection in the Algerian dialect Boucherit and Abainia (2022), Menifi et al. (2022). Some other related topics are treated like sentiment analysis Abdelli et al. (2019), sexism detection Guelli et al. (2021) which may be exploited to analyze the hate speech. In this paper, we proposed a complete end-to-end natural language processing (NLP) approach for hate speech detection in the Algerian dialect. Our approach covers the main steps of an NLP project, including data collection, data annotation, feature extraction, and then model development based on machine and deep learning, model evaluation, and inference. Moreover, we have evaluated various machine and deep learning architectures on our corpus built from diverse social networks (YouTube, Twitter, and Facebook) for several years (between 2017 and 2023). This corpus contains more than 13.5K annotated documents in Algerian dialect written in Arabic characters. Two classes are used for annotation (hateful, non-hateful). This work allows us essentially to provide a wealthy evaluation of many deep learning architectures, an essential value for academic and industrial communities. The obtained results are promising, and continuous tests are performed for further results. 
This paper is structured as follows: Section 2 presents a necessary background, Section 3 reviews the most important related works, Section 4 details our proposed approach and evaluated models, Section 5 discusses the obtained results, and Section 6 concludes the paper. ## 2 Background Hate speech is commonly defined as any communication that disparages a target group of people based on some characteristic such as race, color, ethnicity, gender, sexual orientation, nationality, religion, or other characteristic De Gibert et al. (2018). ### Hate speech According to Al-Hassan and Al-Dossari (2019) hate speech is categorized into five categories: (1) gendered hate speech, including any form of misogyny and sexism; (2) religious hate speech including any religious discrimination, such as Islamic sects, anti-Christian, etc.; (3) racist hate speech including any racial offense or tribalism, and xenophobia; (4) disability including any sort of offense to an individual suffering from health problems; and (5) political hate speech can refer to any abuse and offense against politicians Guellil et al. (2022). ### Algerian Dialect and Arabic Languages Arabic is the official language of 25 countries2. More than 400 million people around the world speak this language. Arabic is also recognized as the 4th most-used language on the Internet Boudad et al. (2018). Arabic is classified into three categories Habash (2022): (1) Classical Arabic (CA), which is the form of the Arabic language used in literary texts. The Quran is considered the highest form of CA text Sharaf and Atwell (2012). (2) Modern Standard Arabic (MSA) is used for writing and formal conversations. (3) Dialectal Arabic is used in daily life communication, informal exchanges, etc. Boudad et al. (2018) like the Algerian dialect, Tunisian dialect, etc. 
The Algerian dialect on social networks can be written either with Arabic characters or with Latin characters (Arabizi), which further complicates its automatic processing. ## 3 Related Works ### Hate speech detection in Algerian dialect Guellili et al. (2020) proposed a system for detecting hateful speech in Arabic political debates. 
The approach was evaluated against a hateful corpus concerning Algerian political debates. It contains 5K YouTube comments in MSA and Algerian dialects, written in both Arabic and Latin characters. Both classical algorithms of classification (Gaussian NB, Logistic Regression, Random Forest, SGD Classifier, and Linear SVC(LSVC)) and deep learning algorithms (CNN, multilayer perceptron (MLP), LSTM, and BiLSTM) are tested. For extracting features, the authors use Word2vec and FastText with their two implementations, namely, Skip Gram and CBOW. Simulation results demonstrate the best performance of LSVC, BiLSTM and MLP. Mohdeb et al. (2022) proposed an approach for analysis and the detection of dialectal Arabic hate speech that targeted African refugees and illegal migrants on the YouTube Algerian space. The corpus contains more than 4K comments annotated as Incitement, Hate, Refusing with non-hateful words, Sympathetic, and Comment. The transfer learning approach has been exploited for classification. The experiments show that the AraBERT monolingual transformer outperforms the mono-dialectal transformer DziriBERT and the cross-lingual transformers mBERT and XLM-R. ### Hate speech detection in other Arabic dialects Various datasets or corpora were published in different dialects, which can be used for different purposes like hate speech, racism, violence, etc. detection. ALBayari and Abdallah (2022) is the first work to propose a corpus built from Instagram comments. This corpus contains 198K comments, written in MSA and three different dialects: Egyptian, Gulf, and Levantine. The comments were annotated as neutral, toxic, and Bullying. Al-Ajlan and Ykhler (2018) and Haidar et al. (2019) datasets are collected from Twitter containing respectively 20K and 34K multi-dialectal Arabic tweets annotated as bullying and non-bullying labels. These tweets were from various dialects (Lebanon, Egypt, and the Gulf area). Moreover, two other datasets were proposed by Mubarak et al. (2017). The first one with 1.1K tweets in different dialects and the second dataset contains 32K inappropriate comments collected from a famous Arabic news site and annotated as obscene, offensive, or clean. Albadi et al. (2018) proposed the religious hate speech detection where a multi-dialectal dataset of 6.6K tweets was introduced. It included an identification of the religious groups targeted by hate speech. Alakrot et al. (2018) also provided a dataset of 16K Egyptian, Iraqi, and Libyan comments collected from YouTube. The comments were annotated as either offensive, inoffensive, or neutral. T-HSAB Haddad et al. (2019) and L-HSAB Mulki et al. (2019) are two publicly available corpora for abusive hate speech detection. The first one is in the Tunisian dialect, combining 6K comments. The second one is in Levantine dialect (Syrian, Lebanese, Palestinian, and Jordanian dialects) containing around 6K tweets. These documents are labeled as Abusive, Hate, or Normal. Mubarak et al. (2020) looked at MSA and four major dialects (Egyptian, Levantine, Maghrebi, and Gulf). It presented a systematic method for building an Arabic offensive language tweet dataset that does not favor specific dialects, topics, or genres with 10K tweets. For tweet labeling, they used the count of positive and negative terms based on a polarity lexicon. FastText and Skip-Gram (AraVec skip-gram, Mazajak skip-gram); and deep contextual embeddings, namely BERTbase-multilingual and AraBERT are used. 
They evaluated different models: SVM, AdaBoost, and Logistic regression. Mulki and Ghanem (2021) introduced the first Arabic Levantine Twitter dataset for Misogynistic language (LeT-Mi) to be a benchmark dataset for automatic detection of online misogyny written in the Arabic and Levantine dialect. The proposed dataset consists of 6.5K tweets annotated either as neutral (misogynistic-free) or as one of seven misogyny categories: discredit, dominance, cursing/danning, sexual harassment, stereotyping and objectification, derailing, and threat of violence. They used BOW + TF-IDF, SOTA, LSTM, BERT, and Majority class as classifiers. Duwairi et al. (2021) investigated the ability of CNN, CNN-LSTM, and BiLSTM-CNN deep learning networks to classify or discover hateful content posted on social media. These deep networks were trained and tested using the ArHS dataset, which consists of around 10K tweets that were annotated to suit hateful speech detection in Arabic. Three types of experiments are reported: first, the binary classification of tweets into Hate or Normal. Ternary classification of tweets into (Hate, Abusive, or Normal), and multi-class classification of tweets into (Misogyny, Racism, Religious Discrimination, Abusive, and Normal). Aldjanabi et al. (2021) have built an offensive and hate speech detection system using a multi-task learning (MTL) model built on top of a pre-trained Arabic language model. The Arabic MTL model was experimented with two different language models to cover MSA and dialect Arabic. They evaluated a new pre-trained model 'MarBERT' to classify both dialect and MSA tweets. They propose a model to explore multi-corpus-based learning using Arabic LMs and MTL to improve the classification performance. Haidar et al. (2017) presented a solution for the issue of cyberbullying in both Arabic and English languages. The proposed solution is based on machine learning algorithms using a dataset from Lebanon, Syria, the Gulf Area, and Egypt. That dataset contained 35K Arabic texts. In this research, Naive Bayes and SVM models were chosen to classify the text. The SVM model achieved greater precision. Abdelali et al. (2016) The authors built a large dataset that consists of offensive Arabic words from different dialects and topics. The tweets were labeled into one of these categories: offensive, vulgar, hate speech, or clean. Since the offensive tweets involve implicit insults, the hate speech category was the tweets that contain racism, religious, and ethnic words. Different classifiers were employed in this study; the SVM model with a radial function kernel was mainly used with lexical features and pre-trained static embedding, while Adaptive Boosting and Logistic regression classifiers were employed when using Mazajak embedding. SVM gave the best precision. _According to this literature analysis, we detect that the topic of hate speech detection in the Algerian dialect is not widely considered, and only few works deal with this problem. Furthermore, a lack of Algerian datasets prepared for hate speech is found. All these findings motivate our proposal._ ## 4 Our Methodology To identify hate speech in messages written in Algerian dialects--whether in Arabic or Latin script-- we outline a comprehensive methodology encompassing (1) data gathering, (2) data annotation, (3) feature extraction, (4) model development, and (5) model evaluation and inference. We'll delve into each of these stages in the subsequent sections. 
### Data Collection Data collection serves as the foundational step in our approach. To effectively train our models, we require a robust dataset in the Algerian Arabic dialect. To achieve this, we sourced our data from three distinct social networks spanning the years 2017 to 2023: **1. YouTube**: Numerous Algerian channels have emerged on YouTube, dedicated to discussing various topics, including politics, religion, social issues, youth concerns, education, and more. We have identified and focused on the most influential ones with a significant following and engagement. We employ the YouTube Data API through a Python script to gather comments from various videos. **2. Twitter**: Even though Twitter is not widely used by Algerian citizens, we targeted it to collect tweets. We used a list of keywords to search for tweets. Many hashtags were launched between 2017 and 2023 about various situations and crises in Algeria, which increased the activity on Twitter, such as "do not buy the oil of Rebrab," "some them all," "mafia," and "no to a fifth presidential term," etc. During this activity, we used these hashtags to collect a large number of tweets. Two techniques have been used for this objective: (1) Using the Twitter API: until February 2023, we were able to use this API for free to gather tweets. (2) Since February 2023, this API has become paid; consequently, we used other solutions based on scraping with the SNScrape library. **3. Facebook**: To gather data from Facebook, we selected public pages talking about and sharing content on politics, Algerian products, pages of some influencers, mobile operators, etc. We collected the posts, comments, and replies from these various pages. To collect data, we used different solutions: (1) between 2017 and 2018, we were able to collect data from any public page using the Graph API; (2) since 2019, we have used either the free FacePager application to collect data from public pages or (3) the Facebook-scraper library for scraping. From these sources, we have collected more than 2 million documents (messages) in different languages: Arabic, French, English, dialect, etc. The next step consists of filtering only documents written in Algerian dialects, either in Arabic or Latin characters. This work was done manually by a group of collaborators. At the end, we obtained around 900K documents. ### Data Annotation (Data Labeling) To annotate data, we followed two approaches: automatic and manual. We decided to annotate only the dialect written in Arabic characters. Our approach consists of building one model that detects hate speech only for the Algerian dialect written in Arabic characters. Then, a transliteration function is developed to transliterate any Algerian document written in Latin characters into Arabic ones, and the built model is then used to classify it. For example, "ma tech-rich zit el ailha" becomes the same phrase written in Arabic script, which means "Don't buy the oil of the gods," expressing the expensiveness of this oil. We used a binary annotation: 0 expresses NON-HATE and represents a document that does not contain any hateful or offensive word; 1 marks a hateful message, i.e., one that contains a hateful word or whose meaning and semantics express hate. **1- Automatic annotation**: For automatic annotation, we prepared a set of hateful keywords in the Algerian dialect discovered from our corpus. These words express the hate and the violence in Algerian speech. This list contains 1.298 words. 
This list of keywords has been used in a Python script to automatically tag a document with 1 if it contains at least one hateful keyword. In the other case, it is considered as 0. The automatically annotated corpus contains 200K Algerian documents written in Arabic characters. **2- Manual annotation**: The automatically annotated documents have been validated manually. A group of annotators checked the annotated corpus and corrected the wrong-labeled documents. The manual step validates 5.644 documents considered for the next step. **3- Dataset Augmentation for Enhanced Balance**: To bolster our dataset and enhance its equilibrium, we employed a strategy involving the incorporation of positively labeled subsets sourced from sentiment analysis datasets. In doing so, we reclassified these subsets as non-hateful, under the reasonable assumption that expressions of positive sentiment inherently exclude hate speech. Specifically, we leveraged the dataset available at [https://www.kaggle.com/datasets/djoughimehdi/algerian-dialect-review-for-sentiment-analysis](https://www.kaggle.com/datasets/djoughimehdi/algerian-dialect-review-for-sentiment-analysis), selecting solely the instances characterized by positive sentiment and relabeling them as 'normal.' However, due to preprocessing constraints, this process yielded a reduced set of just 500 documents. Moreover, we used the corpus shared by Boucherit and Abainia (2022) containing 8.7K documents in the Algerian dialect. This dataset is labeled manually as Offensive (3.227), Abusive (1.334), and Normal (4.188). We changed the labels of this corpus into Hateful (1) for fused Offensive and Abusive ones and Non-Hateful (0) for Normal ones. This corpus has been filtered and treated to keep 7.345 labeled documents. At the end of this step, we obtained an annotated balanced corpus of 13.5K documents in Algerian dialect written in Arabic characters, which will be used later to build classifiers. ### Data Preprocessing Before using any dataset, a cleaning or preprocessing task should be performed. We have defined a set of functions orchestrated in a pipeline, as illustrated in Figure 1. * Remove URL: All URLs in a document are deleted. * Remove stop words: The list of Arabic stop words provided by NLTK is used to clean meaningless words. This list has been enriched by a set of stop words detected in the Algerian dialect. * Replace special punctuation: some punctuation concatenation can represent meaning and have an added value for the model, like: :) Means happy, :( Means upset, etc. This kind of punctuation is transformed into the corresponding emoji. Figure 1: Preprocessing pipeline * Normalize Arabic numbers: Arabic numbers are transformed into the classic digits in order to standardize the writing, like \(\backslash\) = 1, \(\gamma\)=2, etc. * Normalize Arabic: some letters have special symbols, which needs some treatment. 
Like: unifying the different written forms of the same Arabic letter (for instance, the variants of alef) into a single canonical form. ### Model Development To classify the documents, several machine learning and deep learning architectures were evaluated on the built corpus. **3. LSTM & BiLSTM with Dziri FastText:** LSTM and BiLSTM are deep learning models well suited to NLP problems, mainly text classification tasks such as sentiment analysis and hate speech detection. In this paper, we have tested these two models against our corpus. To learn the semantics and context of messages, we used FastText as a word embedding model. In our case, we fine-tuned a Dziri FastText model. This latter was trained on a huge dataset of Algerian messages in Arabic characters based on the Skip-gram model. The obtained model (Dziri FastText) is used to generate an embedding matrix for our built corpus of hate speech. 
The sequential architecture is composed of: (i) Embedding layer which is the input layer representing the embedding matrix; (ii) Dropout layer with a rate of 0.2 to prevent over-fitting; (iii) LSTM or Bidirectional LSTM layer with units=100, dropout=0.4, recurrent_dropout=0.2; (iv) Dropout layer with a rate of 0.2 to prevent over-fitting; (v) Output dense layer, using sigmoid as an activation function. As optimizer we used Adam, and we used binary crossentropy as a loss function, batch_size = 64 and epochs= 100. **4. Dziribert-FT-HEAD:** Pre-trained transformers, like BERT, have become the standard in Natural Language Processing due to their exceptional performance in various tasks and languages. The authors in Abdaoui et al. (2021) collected over one million Algerian tweets and developed DziriBERT, the first Algerian language model, outperforming existing models, especially for the Latin script (Arabizi). This demonstrates that a specialized model trained on a relatively small dataset can outshine models trained on much larger datasets. The authors have made DziriBERT3 publicly available to the community. Footnote 3: [https://huggingface.co/alger-ia/dziribert](https://huggingface.co/alger-ia/dziribert) In this experiments we fine-tuned Dziribert, by incorporating a classification head while keeping the rest of the Dziribert parameters frozen. The classification head consists of three key components: a fully connected layer with 128 units, followed by batch normalization for stability, a dropout layer to mitigate overfitting, and a final fully connected layer that produces a single output value. We apply a sigmoid activation function to ensure the output falls between 0 and 1, which suits our binary classification task. Training employed the binary cross-entropy loss function and the Adam optimizer with a fixed learning rate of 1e-3. Additionally, a learning rate scheduler was employed to dynamically adjust the learning rate during training for improved convergence. **5. DZiriBert with Peft+LoRA:** In our experiment, we fine-tuned the pre-trained model "DZiriBERT" using techniques called Peft (Parameter-Efficient Fine-Tuning) Mangrulkar et al. (2022) + LoRA Hu et al. (2021). These methodologies allowed us to tailor the model specifically for the Algerian dialect, making it sensitive to the unique nuances of this language. The Peft configuration is established using the LoRa technique. Parameters such as the reduction factor, scaling factor, dropout rate, and bias are defined according to the task requirements. _Peft and LoRa Configuration:_ PEFT method has recently emerged as a powerful approach for adapting large-scale pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. Given that fine-tuning such models can be prohibitively costly, PEFT offers a viable alternative by only fine-tuning a small number of (extra) model parameters. This greatly decreases the computational and storage costs without compromising performance. LoRA is a technique specifically designed to make the fine-tuning of large models more efficient and memory-friendly. The essential idea behind LoRA is to represent weight updates using two smaller matrices (referred to as update matrices) through a low-rank decomposition. While the original weight matrix remains frozen, these new matrices are trained to adapt to the new data, keeping the overall number of changes minimal. 
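To make the decomposition concrete, below is a minimal PyTorch-style sketch of a LoRA-adapted linear layer. It is only an illustration of the idea, not the peft library's implementation; the class name is ours, and the default hyperparameters are taken from the configuration described below (r=16, alpha=32, dropout=0.35).

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal sketch of a LoRA update: the pre-trained weight W is frozen,
    and only the low-rank matrices A (r x in) and B (out x r) are trained."""
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32, dropout: float = 0.35):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the original weight matrix
            p.requires_grad = False
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)  # update starts at zero, so the adapted
        self.dropout = nn.Dropout(dropout)  # model initially equals the base model
        self.scaling = alpha / r

    def forward(self, x):
        # W x  +  (alpha / r) * B A x  -- the low-rank weight update
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(self.dropout(x)))
```

In our experiments, the adaptation is applied to the attention projection matrices of DZiriBERT through the Hugging Face peft library, with the configuration detailed below.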
LoRA has many advantages, mainly the: (1) Efficiency: by significantly reducing the number of trainable parameters, LoRA makes fine-tuning more manageable. (2) Portability: Since the original pre-trained weights are kept frozen, multiple lightweight LoRA models can be created for various downstream tasks. (3) Performance: LoRA achieves performance comparable to fully fine-tuned models without adding any inference latency. (4) Versatility: Though typically applied to attention blocks in Transformer models, LoRA's principles can, in theory, be applied to any subset of weight matrices in a neural network. _Model Initialization:_ DZiriBERT is loaded and configured with Peft using the defined parameters. The model is then fine-tuned using the tokenized datasets. We configure our model using the LoraConfig class, which includes the following hyperparameters: - Task Type: We set the task type to Sequence Classification (SEQ_CLS), where the model is trained to map an entire sequence of tokens to a single label. Target Modules: The target modules are set to "query" and "value". - Rank (r): We employ a low-rank approximation with a rank =16 for the LoRA matrices. - Scaling Factor (\(\alpha\)): The LoRA layer utilizes a scaling factor=32, which serves as a regularization term. - Dropout Rate: We introduce a dropout rate of 0.35 in the LoRA matrices to improve generalization. - Bias: The bias term is set to "none," reducing the model complexity. _Training Process:_ The model is trained using custom training arguments, including learning rate, batch sizes, epochs, and evaluation strategies. The training process leverages the Hugging Face Trainer class, providing a streamlined approach to model fine-tuning. We train our model with the following parameters: -learning_rate=1e-3: Specifies the learning rate as 1e-3. Learning rate controls how quickly or slowly a model learns during the training process. -per_device_train_batch_size=16: This indicates that each device used for training (usually a GPU) will handle a batch of 16 samples during each training iteration. - per_device_eval_batch_size=32: Similar to the above, but for evaluation, each device will process batches of 32 samples. - num_train_epochs=5: The training process will go through the entire training dataset 5 times. An epoch is one complete forward and backward pass of all the training examples. - weight_decay=0.01: This is a regularization technique that helps prevent the model from fitting the training data too closely (overfitting). A weight decay of 0.01 will be applied. - evaluation_strategy="epoch": Evaluation will be performed at the end of each epoch. This allows you to check the performance of your model more frequently and make adjustments if needed. - save_strategy="epoch": The model will be saved at the end of each epoch, allowing you to revert to the model's state at the end of any given epoch if necessary. - load_best_model_at_end=True: Once all training and evaluation are completed, the best-performing model will be loaded back into memory. This ensures that you always have access to the best model when your training is complete. **6. Dzarashield:** We built the Dzarabert4 which is a modification of the original Dziribert model that involves pruning the embedding layer, specifically removing tokens that contain non-Arabic characters. This pruning significantly reduces the number of trainable parameters, resulting in faster training times and improved inference speed for the model. 
This approach is aimed at optimizing the model's performance for tasks involving Arabic-based text while minimizing unnecessary complexity and computational overhead. Dzarashield5 is built upon the Dzarabert base model by incorporating a classification head. This classification head consists of sequential architecture including: a linear layer (input: 768, output: 768), followed by a Rectified Linear Unit (ReLU) activation function; a dropout layer (dropout rate: 0.1); and another linear layer (input: 768, output: 2) for binary classification. The model's hyperparameters were determined through experimentation: a learning rate (lr) of 1.3e-05, a batch size of 16, and training for 4 epochs. The Adam optimizer was used with its default parameters for optimization during training. Experimentation resulted in a better score when updating all the weights of the model rather than freezing the base BERT model and updating the classification head. Footnote 4: [https://huggingface.co/Sifal/dzarabert](https://huggingface.co/Sifal/dzarabert) Footnote 5: [https://huggingface.co/Sifal/dzarashield](https://huggingface.co/Sifal/dzarashield) **7. Multilingual E5 Model:** We conducted a fine-tuning process on a pre-existing model, specifically the Multilingual E5 base model Wang et al. (2022). Our primary objective was to ascertain the efficacy of a multilingual model within the context of the Algerian dialect. In adherence to the training methodology, the prefix "query:" was systematically introduced to each data row. This precautionary measure was deemed necessary Wang et al. (2022) to avert potential indications of performance deterioration that might arise in the absence of such preprocessing. The foundation of our investigation rested upon the initialization of the pre-trained base model using the xlm-roberta-base6 architecture, which was trained on a mixture of multilingual datasets. The model is fine-tuned with an additional Dense layer followed by a Dropout Layer. The model is trained with custom hyperparameters for fine-tuning (Warmup Steps: 100; Weight Decay: 0.01 ; Epoch: 5 ; Probability of Dropout: 0.1; Train batch size: 16 ; Evaluation batch size: 64) Footnote 6: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base) **8. sbert-distill-multilingual Fine Tuned:** Similar to the Multilingual E5 Model, we fine-tuned a pre-trained model known as sbert-distil-multilingual model from sentence transformer to investigate how well a multilingual model performs in Algerian Dialect. The pre-trained model is based on a fixed (monolingual) teacher model that produces sentence embeddings with our desired properties in one language. The student model is supposed to mimic the teacher model, i.e., the same English sentence should be mapped to the same vector by the teacher and by the student model. The model is fine-tuned with an additional Dropout layer and a GeLU layer via K-Fold cross validation. The model is trained with custom hyperparameters for fine-tuning (Warmup Steps: 100; Weight Decay: 0.01; Probability of Dropout: 0.1 ; Epoch: 10 ; K-Fold: 4 ; Train batch size: 16 ; Evaluation batch size: 64) **9 AraT5v2-HateDetect** AraT5-base is the result of testing the T5 model (mT5)7 on Arabic. For comparison, three robust Arabic T5-style models are pre-trained and evaluated on ARGEN dataset Nagoudi et al. (2021). 
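Stepping back to the Dzarashield head described in item 6 above, a compact PyTorch sketch of that architecture is given below (the class and variable names are hypothetical, and the pruned Dzarabert encoder is assumed to load through Hugging Face `AutoModel`):

```python
import torch.nn as nn
from transformers import AutoModel

class DzarashieldSketch(nn.Module):
    """Sketch of the classification head described above, stacked on Dzarabert."""
    def __init__(self):
        super().__init__()
        self.encoder = AutoModel.from_pretrained("Sifal/dzarabert")
        self.head = nn.Sequential(
            nn.Linear(768, 768),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(768, 2),   # binary output: hate / not hate
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS]-token representation
        return self.head(cls)

# Per the text, all weights (encoder and head) are updated during training,
# with Adam at lr = 1.3e-5, batch size 16, for 4 epochs.
```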
Surprisingly, despite being trained on approximately 49% less data, these models outperformed mT5 in the majority of ARGEN tasks, achieving several new state-of-the-art results. The AraT5v2-base-1024 model 8 introduces several improvements compared to its predecessor, AraT5-base : - More Data: AraT5v2-base-1024 is trained on a larger and more diverse Arabic dataset. This means it has been exposed to a wider range of Arabic text, enhancing its language understanding capabilities. - Larger Sequence Length: This version increases the maximum sequence length from 512 to 1024. This extended sequence length allows the model to handle longer texts, making it more versatile in various NLP tasks. - Faster Convergence: During the fine-tuning process, AraT5v2-base-1024 converges approximately 10 times faster than the previous version (AraT5-base). This can significantly speed up the training and fine-tuning processes, making it more efficient. - Extra IDs: AraT5v2-base-1024 supports 100 sentinel tokens, also known as unique mask tokens. This allows for more flexibility and customization when using the model for specific tasks. Overall, these enhancements make AraT5v2-base-1024 a more powerful and efficient choice for Arabic natural language processing tasks compared to its predecessor, and it is recommended for use in place of AraT5-base. AraT5v2-HateDetect9 is a fine-tuned model based on AraT5v2-base-1024, specifically tailored for the hate detection task. The fine-tuning process involves conditioning the decoder's labels, which include target input IDs and target attention masks, based on the encoder's source documents, which consist of source input IDs and source attention masks. After experimentation, the following hyperparameters were chosen for training AraT5v2-HateDetect (Training Batch Size: 16; Learning Rate: 3e-5; Number of Training Epochs: 4). These hyperparameters were determined to optimize the model's performance on the hate detection task. The chosen batch size, learning rate, and training epochs collectively contribute to the model's ability to learn and generalize effectively for this specific NLP task. Footnote 8: [https://huggingface.co/UBC-NLP/AraT5v2-base-1024](https://huggingface.co/UBC-NLP/AraT5v2-base-1024) Footnote 9: [https://huggingface.co/Sifal/AraT5v2-HateDetect](https://huggingface.co/Sifal/AraT5v2-HateDetect) Footnote 10: [https://pypi.org/project/lang-trans/](https://pypi.org/project/lang-trans/) ### Evaluation and Inference To evaluate the different models, we used four main metrics: Accuracy, Precision, F1-Score, and Recall. To classify a message in case where it is written in Arabizi (a specific dialect using Latin characters), a transliteration process was implemented to convert the text into Arabic characters based on lang-trans11 library. Footnote 11: [https://huggingface.co/Sifal/AraT5v2-HateDetect](https://huggingface.co/Sifal/AraT5v2-HateDetect) ## 5 Experiments and Results To train and evaluate our models, we used TensorFlow or Pytorch deep learning frameworks. We used Google Colab and Kaggle GPUs to accelerate the experiments. In table 1, we will provide the detailed results that we obtained. _Linear Support Vector Classifier (LinearSVC)_: The LinearSVC model offered a competitive accuracy but struggled with the recall for the hate speech class. The precision and recall trade-off indicates possible challenges in differentiating between the subtle nuances of hate and non-hate speech in the dialect. 
The model exhibited high precision and recall for class 0 but showed room for improvement for class 1, particularly in terms of recall. This suggests that while the model is quite good at identifying class 0, it could be improved for identifying class 1. \begin{table} \begin{tabular}{l l l l l} \hline \hline Model Name & Accuracy & Precision & Recall & F1 Score \\ \hline LinearSVC & 0.83 & 0.84(Class0); 0.72(Class1) & 0.96(Class0); 0.36 (Class1) & 0.9(Class0); 0.48 (Class1) \\ gzip + KNN & 0.67 & 0.63 & 0.56 & 0.60 \\ Dziribert-FT-HEAD & 0.83 & 0.81 & 0.81 & 0.81 \\ LSTM & 0.70 & 0.61 & 0.75 & 0.67 \\ Bidirect LSTM & 0.68 & 0.59 & 0.81 & 0.68 \\ DZiriBERT FT PEFT+LoRA & 0.86 & 0.83 & 0.85 & 0.84 \\ Multilingual-E5-base FT & 0.84 & 0.8 & 0.81 & 0.80 \\ sbert-distill-multilingual FT & 0.80 & 0.74 & 0.81 & 0.77 \\ Dzarashield & 0.87 & 0.87 & 0.87 & 0.87 \\ AraT5v2-HateDetect & 0.84 & 0.83 & 0.84 & 0.83 \\ \hline \hline \end{tabular} \end{table} Table 1: The results of each model (FT: Fine Tuned _gzip + KNN_: One of the worst models in terms of capabilities, although it is diverging from the baseline it is unclear whether these results will hold in out of distribution cases, especially when we know that there is no underlying process in the model that captures semantic representations of the documents. _Dziribert-FT-HEAD:_ the model exhibits a noteworthy precision score, signifying its accuracy in correctly classifying instances as hate speech or not. However, the relatively lower recall score suggests that it missed identifying some hate speech instances. This discrepancy might be attributed to the model's lack of specialized handling for the nuances of the Algerian dialect, potentially causing it to overlook certain hate speech patterns unique to that context. Despite this, the model's overall accuracy remains commendably high, indicating its robust performance in making accurate predictions. Additionally, the balanced precision and recall values underline its ability to strike a reasonable trade-off between minimizing false positives and false negatives, a crucial aspect in hate speech detection. The F1 Score, being the harmonic mean of precision and recall, further validates the model's capacity to effectively identify positive samples while avoiding misclassification of negative ones. The model consistently demonstrates strong performance across multiple evaluation metrics, especially in terms of accuracy and F1 score. These results reaffirm the practicality and effectiveness of employing deep learning techniques for the challenging task of hate speech detection. _LSTM and BiLSTM with FastText-DZ:_ Unfortunately, the results of this model are among the worst ones. The literature shows the strength of LSTM and BiLSTM in this kind of NLP project, but this is not the case for this project. The low precision is due to the incapability of the model to classify correctly the hate class. FastText is a good word embedding model that captures the context and semantics of a document. However, in this case, it does not perform well because of the fine-tuning done where we took an Arabic FastText and fine-tune it on Algerian dataset written in Arabic characters. _DZirBiBert with Pef+LoRA:_ We utilize both PEFT and LoRA to fine-tune DZiriBERT, a model specifically adapted to the Algerian dialect. By employing these techniques, we were able to create a highly effective and efficient model for hate speech detection in the Algerian dialect while keeping computational costs at a minimum. 
_Multilingual-E5-base Fine Tuned and sbert-distill-multilingual Fine Tuned_: The outcomes obtained from these models are noteworthy; nonetheless, their performance pales in comparison with the parameter-efficient fine-tuning of the DZiriBERT model. _DzaraShield_: The results returned by this model are satisfying considering the relatively small quantity of data it was fine-tuned on. This further demonstrates that pretraining plays the major role in downstream tasks such as classification, especially since the base model is an encoder-only architecture that captures contextual information from the input data, making it useful for a wide range of text classification tasks. _AraT5v2-HateDetect_: The results are slightly inferior to Dzarashield. One possible explanation is the increased complexity of the architecture when compared to the Dzarabert base model. Consequently, fine-tuning becomes a more intricate task due to the larger hyperparameter search space and the limited resources in terms of computing power and data availability. As a result, it is reasonable to expect that these models would perform similarly in real-world scenarios. ### Results Discussion The DzaraShield model has demonstrated remarkable capability in detecting hate speech in the Algerian dialect. Its outstanding precision score highlights its reliability in accurately identifying instances of hate speech. Additionally, it maintains a balanced precision and recall, indicating that it does not excessively sacrifice precision to achieve its higher recall. Such a balanced model holds considerable advantages, particularly when both false positives and false negatives carry significant consequences. For the other models, mainly LSTM or BiLSTM with Dziri FastText, more fine-tuning should be performed to enhance the results. Moreover, future work may include hyperparameter tuning, class balancing techniques, or the integration of more complex models to improve performance across both classes. The disparity between precision and recall in certain models warrants further investigation. Delving deeper into this issue could yield valuable insights into specific aspects of the dialect that might be contributing to this imbalance. Future experiments should prioritize understanding and addressing these discrepancies, with the goal of enhancing recall without compromising precision. The results from various experimental models underscore the intricacies involved in hate speech detection in the Algerian dialect. While traditional machine learning and deep learning approaches provided some valuable insights, they fell short in capturing the dialect's nuanced characteristics. In contrast, the DzaraShield model emerged as the most successful approach, emphasizing the pivotal role of encoder-only models in projects of this nature. These findings offer valuable insights for future work in this area and underscore the potential of leveraging domain-specific knowledge, advanced fine-tuning techniques, and sophisticated architectures for the effective detection of hate speech in under-studied and complex dialects such as Algerian. ## 6 Conclusion The importance of hate speech detection on social networks has encouraged many researchers to build solutions (corpora and classifiers) to detect suspect messages. The literature review shows that most works are interested in text in structured languages like English, French, Arabic, etc. 
However, few works deal with dialects, mainly the Algerian one, which is known for its complexity and variety. To fill in this gap, we propose in this paper a complete NLP approach to detect hate speech in the Algerian dialect. We built an annotated corpus of more than 13.5K documents, which is used to evaluate various deep learning architectures. The obtained results are very promising, with DzaraShield being the most accurate model. Looking ahead, there is significant potential to enhance inference speed, particularly for the Dziribert-based and multilingual models. While this project primarily focused on Arabic characters, our next step will be to address the dialect when written in Latin characters. Embracing both Arabic and Latin characters will more accurately capture the nuances of the written Algerian dialect. Finally, we plan to expand our corpus size and explore alternative deep-learning architectures. ## 7 Acknowledgments We would like to thank every person who has contributed to this project: Micha Freidin, Viktor Ivanenko, Piyush Aaryan, Yassine Elboustani, Tasneem Elyamany, Cephars Bonacci, Nolan Wang and Lydia Khelifa Chibout. We would also like to thank the Omdena organization for giving us this valuable opportunity.
With the rise of hate speech on social networks, it spreads in various forms such as offensive language, cyberbullying, and violence. As a result, people are exposed to violent situations and threats and placed in distressing circumstances. In recent years, considerable effort has been devoted to addressing this phenomenon, in particular to detecting hate speech in structured languages such as English, French, and Arabic. However, research addressing dialects, including major ones such as Algerian, Tunisian, and Egyptian, remains limited, and more work is needed. This paper proposes an approach dedicated to detecting hate speech in online messages written in the Algerian dialect. A corpus built from Algerian social networks such as Facebook, YouTube, and Twitter is used to evaluate various deep learning architectures.
2309.05664
Superconductivity-induced improper orders
The study of improper phases in the context of multiferroic materials has a long history, but superconductivity has yet to be connected to the network of ferroic orders. In this work, we highlight an overlooked mechanism that couples superconducting order parameters to odd-parity orders in the charge or spin sectors such that the latter emerge as improper orders. For that, we explore a novel perspective of nonsymmorphic symmetries based on extended symmetry groups in real space. We highlight how nonsymmorphic symmetries can generate rather nonintuitive couplings between order parameters. In particular, we find that a bilinear in the superconducting order parameter can couple linearly to odd-parity orders in centrosymmetric systems. Our findings can account for the unusual phenomenology of CeRh$_2$As$_2$, a recently discovered heavy fermion superconductor, and open the door for exploring nonsymmorphic symmetries in the broader context of improper orders with potential applications to functional materials.
Andras Szabo, Aline Ramires
2023-09-11T17:59:16
http://arxiv.org/abs/2309.05664v1
# Superconductivity-induced improper orders ###### Abstract The study of improper phases in the context of multiferroic materials has a long history, but superconductivity has yet to be connected to the network of ferroic orders. In this work, we highlight an overlooked mechanism that couples superconducting order parameters to odd-parity orders in the charge or spin sectors such that the latter emerge as improper orders. For that, we explore a novel perspective of nonsymmorphic symmetries based on extended symmetry groups in real space. We highlight how nonsymmorphic symmetries can generate rather nonintuitive couplings between order parameters. In particular, we find that a bilinear in the superconducting order parameter can couple linearly to odd-parity orders in centrosymmetric systems. Our findings can account for the unusual phenomenology of CeRh\({}_{2}\)As\({}_{2}\), a recently discovered heavy fermion superconductor, and open the door for exploring nonsymmorphic symmetries in the broader context of improper orders with potential applications to functional materials. The Landau theory of phase transitions has been a leading framework for understanding ordered phases of matter. It has been successfully applied to describe magnetic and electric ordering and their non-trivial interplay in multiferroic systems, which are central in the pursuit of exotic functional materials [1; 2]. Within multiferroics, improper ferroelectrics develop electric polarization controlled by the development of a leading distortive or magnetic order parameter [3; 4; 5; 6]. More generally, improper phases are associated with order parameters that develop as a secondary effect of the development of a leading order. The interplay of superconductivity with magnetic and charge orders is empirically known in multiple families of materials and extensively discussed in the framework of intertwined [7] or vestigial orders [8]. Nevertheless, their relation in the context of improper orders has not yet been fully investigated, as the complexity of superconducting order parameters has only recently started to be acknowledged. In this work, we explore the untapped realm of superconducting-induced improper orders, highlighting the role of nonsymmorphic symmetry for the development of unexpected couplings between order parameters, which can potentially explain the unusual phenomenology recently reported in a material in the family of heavy fermion systems. The development of ordered phases of matter can be understood on the phenomenological level based on the notion of symmetry breaking associated with the onset of an order parameter. In crystalline solids, the primary symmetries involved are spatial, generally accompanied by time-reversal symmetry. Spatial symmetries include translations, rotations, and reflections, as well as combinations of these. Particularly notable are nonsymmorphic systems, as these feature symmetry transformations that are necessarily accompanied by a fractional primitive lattice vector (PLV) translation. These systems have been extensively explored in the context of topological band structures [9; 10; 11; 12; 13], and such symmetries are key to protecting band degeneracies of Bloch's states with opposite parity at specific points in momentum space [14; 15]. However, the effects of nonsymmorphic symmetries in the context of ordered states of matter are less explored, and most efforts have been focused on their topological classification [16; 17; 18]. 
Similarly, much of the research in the context of superconductivity relies on the analysis of point group symmetries, the crystalline symmetries that are left once one factors out translations, a trivial procedure in symmorphic systems. Nonsymmorphic symmetries complicate the process of factoring out translations, making the group theoretical analysis and the classification of superconducting states more cumbersome, particularly if one works in a momentum space representation [19; 15]. Here, we take a complementary view of nonsymmorphic symmetries in the context of ordered phases of matter. We classify the order parameters in the superconducting, charge, and spin sectors based on their textures directly in real space [20], taking explicit account of the nonsymmorphic nature of the crystal [21]. From this analysis, we find new types of coupling between superconducting order parameters and orders in the spin or charge sectors, which can induce odd-parity improper orders by the development of a primary order in the superconducting channel, see Fig. 1. Our results could lead to new functionalities and technological applications by highlighting an unapped mechanism for the development of improper orders and bringing new connectivities between superconductivity and other functional phases of matter. The recently discovered heavy-fermion compound CeRh\({}_{2}\)As\({}_{2}\), depicted in Fig. 2 (a), realizes an exotic phase diagram with two superconducting phases as a function of a \(c\)-axis magnetic field [22]. While the pairing symmetry of the two phases is to date unclear, nuclear magnetic resonance (NMR) [23] and nuclear quadrupole resonance (NQR) [24] measurements on the As sites reveal antiferromagnetic order coexisting with the low-, but not with the high-field superconducting phase. Most intriguingly, specific heat and thermal expansion measurements indicate a single phase transition [25], suggesting that the onset of superconductivity coincides with that of magnetism in the entire low-field phase, in a range of magnetic fields spanning 4 T. Both NMR and NQR measurements observed site-selective broadening of the signal within the magnetic phase: of the two inequivalent As sites in the unit cell, only one of them experiences a change in the local magnetic field with the onset of magnetic ordering. These observations constrain the possible textures for magnetic moments localized at the Ce sites. Within homogeneous phases, the only consistent Figure 1: Order parameter coupling within Landau theory. Assuming a primary superconducting order parameter \(\mathbf{\Psi}\) (potentially multicomponent), and a secondary order \(\phi\), this figure highlights the distinct phenomenology that emerges from two different types of coupling between them. Left: For a biquadratic coupling, \(f_{\lambda}=\lambda|\phi|^{2}|\mathbf{\Psi}|^{2}\), there are two transition temperatures and both order parameters onset with the characteristic \(\sqrt{T}\) temperature dependence. Right: for linear-quadratic coupling, \(f_{\lambda}\sim\lambda\phi(\Psi_{1}^{*}\Psi_{2}\pm\Psi_{1}\Psi_{2}^{*})\), there is a single transition temperature and the subleading order parameter \(\phi\) onsets with a slower, linear temperaure dependence. Here \(\lambda\) is a generic coupling constant. More details are given in the SI. order is a layered antiferromagnet, with magnetic moments along the \(c\)-axis, changing sign across the sublattices [see Figs. 2 (b) and 3 (b)]. 
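To make the two behaviours contrasted in Fig. 1 explicit, consider a minimal free-energy sketch with schematic coefficients (ours, not the full functional given in the SI), \[f=a_{0}(T-T_{c})|\mathbf{\Psi}|^{2}+b|\mathbf{\Psi}|^{4}+\frac{a_{\phi}}{2}\phi^{2}+i\lambda\,\phi\left(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2}\right),\] with \(b,a_{\phi}>0\) so that \(\phi\) does not order on its own. Minimizing with respect to \(\phi\) gives \(\phi=-(i\lambda/a_{\phi})(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2})\). For a chiral combination \(\Psi_{2}=\pm i\Psi_{1}\) the bilinear is real and proportional to \(|\mathbf{\Psi}|^{2}\), hence \(\phi\propto|\mathbf{\Psi}|^{2}\propto(T_{c}-T)\), while \(|\mathbf{\Psi}|\propto\sqrt{T_{c}-T}\): the secondary order onsets linearly at the same \(T_{c}\), as sketched in the right panel of Fig. 1.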
Two further families of solutions with in-plane magnetic moments can be obtained by doubling the unit cell [see Figs. 2 (c) and 3 (c)]. Below, we discuss how these constraints on the magnetic order impose strong restrictions on the nature of the superconducting state in this system. Within the Landau formalism, the coexistence of superconducting and magnetic orders implies a coupling between the corresponding order parameters consistent with all operative symmetries. The most straightforward gauge-invariant coupling preserving time-reversal symmetry is quadratic in both the superconducting (\(\Psi\)) and magnetic (\(M\)) order parameters, e.g. \(\sim M^{2}|\Psi|^{2}\). This coupling, however, does not result in the same critical temperatures for the two phases. From the phenomenology of improper orders, a linear-quadratic coupling between \(M\) and \(\Psi\) with a dominant superconducting order would lead to the onset of magnetism at the superconducting critical temperature (see Fig. 1 and discussion in the SI). For this type of coupling, we need to recourse to a multi-component superconducting order parameter, \(\mathbf{\Psi}=(\Psi_{1},\Psi_{2})\), which would allow for a term \(\sim iM(\Psi_{1}\Psi_{2}^{*}-\Psi_{1}^{*}\Psi_{2})\) in the free energy. In CeRh\({}_{2}\)As\({}_{2}\), the homogeneous magnetic order is odd parity, as the magnetic moments are opposite for sites related by inversion symmetry [see Fig. 3 (b) and (c)]. Due to the global inversion symmetry, order parameters classified from the perspective of point group symmetries are generally labeled as either even or odd, and superconducting order parameter bilinears are invariably even parity. This simplified view naively makes a linear-quadratic coupling between magnetic and superconducting order parameters impossible in CeRh\({}_{2}\)As\({}_{2}\), as it would require \(\Psi_{1}\) and \(\Psi_{2}\) to be of opposite parity. Below, we show that in the absence of accidental degeneracies, modulated superconducting orders in nonsymmorphic systems can sustain multi-component order parameters with opposite parity, allowing for this unusual coupling and for the emergence of magnetism as an improper order triggered by a primary superconducting order. More concretely, taking as a working example the space group \(P4/nmm\) (\(\#129\)), we now discuss how unusual irreducible representations (irreps) with components of opposite parity emerge once we enlarge the symmetry group by extending the unit cell to account for modulated order parameters. The 16 symmetry operations in \(P4/nmm\) are generated by \(\bar{C}_{4z}=\{C_{4z}|\mathbf{t}_{\mathbf{x}}/2\}\), a rotation by \(\pi/2\) along the z-axis followed by \(\mathbf{t}_{\mathbf{x}}/2\), half a PLV translation along Figure 2: (a) Crystal structure of CeRh\({}_{2}\)As\({}_{2}\), with centrosymmetric nonsymmorphic space group \(P4/nmm\) (\(\#\) 129). The Ce atoms span a body-centered tetragonal lattice with the main fourfold rotation axis going through these sites. The crystal field due to the Rh and As atoms (dark and light grey spheres, respectively) breaks inversion symmetry at the Ce sites, generating a Ce sublattice structure (blue and red spheres), characterizing this system as locally noncentrosymmetric. Notably, the inversion center is located at the midpoint between Ce sublattices, (b) Original unit cell (green) projected to the \(x-y\) plane containing two Ce atoms (1 and 2) and (c) enlarged unit cell (magenta) containing four Ce atoms (1 through 4). 
Dashed line represents the diagonal mirror plane, red cross is the global inversion centre, coinciding with our chosen origin. Translation by one lattice constant (indicated by \(\mathbf{t}_{\mathbf{y}}\)) is a trivial operation in the original unit cell scenario, as it takes a sublattice into itself. In contrast, in the enlarged unit cell scenario a translation by one lattice constant constitutes a new operation. the x-axis; \(\sigma_{d}=\{\sigma_{d}|\mathbf{0}\}\), a mirror reflection along the diagonal plane (\(x=y\)); and inversion \(i=\{i|\mathbf{0}\}\) (the complete list of group operations in the standard Seitz notation is given in the SI). Given the nonsymmorphic nature of the space group, these 16 symmetry operations, when composed, do not close into themselves. If we are interested in homogeneous phases, we can redefine the composition of these operations modulo integer lattice vector translations such that they form a group. This procedure corresponds to factoring out translations to determine the little group at the \(\Gamma\) point in momentum space [15], which is isomorphic to \(D_{4h}\). For this group, the symmetry elements are organized in 10 conjugacy classes, leading to 10 irreps which are labeled as even or odd parity (see details in the SI). If we allow the system to develop modulations encompassing multiple unit cells in a commensurate manner, we need to consider the corresponding wave-vector dictating the modulation. If the wave vector corresponds to a point in momentum space at the edge of the BZ, we expect unusual degeneracies in nonsymmorphic systems, as is extensively discussed in the context of electronic band structures in momentum space. Choosing the simplest scenario, here we double the unit cell and introduce the notion of an _extended group_ by adding to the original group new symmetry operations, which take one primitive unit cell into another in the doubled unit cell [20]; see Fig. 2 (b) and (c). The extended symmetry group is formed by the original 16 operations plus the composition of these with a PLV translation, here chosen to be \(E^{\prime}=\{E|\mathbf{t}_{\mathbf{y}}\}\), which is the extension of the identity operation \(E=\{E|\mathbf{0}\}\) (extended symmetry operations are denoted with a prime). Extending the group of symmetries to 32 elements leads to novel irreps (see SI for details). If the space group is symmorphic, the total number of irreps is simply doubled, and the new irreps behave as the original ones up to an extra minus sign under operations including a PLV translation. In contrast, if the group is nonsymmorphic, the irreps associated with the extended group can be fundamentally distinct from the original irreps. To understand how nontrivial irreps are generated, we rely on two elementary results from group theory: (i) the number of irreps is equal to the number of conjugacy classes; (ii) the dimensions of irreps should follow the identity \(\sum_{i}|d_{i}|^{2}=|\mathbf{G}|\), where \(d_{i}\) is the dimension of the \(i\)-th irreducible representation and \(|\mathbf{G}|\) is the order of the group (the number of elements in the group). The order of the extended group in our example is twice the order of the original group. For symmorphic systems, the number of irreps of a given dimension is also doubled, trivially satisfying points (i) and (ii). For nonsymmorphic systems, the conjugacy classes in the extended group are not twice as many as in the original group. 
This happens because some of the original and extended operators necessarily coalesce into a single conjugacy class. This point can be understood by considering the conjugation of a generic spatial symmetry operation \(O_{B}=\{R_{B}|\mathbf{t}_{B}\}\) by another generic operation \(O_{A}=\{R_{A}|\mathbf{t}_{A}\}\): \[O_{A}^{-1}.O_{B}.O_{A}=O_{C}=\{R_{A}^{-1}R_{B}R_{A}|R_{A}^{-1}(R_{B}\mathbf{t }_{A}-\mathbf{t}_{A}+\mathbf{t}_{B})\}, \tag{1}\] where the dot denotes the composition of operations, and we used \(O_{A}^{-1}=\{R_{A}^{-1}|-R_{A}^{-1}\mathbf{t}_{A}\}\). The presence of inversion operation in P4/nmm allows us to choose \(O_{A}=\{i|\mathbf{0}\}\), such that the RHS of the equation above reads \(\{R_{B}|-\mathbf{t}_{B}\}\). In the original symmetry group, associated with the primitive unit cell, if \(O_{B}\) is nonsymmorphic with \(\mathbf{t}_{B}\) corresponding to half a PLV translation, we can redefine \(-\mathbf{t}_{B}=\mathbf{t}_{B}+PLV\equiv\mathbf{t}_{B}\). As a consequence, \(O_{C}=O_{B}\) and we do not get any information about conjugacy classes. On the other hand, in the extended symmetry group \(-\mathbf{t}_{B}=\mathbf{t}_{B}+PLV\not\equiv\mathbf{t}_{B}\), such that \(O_{C}=O_{B}^{\prime}+O_{B}\). The relation \(O_{A}^{-1}.O_{B}.O_{A}=O_{B}^{\prime}\) indicates that \(O_{B}\) and \(O_{B}^{\prime}\) belong to the same conjugacy class. class in the extended group. We conclude that all conjugacy classes containing nonsymmorphic operations in the original symmetry group are enlarged in the extended symmetry group. This coalescence of conjugacy classes tells us that we have less than twice as many irreps in the extended group, and, consequently, the new (double-valued) irreps are generally not one-dimensional. The complete analysis is summarized in Table 1 and a more detailed discussion is provided in the SI. In this example, there are 14 conjugacy classes, therefore 14 irreps. There are 4 new two-dimensional irreps in the extended group (labelled as \(E_{im}\), \(i=1,...,4\), with \(m\) standing for mixed parity), with the unusual property of having zero character associated with inversion symmetry. To better understand this last statement, we can conjugate inversion with a nonsymmorphic operation. Choosing \(O_{B}=\{i|\mathbf{0}\}\), we find the RHS of Eq. 1 reads \(\{i|-2R_{A}^{-1}\mathbf{t}_{A}\}\). If \(O_{A}\) is nonsymmorphic and \(\mathbf{t}_{A}\) is half a PLV, \(-2R_{A}^{-1}\mathbf{t}_{A}\) is a PLV, and therefore nonsymmorphic group elements inevitably connect inversion \(i=\{i|\mathbf{0}\}\) with \(i^{\prime}=\{i|\mathbf{t}_{\mathbf{y}}\}\), leading to the enlargement of the associated conjugacy class under the group extension. This result has strong implications. The fact that the conjugacy class associated with the inversion operation is enlarged tells us that the character associated with the new irreps is zero [15]. As a consequence, the new two-dimensional irreps are associated with two-component basis functions with components of opposite parity. This fact is directly associated with known results in electronic band theory: our extended group is isomorphic to the abstract group \(G_{32}^{2}\), the little group at the \(M\) point [15]. This remarkable result challenges our intuition about inversion symmetry. 
In centrosymmetric systems with nonsymmorphic symmetries inversion symmetry can be "effectively broken" if the system develops textures with \begin{table} \begin{tabular}{c|c c c c|c c c c c c c c c} \(G_{16}^{9}\) & \(E\) & \(\tilde{C}_{2z}\) & \(2\bar{C}_{4z}\) & \(2\bar{\sigma}_{x}\) & \(2\sigma_{d}\) & \(i\) & \(\tilde{\sigma}_{h}\) & \(2\bar{S}_{4}\) & \(2\bar{C}_{2x}\) & \(2C_{2\bar{d}}\) \\ & \(\swarrow\searrow\swarrow\swarrow\swarrow\) & \(\swarrow\searrow\) & \(\swarrow\searrow\) & \(\swarrow\searrow\) & \(\swarrow\) & \(\swarrow\) & \(\swarrow\) & \(\swarrow\) & \(\swarrow\) & \(\swarrow\) \\ \(G_{32}^{2}\) & \(E\) & \(E^{\prime}\) & \(\tilde{C}_{2z}\) & \(\tilde{C}_{2z}^{\prime}\) & \(4C_{4z}\) & \(4\bar{\sigma}_{x}\) & \(2\sigma_{d}\) & \(2\sigma_{d}^{\prime}\) & \(2i\) & \(2\sigma_{h}\) & \(4\bar{S}_{4}\) & \(4\bar{C}_{2x}\) & \(2C_{2\bar{d}}\) & \(2C_{2\bar{d}}^{\prime}\) \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \(A_{2g}\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 & -1 \\ \(B_{1g}\) & 1 & 1 & 1 & 1 & -1 & 1 & -1 & -1 & 1 & 1 & -1 & 1 & -1 & -1 \\ \(B_{2g}\) & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 \\ \(E_{g}\) & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 & 0 \\ \(A_{1u}\) & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 & 1 \\ \(A_{2u}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 \\ \(B_{1u}\) & 1 & 1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 \\ \(B_{2u}\) & 1 & 1 & 1 & -1 & 1 & -1 & -1 & -1 & -1 & -1 & 1 & 1 & 1 \\ \(E_{u}\) & 2 & 2 & -2 & -2 & 0 & 0 & 0 & 0 & -2 & 2 & 0 & 0 & 0 & 0 \\ \hline \(\bar{E}_{1m}\) & 2 & -2 & 2 & -2 & 0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(E_{2m}\) & 2 & -2 & 2 & -2 & 0 & 0 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(E_{3m}\) & 2 & -2 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 \\ \(E_{4m}\) & 2 & -2 & -2 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2 & 2 \\ \end{tabular} \end{table} Table 1: Character table for P4/mmm modulo two integer lattice translations. The first line gives the 10 conjugacy classes for P4/mmm modulo integer lattice translations (isomorphic to \(D_{4h}\) and the abstract group \(G_{16}^{9}\)[15]). The second line encodes the 14 conjugacy classes in P4/mmm modulo two integer lattice translations (isomorphic to abstract group \(G_{32}^{2}\)[15]). The symmetry operations are labeled according to their associated point group operation. If the operation is accompanied by a half PLV translation, it is marked with a bar, while if it is accompanied by two orthogonal half PLV translations, it is marked with a tilde. Operations without either a bar or a tilde are pure point operations. Operations marked with a prime belong to the set of operations extended by a PLV translation. The arrows indicate if the original conjugacy class splits (\(\swarrow\searrow\)) or doubles (\(\suit\)) in the extended group. The first 10 irreps have well-defined parity, and their labels follow that of the \(D_{4h}\) point group. The 4 last irreps are new to the extended group and have mixed parity. The subscripts correspond to even (g), odd (u), or mixed (m) parity. In gray color, we highlight the columns that are simply doubled for the initial 10 irreps given the splitting on the conjugacy classes. In yellow, we highlight the columns with zero characters due to the coalescence of the original and extended operations in the same conjugacy class. wave-vectors that lie at the edge of the BZ. 
Going back to the discussion on CeRh\({}_{2}\)As\({}_{2}\), using the original group structure modulo PLV (the little-group at the \(\Gamma\) point), we can classify the order parameters in the charge, spin, and superconducting sectors in terms of irreps, which are either even or odd under parity (details in SI). In contrast, if we classify the order parameters according to extended group (see details in SI), we find orders associated with irreps labeled as \(E_{im}\) having two components that transform differently under inversion. Curiously, the magnetic order with in-plane moments discussed above belongs to the irrep \(E_{3m}\), but it cannot couple to any superconducting order through the desired linear-quadratic term (see discussion in SI). On the other hand, the magnetic order with moments along the \(c\)-axis is associated to the irrep \(A_{1u}\), which is odd parity, and can couple linearly to quadratic terms in the superconducting order parameter if the latter is associated with irreps \(E_{3m}\) or \(E_{4m}\). The latter type of coupling has strong implications for the phenomenology. Magnetism develops as an improper order, and its temperature dependence does not follow the standard \(\propto\sqrt{T}\) behaviour expected for a leading order parameter within Landau theory, as depicted in Fig. 1. Improper orders onset with a weaker temperature dependence \(\propto T\), which might make their experimental observation more difficult. A slow onset for the improper magnetic order in CeRh\({}_{2}\)As\({}_{2}\) is consistent with the apparent magnetic critical temperature being lower than the superconducting critical temperature [23; 24]. These results should trigger a more in-depth investigation to determine the temperature at which magnetism emerges. If magnetism onsets exactly at \(T_{c}\), our analysis suggests that we have strong constraints to the nature of both magnetism and superconductivity. It necessarily indicates that the magnetic moments are aligned along the z-axis and that superconductivity is of a very unusual type: a chiral PDW with two components of opposite parity. Classifying order parameters in real space taking into account nonsymmphoricity by dealing with extended symmetry groups allows us to systematically account for modulated unconventional orders and novel types of couplings Figure 3: Representative orders. Inversion-odd \(P_{z}\) polar (a) and \(M_{z}\) layer antiferromagnet (b) orders, adhering to the original unit cell (green). (c) Two components of \(E_{3m}\) in-plane magnetic order (d) two components of \(E_{3m}\) superconducting order in the enlarged unit cell (magenta). The pairing wave function is intrasublattice but antisymmetric between sites, resulting in a modulation in the projected x-y plane (\(+/-\), marked by green and orange respectively). In (c) and (d) the components are related to each other via 90 degree rotation, but one component is odd-parity, while the other is even. between them. Nonsymmorphic symmetries are related to intrinsically complex crystalline structures. Nonsymmorphic crystals necessarily have sublattice structures that add to the already complex set of charge, orbital, and spin degrees of freedom necessary to faithfully describe the electronic structure of most materials. 
The richness in the number of internal degrees of freedom is nevertheless still strongly constrained by crystalline symmetries, and the investigation of the development of improper orders can lead to very refined information about the nature of order parameters developing in complex materials. We expect that these findings can be harvested for a better characterization of ordered phases of matter and in future studies of improper orders of functional materials. A.S. is grateful for financial support from the Swiss National Science Foundation (SNSF) through Division II (No. 184739). A.R. also acknowledges financial support from the Swiss National Science Foundation (SNSF) through an Ambizione Grant No. 186043.
The study of improper phases in the context of multiferroic materials has a long history, but superconductivity has not yet been connected to the network of ferroic orders. In this work, we highlight an overlooked mechanism that couples superconducting order parameters to odd-parity orders. To that end, we explore a new perspective on nonsymmorphic symmetries in real space and show that they can generate rather non-intuitive couplings between order parameters; in particular, a bilinear in the superconducting order parameter can couple linearly to odd-parity orders in centrosymmetric systems. Our findings can account for the unusual phenomenology of the recently discovered heavy-fermion superconductor CeRh$_2$As$_2$ and open the door to exploring nonsymmorphic symmetries in the broader context of improper orders.
2303.18108
A Ramsey apparatus for proton spins in flowing water
We present an apparatus that applies Ramsey's method of separated oscillatory fields to proton spins in water molecules. The setup consists of a water circuit, a spin polarizer, a magnetically shielded interaction region with various radio frequency elements, and a nuclear magnetic resonance system to measure the spin polarization. We show that this apparatus can be used for Rabi resonance measurements and to investigate magnetic and pseudomagnetic field effects in Ramsey-type precision measurements with a sensitivity below 100 pT.
Ivo Schulthess, Anastasio Fratangelo, Patrick Hautle, Philipp Heil, Gjon Markaj, Marc Persoz, Ciro Pistillo, Jacob Thorne, Florian M. Piegsa
2023-03-31T14:56:02
http://arxiv.org/abs/2303.18108v2
# A Ramsey apparatus for proton spins in flowing water ###### Abstract We present an apparatus that applies Ramsey's method of separated oscillatory fields to proton spins in water molecules. The setup consists of a water circuit, a spin polarizer, a magnetically shielded interaction region with various radio frequency elements, and a nuclear magnetic resonance system to measure the spin polarization. We show that this apparatus can be used for Rabi resonance measurements and to investigate magnetic and pseudomagnetic field effects in Ramsey-type precision measurements with a sensitivity below 100 pT. ## I Introduction The nuclear magnetic resonance method of Rabi [1; 2] and Ramsey's technique of separated oscillatory fields [3; 4; 5] have been applied very successfully in a variety of different scientific experiments. They apply constant and time varying magnetic fields to manipulate the spins of probe particles. Ramsey's technique allows to determine the Larmor precession frequency of the spin in a magnetic field \(B_{0}\). In a first step, the spin polarized particles are flipped by an oscillating field \(B_{1}\) into the plane orthogonal to \(B_{0}\). Then, they can precess for a certain time until they are flipped again by a second oscillating \(B_{1}\) field. Usually, the frequency of the oscillating fields is scanned around the resonance while the phases of the two signals are locked. This results in an interference pattern of the spin polarization in the frequency domain. Ramsey's technique can be applied to precisely measure changes in magnetic and pseudo-magnetic fields. It is used in atomic clocks [6; 7], to measure the Newtonian gravitational constant [8], to search for the neutron electric dipole moment [9; 10; 11], to search for dark matter [12; 13], new particles and interactions [14], and others. It was also applied in the measurement of the neutron magnetic moment [15]. In the latter experiment, the technique served to compare resonance frequencies of free neutrons and protons in water passing through one apparatus. The application of resonance techniques with flowing water had been previously demonstrated by Sherman [16]. ## II Apparatus Here we present an experimental apparatus with a similar concept as the one used in the neutron magnetic moment measurement. Figure 1 shows a schematic of the setup and Fig. 2 a photo of the full tabletop experiment. The total length is about 3 meters. The water is circulated through the system using a gear pump. First, the water passes a polarizer to create a sizable spin polarization of the protons. It then flows through the interaction region, which is magnetically shielded to the surrounding by mu-metal. In that region, the spins interact with the magnetic field \(B_{0}\) and can be manipulated with spin-flip coils. There are additional temperature and magnetic field sensors. Finally, the spin polarization is measured and analyzed employing nuclear magnetic resonance (NMR) techniques. No guiding fields for the proton spins are required between the elements since their fringe fields are sufficient. ### Water Circuit To perform a spin precession experiment, the demineralized water that contains the hydrogen protons is circulating in a water circuit. We use a rigid glass capillary with an inner diameter of \(d=4\) mm and a length of 1500 mm to guide the water through the interaction region. To connect the other elements we use plastic tubes (PU, PVC, and PTFE) of various diameters. 
Figure 1: Schematic of the experimental setup where the protons in water (H\({}_{2}\)O) are pumped from the water reservoir of the chiller. They are first polarized in a polarizer (red) and then enter the interaction region surrounded by a double-layer mu-metal shield. Two spin-flip coils are shown in green and the magnetic field direction is indicated in blue. The spin polarization is analyzed using a nuclear magnetic resonance (NMR) system (purple). The schematic is not to scale.
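To relate the sub-100 pT field sensitivity quoted above to the measured resonance frequency, a small numerical sketch is given below (the proton gyromagnetic ratio is a standard reference value, not taken from this paper):

```python
# Proton gyromagnetic ratio over 2*pi, approximately 42.577 MHz/T (standard reference value).
GAMMA_P_OVER_2PI = 42.577478e6  # Hz per tesla

def larmor_frequency(b0_tesla: float) -> float:
    """Proton Larmor (precession) frequency in a static field B0."""
    return GAMMA_P_OVER_2PI * b0_tesla

# Frequency shift corresponding to a 100 pT change of the magnetic field.
delta_f = larmor_frequency(100e-12)
print(f"100 pT corresponds to a {delta_f * 1e3:.2f} mHz shift of the proton resonance")
# prints roughly 4.26 mHz
```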
This apparatus applies Ramsey's method of separated oscillatory fields to proton spins in water molecules. The setup consists of a water circuit, a spin polarizer, a magnetically shielded interaction region containing various radio-frequency elements, and a nuclear magnetic resonance system to measure the spin polarization. The apparatus is used for Rabi resonance measurements and to investigate magnetic and pseudomagnetic field effects in Ramsey-type precision measurements with a sensitivity below 100 pT.
2309.14769
Unveiling Neutrino Mysteries with $Δ(27)$ Symmetry
An elegant model is proposed by extending the Standard Model using the $\Delta(27)\times Z_3 \times Z_{10}$ symmetry within the framework of the Type-I + Type-II seesaw mechanism. This model is particularly noteworthy for its ability to restrict the atmospheric mixing angle, $\theta_{23}$, to specific values, and provides an explanation for the observed hierarchy of charged lepton masses. The neutrino mass matrix texture defined by three real parameters, predicts the three neutrino mass eigenvalues and the two Majorana phases. Furthermore, the model is tested against the experimental results of neutrino-less double beta ($0\nu\beta\beta$) decay and charged lepton flavour violation (cLFV) experiments.
Manash Dey, Subhankar Roy
2023-09-26T09:03:07
http://arxiv.org/abs/2309.14769v2
# Unveiling Neutrino Mysteries with \(\Delta(27)\) Symmetry ###### Abstract In the realm of neutrino physics, we grapple with mysteries like the origin of neutrino masses, the absence of a clear mass hierarchy, and the values of Majorana phases. To address these puzzles, we extend the Standard Model using \(\Delta(27)\) symmetry within the Hybrid seesaw framework. We also introduce an additional \(Z_{10}\) symmetry to constrain some undesirable terms in the Yukawa Lagrangian, resulting in a unique neutrino mass matrix texture with partial \(\mu-\tau\) symmetry. In our study, we propose a novel lepton mixing matrix that, when connected to this texture, provides valuable phenomenological insights. ## I Introduction Over the past few decades, the Standard Model (SM) of particle physics has gained widespread acceptance as a comprehensive theoretical framework that encompasses the fundamental particles, such as quarks and leptons and explains the fundamental interactions viz., the strong, weak, and electromagnetic interactions. Even though the SM is very successful in explaining a wide range of phenomenon, but the theory is inefficient to explain neutrino mass origin, neutrino mass hierarchy, gravity, matter-antimatter asymmetry, dark matter etc. Out of the many shortcomings, the inability to address the issues related to neutrino masses and mixing are the captivating problems in the realm of particle physics. The neutrinos [1] are massless in the framework of the SM however, long back in the year 1957, Pontecorod had anticipated that neutrinos could change their flavour as they travel in space. This phenomenon is called Neutrino Oscillation [2] which after experimental verification [3; 4; 5], in the recent past confirmed the fact that neutrinos are not massless. Therefore, it is apt to assert that the pursuit of finding answers to the missing pieces in the SM compels the model builders to go beyond the latter (BSM). In the context of the BSM theories, the study of neutrino mass matrix textures arising from discrete symmetry groups gains much significance. The neutrino mass matrix, \(M_{\nu}\), encodes crucial information of various observational parameters which includes the solar mixing angle (\(\theta_{12}\)), reactor mixing angle (\(\theta_{13}\)), atmospheric mixing angle (\(\theta_{23}\)), the neutrino mass eigenvalues (\(m_{1},m_{2},m_{3}\)), a Dirac CP-violating phase (\(\delta\)), as well as two Majorana phases, namely \(\alpha\) and \(\beta\). In this context, "mass matrix texture" specifically denotes correlations or constraints within the neutrino mass matrix texture, often referred to as sum rules in the literature [6; 7; 8; 9]. These correlations serve to reduce the number of independent parameters, thereby aiding model builders in deriving significant relationships among observable parameters. It's worth mentioning that to achieve the generation of small neutrino masses, these textures can be formulated within the framework of seesaw mechanisms [10; 11]. There are many forms of seesaw mechanisms proposed in the literature. For example, the Type-I seesaw mechanism [7; 12; 13; 14; 15] is a simple extension of the SM, it introduces right-handed neutrinos (\(\nu_{R}\)) to create both Majorana and Dirac mass terms, represented as \(M_{R}(\bar{\nu}_{R}\nu_{R}^{c})\) and \(M_{D}(\bar{\nu}_{L}\nu_{R})\) respectively. Here, the \(\nu_{L}\) denotes the left-handed neutrino field from the SM doublet. 
If one assumes that \(M_{R}\) is much larger than \(M_{D}\), the small neutrino mass in Type-I seesaw is given by the expression: \(m_{\nu}\propto-M_{D}.M_{R}^{-1}.M_{D}^{T}\), and the neutrino masses are observed to be of the order of around \(10^{-2}\) eV. Therefore, the smallness of neutrino masses in this scenario can be attributed to the significant scale of \(M_{R}\), which functions as a seesaw mechanism. There is another type of seesaw mechanism known as Type-II seesaw [16; 17; 12; 18; 12]. Here, it needs the introduction of a heavy \(SU(2)_{L}\) scalar triplet (\(\Delta\)) in the Higgs sector of the SM and the light Majorana neutrino mass term in this scenario, is given by: \(m_{\nu}\propto y_{\Delta}(\bar{D}_{L}\,D_{L}^{c})\,i\tau_{2}\Delta\). When \(m_{\Delta}\gg v_{h}\), the neutral component of the \(\Delta\) triplet field acquires a non zero vacuum expectation value (VEV), it leads to small Majorana neutrino mass: \(m_{\nu}\sim y_{\Delta}\,\psi_{\Delta}\), where \(v_{\Delta}\) is the VEV of the neutral component of the higgs triplet (\(\Delta^{\circ}\)) and \(v_{\Delta}\propto v_{h}^{2}/\text{m}_{\Delta}^{2}\). Here, the \(\text{m}_{\Delta}\) represents the mass of the triplet field (\(\Delta\)) and \(v_{h}\) signifies the SM Higgs VEV (\(v_{h}\sim 246\) GeV). This is to be underlined that one can generate the Majorana neutrino mass matrix in a framework where both Type-I and Type-II seesaw mechanisms may coexist. Such frameworks, most often, are referred as the Hybrid seesaw mechanism [12; 19; 20]. The latter helps to achieve a more robust suppression in neutrino mass and to obtain unique lepton mixing patterns. In the light of above discussion, various ideas in the realm of BSM physics have been proposed. These ideas include concepts like texture zeroes [21], \(\mu\)-\(\tau\) symmetry [22; 23; 24; 25], \(\mu-\tau\) mixed symmetry [26; 27], \(\mu-\tau\) reflection symmetry [28; 29], \(\mu-\tau\) antisymmetry [25; 30], and more. They are formulated in connection with various discrete symmetry groups to place constraints on the neutrino mass matrix \(M_{\nu}\). It's worth noting that creating an exact structure [31; 32] for a neutrino mass matrix based on discrete symmetry groups is quite difficult. Typically, researchers use methods that lead to an approximate mass matrix pattern, and this can some what limit the independence of the parameters in the mass matrix. However, in our current approach, we've taken special care to address these challenges properly. The structure of the work is as follows: In Section II, we introduce the model, beginning with a brief discussion of the discrete flavour symmetry \(\Delta(27)\)[33], which is employed in developing the neutrino mass matrix. We then delve into the field contents, Lagrangian, and scalar potential of our model. In Section III, we discuss the neutrino mixing matrix and propose a special choice based on current experiments. Section IV focuses on numerical analysis and key findings related to the mass matrix. Finally, in Section V, we conclude. ## II The model The discrete symmetry group \(\Delta(27)\)[33; 34; 35; 36; 37; 38] is a subgroup of \(SU(3)\) and is isomorphic to the semi-direct product group \((Z_{3}\times Z_{3})\times Z_{3}\). It is also a part of the \(\Delta(3n^{2})\) family with n = 3. It has 11 irreducible representations, out of which there are one triplet (3), one anti-triplet (3) and nine singlet (\(1_{p,q}\)) representations, where, \(p,q=0,1,2\). 
Unlike simple discrete symmetry groups such as \(A_{4}\), which offer only two triplets (3) and three singlet (\(1,1^{{}^{\prime}},1^{{}^{\prime\prime}}\)) representations, the presence of an additional anti-triplet (3) representation within the framework of \(\Delta(27)\) symmetry provides a greater flexibility for understanding deeper physics and extracting important phenomenological insights. Let us now briefly explore the product rules associated with \(\Delta(27)\) symmetry, \[3\otimes 3 = \bar{3}_{s_{1}}\oplus\bar{3}_{s_{2}}\oplus\bar{3}_{a},\] \[\bar{3}\otimes\bar{3} = 3_{s_{1}}\oplus 3_{s_{2}}\oplus 3_{a},\] \[3\otimes\bar{3} = \sum_{r=0}^{2}1_{r,0}\oplus\sum_{r=0}^{2}1_{r,1}\oplus\sum_{r=0} ^{2}1_{r,2},\] \[1_{p,q}\otimes 1_{p^{\prime},q^{\prime}} = 1_{(p+p^{\prime})\,mod\,3,\,(q+q^{\prime})\,mod\,3}. \tag{1}\] If \((a_{1},a_{2},a_{3})\) and \((b_{1},b_{2},b_{3})\) are two triplets under \(\Delta(27)\) then, \[(3\otimes 3)_{\bar{3}_{s_{1}}} = (a_{1}b_{1},a_{2}b_{2},a_{3}b_{3}),\] \[(3\otimes 3)_{\bar{3}_{s_{2}}} = \frac{1}{2}(a_{2}b_{3}+a_{3}b_{2},a_{3}b_{1}+a_{1}b_{3},a_{1}b_{2 }+a_{2}b_{1}),\] \[(3\otimes 3)_{\bar{3}_{a}} = \frac{1}{2}(a_{2}b_{3}-a_{3}b_{2},a_{3}b_{1}-a_{1}b_{3},a_{1}b_{2 }-a_{2}b_{1}),\] \[(3\otimes\bar{3})_{1_{r,0}} = a_{1}b_{1}+w^{2r}a_{2}b_{2}+w^{r}a_{3}b_{3},\] \[(3\otimes\bar{3})_{1_{r,1}} = a_{1}b_{2}+\omega^{2r}a_{2}b_{3}+\omega^{r}a_{3}b_{1},\] \[(3\otimes 3)_{1_{r,2}} = a_{1}b_{3}+\omega^{2r}a_{2}b_{1}+\omega^{r}a_{3}b_{2}, \tag{2}\] where, \(r=0,1,2\) and \(\omega=e^{\frac{2\pi i}{3}}\). We extend the SM field content with three singlet heavy right handed neutrinos viz., \(\nu_{l_{R}(l=e,\mu,\tau)}\), which under \(\Delta(27)\) transform as \(1_{00},1_{01}\,{\rm and}\,1_{02}\). Two additional \(\Delta(27)\) triplet scalar fields, \(\chi\,{\rm and}\,\Delta\) are introduced, which individually transform as 1 and 3 under \(SU(2)_{L}\). In addition to these, we consider four \(SU(2)_{L}\) singlets \(\eta,\kappa,\xi\,{\rm and}\,\zeta\), transforming as \(1_{02},1_{01},1_{00}\,{\rm and}\,1_{00}\) under \(\Delta(27)\) respectively. An additional \(Z_{10}\) symmetry is introduced to restrict some undesirable terms, which are otherwise allowed by \(SU(2)_{L}\,{\rm and}\,\Delta(27)\) in the Yukawa Lagrangian. Here, \(Z_{10}\) symmetry typically refers to a modulus symmetry associated with the integers modulo 10. This mathematical structure involves the numbers 0 to 9 and includes operations such as addition and subtraction modulo 10. The transformation of all the field contents under \(SU(2)_{L}\times\Delta(27)\times Z_{10}\) symmetry is shown in the Table (1). 
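As a quick sketch of how the \(Z_{10}\) assignments of Table 1 act as a selection rule (our own helper script, assuming every field appearing in an operator contributes its listed charge and that invariance requires a total charge of 0 mod 10), the terms retained in the Yukawa Lagrangian below pass the check, while a bare Dirac operator for \(\nu_{\mu_{R}}\), which \(SU(2)_{L}\times\Delta(27)\) alone would permit, does not:

```python
# Minimal helper (ours) to check Z_10 invariance of operators, assuming each
# field contributes the charge listed in Table 1 and an allowed operator must
# carry total charge 0 (mod 10).
Z10_CHARGE = {
    "D_L": 0, "H": 0, "chi": 0, "Delta": 0,
    "nu_eR": 0, "nu_muR": 4, "nu_tauR": 6,
    "eta": 2, "kappa": 8, "xi": 6, "zeta": 4,
}

def z10_allowed(fields):
    """True if the product of `fields` is neutral under Z_10."""
    return sum(Z10_CHARGE[f] for f in fields) % 10 == 0

# Dirac-type terms kept in the Yukawa Lagrangian below:
print(z10_allowed(["D_L", "chi", "H", "nu_muR", "xi"]))     # True: 4 + 6 = 10
print(z10_allowed(["D_L", "chi", "H", "nu_tauR", "zeta"]))  # True: 6 + 4 = 10
# Majorana-type terms involving the flavon singlets:
print(z10_allowed(["nu_muR", "nu_muR", "eta"]))             # True: 4 + 4 + 2 = 10
print(z10_allowed(["nu_tauR", "nu_tauR", "kappa"]))         # True: 6 + 6 + 8 = 20
# A bare Dirac term for nu_muR is forbidden by Z_10 (total charge 4):
print(z10_allowed(["D_L", "chi", "H", "nu_muR"]))           # False
```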
The Yukawa Lagrangian invariant under \(SU(2)_{L}\times\Delta(27)\times Z_{10}\) symmetry is presented below, \[-\mathcal{L}_{Y} = \frac{y_{1}}{\Lambda}(\bar{D}_{l_{L}}\chi)\,H\,e_{R}+\,\frac{y_{2} }{\Lambda}(\bar{D}_{l_{L}}\chi)\,H\,\mu_{R}+\,\frac{y_{3}}{\Lambda}(\bar{D}_{l_ {L}}\chi)\] \[H\,\tau_{R}+\frac{y_{e}}{\Lambda}\,(\bar{D}_{l_{L}}\chi)\,\bar{H} \,\nu_{e_{R}}+\,\frac{y_{\mu}}{\Lambda^{2}}\,(\bar{D}_{l_{L}}\chi)\,\bar{H}\, \nu_{\mu_{R}}\,\xi\] \[+\,\frac{y_{\tau}}{\Lambda^{2}}\,(\bar{D}_{l_{L}}\chi)\,\bar{H} \,\nu_{\tau_{R}}\,\zeta\,+\,\frac{1}{2}\,M_{1}\,[\bar{\nu}_{e_{R}}\,\nu_{e_{R} }^{c}+\bar{\nu}_{\mu_{R}}\nu_{\tau_{R}}^{c}\] \[+\,\bar{\nu}_{\tau_{R}}\,\nu_{\mu_{R}}^{c}]\,+\,\frac{1}{2}\,y_{ \tau}[(\bar{\nu}_{\mu_{R}}\,\nu_{\mu_{R}}^{c})\,\eta\,+\,(\bar{\nu}_{\tau_{R}} \,\nu_{\tau_{R}}^{c})\,\kappa]\] \[+\,y_{\Delta}(\bar{D}_{l_{L}}\,D_{l_{L}}^{c})\,i\,\tau_{2}\, \Delta\,+h.c.\] Here, \(\Lambda\) is the unknown scale of the theory. The scalar potential invariant under \(SU(2)_{L}\times\Delta(27)\times Z_{10}\) takes the following form, \begin{table} \begin{tabular}{c c c c c c c c c c c} Fields & \(\bar{D}_{l_{L}}\) & \(D_{l_{L}}^{c}\) & \(l_{R}\) & \(H\) & \(\nu_{l_{R}}\) & \(\chi\) & \(\Delta\) & \(\eta\) & \(\kappa\) & \(\xi\) & \(\zeta\) \\ \hline \(SU(2)_{L}\) & 2 & 2 & 1 & 2 & 1 & 1 & 3 & 1 & 1 & 1 & 1 \\ \(\Delta(27)\) & 3 & 3 & 1\({}_{0r}\) & 1\({}_{00}\) & 1\({}_{0r}\) & 3 & 3 & 1\({}_{02}\) & 1\({}_{01}\) & 1\({}_{00}\) & 1\({}_{00}\) \\ \hline \((Z_{10},+)\) & 0 & 0 & 0 & 0 & (0,4,6) & 0 & 0 & 2 & 8 & 6 & 4 \\ \end{tabular} \end{table} Table 1: The transformation properties of the field contents under \(SU(2)_{L}\times\Delta(27)\times Z_{10}\), where \(r=0,1,2\) and \(l=e,\mu,\tau\). \[V = -\mu_{\chi}^{2}(\chi^{\dagger}\chi)+\lambda_{1}^{\chi}(\chi^{ \dagger}\chi)_{1_{00}}(\chi^{\dagger}\chi)_{1_{00}}+\lambda_{2}^{\chi}(\chi^{ \dagger}\chi)_{1_{10}}(\chi^{\dagger}\chi)_{1_{20}}+\lambda_{3}^{\chi}(\chi^{ \dagger}\chi)_{1_{01}}(\chi^{\dagger}\chi)_{1_{02}}+\lambda_{4}^{\chi}(\chi^{ \dagger}\chi)_{1_{11}}(\chi^{\dagger}\chi)_{1_{22}}+\lambda_{5}^{\chi}(\chi^{ \dagger}\chi)_{1_{21}}\] \[(\chi^{\dagger}\chi)_{1_{12}}+\lambda_{6}^{\chi}(\chi\chi)(\chi^{ \dagger}\chi)-\mu_{\Delta}^{2}(\Delta^{\dagger}\Delta)+\lambda_{1}^{\Delta}( \Delta^{\dagger}\Delta)_{1_{00}}(\Delta^{\dagger}\Delta)_{1_{00}}+\lambda_{2}^ {\Delta}(\Delta^{\dagger}\Delta)_{1_{10}}(\Delta^{\dagger}\Delta)_{1_{20}}+ \lambda_{3}^{\Delta}(\Delta^{\dagger}\Delta)_{1_{01}}(\Delta^{\dagger}\Delta)_ {1_{02}}\] \[+\lambda_{4}^{\Delta}(\Delta^{\dagger}\Delta)_{1_{11}}(\Delta^{ \dagger}\Delta)_{1_{22}}+\lambda_{5}^{\Delta}(\Delta^{\dagger}\Delta)_{1_{21}} (\Delta^{\dagger}\Delta)_{1_{12}}+\lambda_{6}^{\Delta}(\Delta\Delta)(\Delta^{ \dagger}\Delta^{\dagger})-\mu_{H}^{2}(H^{\dagger}H)+\lambda_{1}^{H}(H^{ \dagger}H)_{1_{00}}(H^{\dagger}H)_{1_{00}}+\lambda_{1}^{\chi\Delta}\] \[(\chi^{\dagger}\chi)_{1_{00}}(\Delta^{\dagger}\Delta)_{1_{00}}+ \lambda_{2}^{\Delta}(\chi^{\dagger}\chi)_{1_{01}}(\Delta^{\dagger}\Delta)_{1_{ 20}}+\lambda_{3}^{\Delta}(\chi^{\dagger}\chi)_{1_{01}}(\Delta^{\dagger}\Delta)_ {1_{02}}+\lambda_{4}^{\chi}(\chi^{\dagger}\chi)_{1_{11}}(\Delta^{\dagger}\Delta )_{1_{22}}+\lambda_{5}^{\Delta}(\chi^{\dagger}\chi)_{1_{21}}(\Delta^{\dagger}\] \[\Delta)_{1_{2}}+\lambda_{6}^{\chi\Delta}(\chi^{\dagger}\chi)_{1_{ 20}}(\Delta^{\dagger}\Delta)_{1_{01}}+\lambda_{7}^{\chi\Delta}(\chi^{\dagger} \chi)_{1_{02}}(\Delta^{\dagger}\Delta)_{1_{01}}+\lambda_{8}^{\chi\Delta}(\chi^ 
{\dagger}\chi)_{1_{22}}(\Delta^{\dagger}\Delta)_{1_{11}}+\lambda_{9}^{\chi \Delta}(\chi^{\dagger}\chi)_{1_{22}}(\Delta^{\dagger}\Delta)_{1_{21}}+\lambda_{ 10}^{\chi\Delta}\] \[((\chi\chi)(\Delta\Delta)+(\chi^{\dagger}\chi^{\dagger})(\Delta^{ \dagger}\Delta^{\dagger}))+\lambda^{\chi H}(\chi^{\dagger}\chi)(H^{\dagger}H)+ \lambda_{1}^{H}(\chi^{\dagger}H)(H^{\dagger}\chi)+\lambda^{\Delta H}(\Delta^{ \dagger}\Delta)(H^{\dagger}H)+\lambda_{1}^{\Delta H}(H^{\dagger}\Delta)( \Delta^{\dagger}H)\] \[-\mu_{1}^{2}(\eta\kappa)+\lambda^{\eta\kappa}(\eta^{\dagger}\eta)_ {1_{00}}(\kappa^{\dagger}\kappa)_{1_{00}}+\lambda_{1}^{\eta\kappa}(\eta^{ \dagger}\kappa)_{1_{02}}(\kappa^{\dagger}\eta)_{1_{01}}-\mu_{2}^{2}(\xi^{ \dagger}\zeta+\xi\zeta)+\lambda^{\xi\zeta}((\xi^{\dagger}\xi)_{1_{00}}(\zeta^{ \dagger}\zeta)_{1_{00}}+(\xi\xi)_{1_{00}}(\zeta\zeta)_{1_{00}})+(\lambda\xi)_{ 1_{00}}(\zeta\zeta)_{1_{00}})+(\lambda\eta\kappa)\] \[\lambda_{1}^{\eta\kappa}((\xi^{\dagger}\zeta)_{1_{00}}(\zeta^{ \dagger}\xi)_{1_{00}}+(\xi\xi)_{1_{00}}(\zeta\xi)_{1_{00}}))+\,h.c\] In models with discrete symmetry, it's common to have multiple coupling constants in the scalar potential. This provides the freedom to choose suitable vacuum alignments for the scalar fields. For simplicity and without loss of generality, we take the following alignments for the scalar fields: \(\langle\chi\rangle=v_{\chi}(1,0,0)\), \(\langle H\rangle=v_{h}\), \(\langle\Delta\rangle=v_{\Delta}(1,1,1)\), \(\langle\eta\rangle=\langle\kappa\rangle=v_{r}\) and \(\langle\xi\rangle=\langle\zeta\rangle=v_{e}\). We assume the couplings and vacuum expectation values(VEVs) to be real, and we choose \(\lambda_{1}^{\chi\Delta}=\lambda_{2}^{\chi\Delta}=\lambda_{6}^{\chi\Delta}=\lambda \) to obtain the following relations from the scalar potential minimisation equations, \[\mu_{\chi}^{2} = 2v_{\chi}^{2}(\lambda_{1}^{\chi}+\lambda_{2}^{\chi}+\lambda_{6}^{ \chi})+v_{h}^{2}(\lambda^{\chi H}+\lambda_{1}^{\chi H}),\] \[\mu_{\Delta}^{2} = 6v_{\Delta}^{2}(\lambda_{1}^{\Delta}+\lambda_{3}^{\Delta}+\frac{ \lambda_{6}^{\Delta}}{3})+v_{h}^{2}(\lambda^{\Delta H}+\lambda_{1}^{\Delta H}),\] \[\mu_{H}^{2} = 2v_{h}^{2}\lambda_{1}^{H}+v_{\chi}^{2}(\lambda^{\chi H}+\lambda_{1 }^{\chi H})+3v_{\Delta}^{2}(\lambda^{\Delta H}+\lambda_{1}^{\Delta H}),\] \[\mu_{1}^{2} = 2v_{r}^{2}(\lambda^{\eta\kappa}+\lambda_{1}^{\eta\kappa}),\] \[\mu_{2}^{2} = 2v_{e}^{2}(\lambda^{\xi\zeta}+\lambda_{1}^{\xi\zeta}),\] \[\lambda = -\frac{2}{3}\lambda_{10}^{\chi\Delta}.\] Therefore, the above choices of the VEVs for the scalar fields represent a possible global minima candidate of the scalar potential Eq.(4), and we shall stick to these VEVs in our upcoming discussions. Along with the chosen VEVs for the two triplets \(\chi\) and \(\Delta\), the other possible vacuum alignments for two scalar triplet model under \(\Delta(27)\), are extensively discussed in Ref.[39]. 
The chosen VEVs allow us to extract the charged lepton mass matrix (\(M_{l}\)), the Dirac neutrino mass matrix (\(M_{D}\)), the right-handed Majorana neutrino mass matrix (\(M_{R}\)) and the Type-II seesaw matrix (\(M_{II}\)) from the Yukawa Lagrangian in the following manner,
\[M_{l} = \left[\begin{array}{ccc}\frac{v_{h}v_{\chi}y_{1}}{\Lambda}&0&0\\ 0&\frac{v_{h}v_{\chi}y_{2}}{\Lambda}&0\\ 0&0&\frac{v_{h}v_{\chi}y_{3}}{\Lambda}\end{array}\right],\]
\[M_{D} = \left[\begin{array}{ccc}\frac{v_{h}v_{\chi}y_{e}}{\Lambda}&0&0\\ 0&\frac{v_{h}v_{\chi}v_{e}y_{\mu}}{\Lambda^{2}}&0\\ 0&0&\frac{v_{h}v_{\chi}v_{e}y_{\tau}}{\Lambda^{2}}\end{array}\right],\]
\[M_{R} = \left[\begin{array}{ccc}M_{1}&0&0\\ 0&M_{2}&M_{1}\\ 0&M_{1}&M_{2}\end{array}\right],\quad M_{II}=\frac{y_{\Delta}v_{\Delta}}{2}\left[\begin{array}{ccc}2&1&1\\ 1&2&1\\ 1&1&2\end{array}\right],\]
where \(M_{2}=y_{r}v_{r}\). Taking the contribution arising from Type-I seesaw, which is \(M_{I}=-M_{D}.M_{R}^{-1}.M_{D}^{T}\), the complex Majorana neutrino mass matrix (\(M_{\nu}=M_{I}+M_{II}\)), after simplification and redefinition of the parameters, takes the following partial \(\mu-\tau\) symmetric form,
\[M_{\nu}=\left[\begin{array}{ccc}t-d&t/2&t/2\\ t/2&t+a&\frac{t}{2}-c\\ t/2&\frac{t}{2}-c&t+b\end{array}\right], \tag{7}\]
such that the _model parameters_, viz. \(y_{e},y_{\mu},y_{r},y_{\Delta},v_{\chi},v_{\Delta},v_{h},v_{r},v_{e}\) and \(M_{1}\), are related to the _texture parameters_, viz. \(a,b,c,d\) and \(t\), in the following way,
\[\frac{v_{h}v_{\chi}y_{e}}{\Lambda\sqrt{M_{1}}} = \sqrt{d},\]
\[\frac{v_{h}v_{\chi}v_{e}y_{\mu}}{\Lambda^{2}\sqrt{M_{1}}} = \frac{\sqrt{\sqrt{a}\sqrt{b}\,c-\frac{a^{3/2

## III The Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix

The PMNS matrix is a \(3\times 3\) unitary matrix, parametrised in terms of mixing angles and phases. The parametrisation of the PMNS matrix (\(U\)) adopted by the Particle Data Group [41] is shown below,
\[U=P_{\phi}.\tilde{U}.P_{M}, \tag{9}\]
where we demarcate \(U\), the general form, from \(\tilde{U}\), which excludes the phases. Here, \(P_{M}=diag(e^{i\alpha},e^{i\beta},1)\) contains the two Majorana phases, \(\alpha\) and \(\beta\), which are expected to be probed in future experiments. The arbitrary phase matrix \(P_{\phi}=diag(e^{i\phi_{1}},e^{i\phi_{2}},e^{i\phi_{3}})\) contains three arbitrary phases (\(\phi_{i=1,2,3}\)), which can be eliminated from \(U\) by redefining the charged lepton fields in terms of these phases. Hence, the PMNS matrix, after redefinition of the said phases, attains the following form,
\[U=\tilde{U}.P_{M}. \tag{10}\]
The matrix \(\tilde{U}\) is written in its standard form in the following manner,
\[\tilde{U} = \begin{bmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{bmatrix}\times\begin{bmatrix}c_{13}&0&s_{13}\,e^{-i\delta}\\ 0&1&0\\ -s_{13}e^{i\delta}&0&c_{13}\end{bmatrix}\times\begin{bmatrix}c_{12}&s_{12}&0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{bmatrix}, \tag{11}\]
where \(s_{ij}\) and \(c_{ij}\) represent \(\sin\theta_{ij}\) and \(\cos\theta_{ij}\), respectively. If the charged lepton mass matrix is already diagonal, then the matrix \(U\) is the diagonalising matrix for \(M_{\nu}\), i.e.,
\[U^{T}.M_{\nu}.U=diag(m_{1},m_{2},m_{3}). \tag{12}\]
We rearrange the above equation in the following way,
\[\tilde{U}^{T}.M_{\nu}.\tilde{U}=diag(\tilde{m_{1}},\tilde{m_{2}},m_{3}), \tag{13}\]
such that \(\tilde{m_{1}}=m_{1}e^{-2i\alpha}\) and \(\tilde{m_{2}}=m_{2}e^{-2i\beta}\).
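Before moving on, a quick numerical illustration of the hybrid seesaw structure defined above: the sketch below assembles \(M_D\), \(M_R\) and \(M_{II}\) from placeholder numbers, computes \(M_I=-M_D.M_R^{-1}.M_D^{T}\), and checks that \(M_\nu=M_I+M_{II}\) indeed has the partial \(\mu-\tau\) symmetric texture of Eq. (7), i.e. equal (1,2) and (1,3) entries. All numerical values are arbitrary placeholders, not fits to data.

```python
import numpy as np

# Placeholder values (illustrative only)
mD1, mD2, mD3 = 0.1, 0.05, 0.07      # diagonal Dirac masses  v_h v_chi y_e / Lambda, ...
M1, M2 = 1e3, 2e3                    # heavy Majorana scales
yD_vD = 0.02                         # y_Delta * v_Delta, the Type-II contribution

M_D = np.diag([mD1, mD2, mD3])
M_R = np.array([[M1, 0,  0],
                [0,  M2, M1],
                [0,  M1, M2]], dtype=float)
M_II = 0.5 * yD_vD * np.array([[2, 1, 1],
                               [1, 2, 1],
                               [1, 1, 2]], dtype=float)

M_I = -M_D @ np.linalg.inv(M_R) @ M_D.T   # Type-I seesaw contribution
M_nu = M_I + M_II                          # hybrid seesaw mass matrix

# Texture check (Eq. 7): (1,2) and (1,3) entries coincide, and the matrix is symmetric
print(np.isclose(M_nu[0, 1], M_nu[0, 2]), np.allclose(M_nu, M_nu.T))
print(M_nu)
```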
Now, \(\tilde{U}\) is the observational (PMNS) matrix which diagonalises the neutrino mass matrix \(M_{\nu}\). Over the years, researchers have proposed various mixing patterns related to the neutrino mass matrix \(M_{\nu}\); the most general ansatz involves \(s_{13}=0\), \(s_{23}=c_{23}=1/\sqrt{2}\), and keeps \(s_{12}\) as a free parameter. The matrix form of this ansatz is shown below,
\[U_{a}=\begin{bmatrix}c_{12}&s_{12}&0\\ -s_{12}/\sqrt{2}&c_{12}/\sqrt{2}&1/\sqrt{2}\\ s_{12}/\sqrt{2}&-c_{12}/\sqrt{2}&1/\sqrt{2}\end{bmatrix}. \tag{14}\]
Depending on the choice of \(s_{12}\), various mixing patterns in the light of finite discrete symmetries such as \(A_{4}\), \(S_{4}\), \(A_{5}\), etc. are present in the literature. The Tribimaximal (TB) mixing pattern [22] was first proposed by Harrison, Perkins, and Scott; here one puts \(s_{12}=1/\sqrt{3}\) in Eq. (14). The TB mixing is special, as it forces \(M_{\nu}\) to take a structure called \(\mu-\tau\) symmetry [22; 23; 24; 25], and vice versa. In the case of Bimaximal (BM) mixing [42], one inserts \(s_{12}=1/\sqrt{2}\) in Eq. (14). Similarly, for Hexagonal (HEX) mixing [43] and Golden ratio (GRa) mixing [44], one uses \(s_{12}=1/2\) and \(s_{12}=1/\sqrt{1+\phi^{2}}\) respectively, where \(\phi=(1+\sqrt{5})/2\) is the golden ratio. However, with recent progress in neutrino physics experiments [45; 46] showing a non-zero value for \(\theta_{13}\), the credibility of these mixing ansatze is questionable. It is therefore important to seek corrections to these mixing patterns or to look for fresh mixing ansatze that align with both experiment and theory. With this motivation, we posit new choices for the mixing parameters in \(\tilde{U}\) as follows:
\[s_{12}=\frac{1}{\sqrt{3}};\quad s_{13}=\frac{1}{3\sqrt{5}};\quad s_{23}=\sqrt{\frac{3}{5}};\quad\delta=\frac{3\pi}{2}.\]
In the upcoming section, we shall explore how this choice is special in the context of \(M_{\nu}\) in Eq. (7).

## IV Numerical analysis and important observations

The diagonalising matrix \(\tilde{U}\) constrains \(M_{\nu}\) in the following way,
\[\tilde{U}^{T}.M_{\nu}.\tilde{U}=diag\,(m_{1}\,e^{-2i\alpha},m_{2}\,e^{-2i\beta},m_{3}), \tag{15}\]
such that all complex texture parameters can be expressed in terms of three real texture parameters, viz. \(Re[d]\), \(Im[d]\) and \(Re[t]\). This means that these three independent real parameters span \(M_{\nu}\) and all the predictions associated with it. We generate a sufficient number of data points for \(Re[d]\), \(Im[d]\) and \(Re[t]\), ensuring that the observational parameters \(\Delta m^{2}_{21}\) and \(\Delta m^{2}_{31}\) fall within the \(3\sigma\) bound [46], and that the total sum of the three neutrino masses, \(m_{1}+m_{2}+m_{3}\), is less than \(0.12\) eV [47]. It is to be emphasized that we conduct the analysis without assuming any hierarchy beforehand. However, based on our analysis, we find that there are no data points available for the case of an inverted hierarchy of neutrino masses. The obtained data points for the parameters \(Re[d]\), \(Im[d]\) and \(Re[t]\) are shown in Table 2. In Figure 1(a), we visually represent the concerned datasets for the free parameters. In Table 3, we present the predicted lowest and highest values for the three mass eigenvalues (\(m_{1}\), \(m_{2}\), \(m_{3}\)), the sum of these eigenvalues (\(\sum_{i=1}^{3}m_{i}\)), and the two Majorana CP phases (\(\alpha\) and \(\beta\)). From the predictions in Table 3, it is observed that the model predicts the normal hierarchy of neutrino masses.
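To make the posited mixing choice concrete, the sketch below builds \(\tilde{U}\) of Eq. (11) with these values, prints the implied \(\sin^{2}\theta_{ij}\) (pure arithmetic), and verifies that inverting Eq. (15), \(M_{\nu}=\tilde{U}^{*}\,diag(\tilde{m}_{1},\tilde{m}_{2},m_{3})\,\tilde{U}^{\dagger}\), yields a mass matrix that \(\tilde{U}\) diagonalises. The sample masses and phases are placeholders of ours, not the scan output.

```python
import numpy as np

s12, s13, s23, delta = 1/np.sqrt(3), 1/(3*np.sqrt(5)), np.sqrt(3/5), 3*np.pi/2
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)

R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
U13 = np.array([[c13, 0, s13*np.exp(-1j*delta)], [0, 1, 0], [-s13*np.exp(1j*delta), 0, c13]], dtype=complex)
R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
U_tilde = R23 @ U13 @ R12                                    # Eq. (11)

print("sin^2(theta_12, theta_13, theta_23) =", s12**2, s13**2, s23**2)   # 1/3, 1/45, 3/5

# Invert Eq. (15): M_nu = U_tilde^* . diag(m1 e^{-2i alpha}, m2 e^{-2i beta}, m3) . U_tilde^dagger
m1, m2, m3, alpha, beta = 0.020, 0.0217, 0.054, 0.4, -0.4    # placeholder masses (eV) and phases
D = np.diag([m1*np.exp(-2j*alpha), m2*np.exp(-2j*beta), m3])
M_nu = U_tilde.conj() @ D @ U_tilde.conj().T
print("Eq. (15) satisfied:", np.allclose(U_tilde.T @ M_nu @ U_tilde, D))
```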
Additionally, we investigate the effective Majorana neutrino mass, \(m_{\beta\beta}=|\sum_{i}U_{ei}^{2}m_{i}|\) \((i=1,2,3)\), which is an important observable parameter in neutrinoless double beta decay \((0\nu\beta\beta)\) [48; 49]. The minimum and maximum values for \(m_{\beta\beta}\) predicted from our model are shown in Table 3, and these values are found to be consistent with experimental constraints, whose upper bounds are \(<(75-350)\) meV [50], \(<(104-228)\) meV [51], \(<(61-165)\) meV [52], etc. It is evident from Eq. (8) that the model parameters can also be expressed in terms of the free parameters \(Re[d]\), \(Im[d]\) and \(Re[t]\). For the sake of simplicity, we redefine the model parameter combinations in Eq. (8) as follows:
\[\frac{v_{h}v_{\chi}y_{e}}{\Lambda\sqrt{M_{1}}}=Y_{1};\quad\frac{v_{h}v_{\chi}v_{e}y_{\mu}}{\Lambda^{2}\sqrt{M_{1}}}=Y_{2};\quad\frac{v_{h}v_{\chi}v_{e}y_{\tau}}{\Lambda^{2}\sqrt{M_{1}}}=Y_{3};\quad\frac{v_{r}y_{r}}{M_{1}}=Y_{4};\quad v_{\Delta}y_{\Delta}=Y_{5}.\]
It is to be underlined that the model is specifically designed to accurately predict these combinations of the model parameters. The predicted maximum and minimum values of the latter are presented in Table 4.

## V Summary and Discussions

In this paper, we have constructed a mass matrix model based on the extension of the SM with \(\Delta(27)\) symmetry in a hybrid seesaw framework. We have introduced an additional symmetry, \(Z_{10}\), to restrict some unfavorable terms from appearing in the Yukawa Lagrangian. The mass matrix bears a partial \(\mu-\tau\) symmetry and contains five complex parameters. To derive the phenomenology associated with the texture, we have chosen specific entries in the mixing matrix as follows: \(s_{12}=1/\sqrt{3}\), \(s_{13}=1/(3\sqrt{5})\), \(s_{23}=\sqrt{3/5}\), and \(\delta=3\pi/2\). These entries serve to reduce the number of free parameters in the neutrino mass matrix, thereby aiding in effectively managing the complexities arising from the model. In this context, our model, in principle, has only three real and independent texture parameters that collectively describe the entire mass matrix and all its associated predictions. It is also important to note that in constructing models, approximations are commonly used, which can restrict parameter independence in the resulting mass matrix texture. However, our model is an exact one, and successfully tackles these challenges. It precisely predicts the normal ordering of neutrino masses and allows the two Majorana phases to vary between \(-90^{\circ}\) and \(90^{\circ}\). The current model predicts the highest possible value for \(m_{\beta\beta}\) to be 31.30 meV, which lies below the experimental upper bound. In addition to these, we have also presented the possible extrema for the texture and the model parameters.

## Acknowledgement

One of the authors, MD, thanks R. Srivastava, Department of Physics, IISER-Bhopal, for an important discussion related to the \(\Delta(27)\) symmetry group. MD acknowledges financial support from the Council of Scientific and Industrial Research (CSIR), Government of India, through a NET Junior Research Fellowship vide grant No. 09/0059(15346)/2022-EMR-I.
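For concreteness, a minimal sketch of the \(m_{\beta\beta}\) evaluation is given below: it rebuilds \(\tilde{U}\) from the posited mixing values, attaches the Majorana phase matrix \(P_{M}\), and evaluates \(|\sum_{i}U_{ei}^{2}m_{i}|\). The input masses and phases are placeholders chosen only to be normal-ordering-like, not the scan output reported in Table 3.

```python
import numpy as np

s12, s13, s23, delta = 1/np.sqrt(3), 1/(3*np.sqrt(5)), np.sqrt(3/5), 3*np.pi/2
c12, c13, c23 = np.sqrt(1 - s12**2), np.sqrt(1 - s13**2), np.sqrt(1 - s23**2)
U_tilde = (np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], complex)
           @ np.array([[c13, 0, s13*np.exp(-1j*delta)], [0, 1, 0], [-s13*np.exp(1j*delta), 0, c13]], complex)
           @ np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], complex))

def m_betabeta(masses_eV, alpha, beta):
    """Effective Majorana mass |sum_i U_ei^2 m_i|, with U = U_tilde . diag(e^{i alpha}, e^{i beta}, 1)."""
    U = U_tilde @ np.diag([np.exp(1j*alpha), np.exp(1j*beta), 1.0])
    return abs(np.sum(U[0, :]**2 * masses_eV))

# placeholder normal-ordering-like masses in eV and arbitrary Majorana phases
print(1e3 * m_betabeta(np.array([0.020, 0.0217, 0.054]), alpha=0.4, beta=-0.4), "meV")
```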
\begin{table} \begin{tabular}{|c|c|c|} \hline Parameters & Minimum Value & Maximum Value \\ \hline \(Re[d]\)/eV & -0.029 & 0.029 \\ \hline \(Im[d]\)/eV & -0.037 & 0.037 \\ \hline \(Re[t]\)/eV & -0.004 & 0.004 \\ \hline \end{tabular} \end{table} Table 2: Represents the approximate maximum and minimum values for the Independent real texture parameters \begin{table} \begin{tabular}{|c|c|c|} \hline Prediction & Lowest Value & Highest Value \\ \hline \(m_{1}\)/eV & 0.018 & 0.030 \\ \hline \(m_{2}\)/eV & 0.020 & 0.031 \\ \hline \(m_{3}\)/eV & 0.053 & 0.059 \\ \hline \(\sum m_{i}\)/eV & 0.093 & 0.119 \\ \hline \(\alpha\)/\({}^{\circ}\) & \(-90\) & 90 \\ \hline \(\beta\)/\({}^{\circ}\) & \(-90\) & 90 \\ \hline \(m_{\beta\beta}\)/meV & 4.679 & 31.359 \\ \hline \end{tabular} \end{table} Table 3: Shows the lowest and highest values of the mass eigenvalues \(m_{1}\), \(m_{2}\), \(m_{3}\), the sum of mass eigenvalues \(m_{1}+m_{2}+m_{3}\), the Majorana phases \(\alpha\), \(\beta\) and the effective Majorana neutrino mass \(m_{\beta\beta}\). Figure 1: The correlation plots for the free parameters, mass eigenvalues, Majorana phases and \(m_{\beta\beta}\) are shown: (a) Plots the relationship between the free parameters \(Re[d]\) and \(Re[t]\), as well as \(Im[d]\) and \(Re[t]\).(b) Highlights plots for \(m_{2}\) vs \(m_{1}\), \(m_{3}\) vs \(m_{1}\), and \(m_{1}+m_{2}+m_{3}\) vs \(m_{1}\). (c) Shows the plot depicting the relationship between the Majorana phases \(\alpha\) and \(\beta\).(d) Represents the plot relating the effective Majorana neutrino mass \(m_{\beta\beta}\) to the lightest mass eigenvalue \(m_{1}\). Figure 2: The correlation plots for the model parameters are shown: (a)Represents the plots of \(|Y_{2}|\) vs \(|Y_{1}|\) and \(|Y_{3}|\) vs \(|Y_{1}|\). (b) Represents the plot between \(|Y_{5}|\) and \(|Y_{4}|\). (c) Shows the plots of Arg[\(Y_{2}\)] vs Arg[\(Y_{1}\)] and Arg[\(Y_{3}\)] vs Arg[\(Y_{1}\)]. (d) Highlights the plot between Arg[\(Y_{5}\)] and Arg[\(Y_{4}\)].
An elegant model is proposed by extending the Standard Model with the $\Delta(27)\times Z_3 \times Z_{10}$ symmetry. The model is notable, in particular, for its ability to restrict the atmospheric mixing angle $\theta_{23}$ to a specific value and to accommodate and explain the observed neutrino mass hierarchy. In this model, a neutrino mass matrix texture defined by three real parameters predicts the three neutrino mass values and the two Majorana phases. Furthermore, the model is consistent with the experimental results of neutrinoless double beta decay ($0\nu\beta\beta$) and charged lepton flavour violation (cLFV) experiments.
2309.13006
Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches
The rapid development of AR/VR brings tremendous demands for 3D content. While the widely-used Computer-Aided Design (CAD) method requires a time-consuming and labor-intensive modeling process, sketch-based 3D modeling offers a potential solution as a natural form of computer-human interaction. However, the sparsity and ambiguity of sketches make it challenging to generate high-fidelity content reflecting creators' ideas. Precise drawing from multiple views or strategic step-by-step drawings is often required to tackle the challenge but is not friendly to novice users. In this work, we introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only a single free-hand sketch without inputting multiple sketches or view information. Specifically, we introduce a lightweight generation network for efficient inference in real-time and a structural-aware adversarial training approach with a Stroke Enhancement Module (SEM) to capture the structural information to facilitate learning of the realistic and fine-detailed shape structures for high-fidelity performance. Extensive experiments demonstrated the effectiveness of our approach with the state-of-the-art (SOTA) performance on both synthetic and real datasets.
Tianrun Chen, Chenglong Fu, Ying Zang, Lanyun Zhu, Jia Zhang, Papa Mao, Lingyun Sun
2023-09-22T17:12:13
http://arxiv.org/abs/2309.13006v1
# Deep3DSketch\(+\): Rapid 3D Modeling from Single Free-hand Sketches

###### Abstract

The rapid development of AR/VR brings tremendous demands for 3D content. While the widely-used Computer-Aided Design (CAD) method requires a time-consuming and labor-intensive modeling process, sketch-based 3D modeling offers a potential solution as a natural form of computer-human interaction. However, the sparsity and ambiguity of sketches make it challenging to generate high-fidelity content reflecting creators' ideas. Precise drawing from multiple views or strategic step-by-step drawings is often required to tackle the challenge but is not friendly to novice users. In this work, we introduce a novel end-to-end approach, Deep3DSketch\(+\), which performs 3D modeling using only a single free-hand sketch without inputting multiple sketches or view information. Specifically, we introduce a lightweight generation network for efficient inference in real-time and a structural-aware adversarial training approach with a Stroke Enhancement Module (SEM) to capture the structural information to facilitate learning of the realistic and fine-detailed shape structures for high-fidelity performance. Extensive experiments demonstrated the effectiveness of our approach with the state-of-the-art (SOTA) performance on both synthetic and real datasets.

Keywords: Sketch, 3D reconstruction, 3D modeling.

## 1 Introduction

Recent years have witnessed tremendous demand for 3D content [1], especially with the rapid development of AR/VR and portable displays. Conventionally, 3D content is created through manual designs using Computer-Aided Design (CAD) methods. Designing numerous models by hand is not only labor-intensive and time-consuming, but also comes with high demands on the skill set of designers. Specifically, existing CAD-based methods require creators to master sophisticated software commands (_commands knowledge_) and further be able to parse a shape into sequential commands (_strategic knowledge_), which restricts their application to expert users [2, 3]. The restrictions of CAD methods create an urgent need for alternative ways to give novice users access to 3D modeling. Among many alternatives, sketch-based 3D modeling has been recognized as a potential solution in recent years - sketches play an important role in professional design and our daily life, as sketching is one of the most natural ways we humans express ideas. Although many works have utilized sketches to produce 3D models, the majority of existing efforts either require accurate line drawings from multiple viewpoints or apply a step-by-step workflow that requires _strategic knowledge_ [4, 5, 6], which is not user-friendly for the masses. Other work proposed retrieval-based approaches from existing models, which lack customizability. To mitigate this research gap, we aim to propose an effective method that uses only a single sketch as input and generates a complete and high-fidelity 3D model. By fully leveraging the information from input human sketches, the designed approach should offer an intuitive and rapid 3D modeling solution that generates high-quality and reasonable 3D models which accurately reflect the creators' ideas. However, it is a non-trivial task to obtain high-quality 3D models from a single sketch. A significant domain gap exists between sketches and 3D models, and the sparsity and ambiguity of sketches bring extra obstacles.
As in [7, 8], deploying a widely-used auto-encoder as the backbone of the network can only obtain coarse predictions; thus, Guillard et al. [8] use post-processing optimization to obtain fine-grained meshes, which is a time-consuming procedure. It remains a challenge to perform rapid 3D modeling from single sketches with high fidelity. Facing this challenge, we hereby propose Deep3DSketch+, an end-to-end neural network with a lightweight generation network and a structural-aware adversarial training approach. Our method comes with a shape discriminator with input from the predicted mesh and ground truth models to facilitate the learning of generating reasonable 3D models. A Stroke Enhancement Module (SEM) is also introduced to boost the capability for structural feature extraction of the network, which is the key information in sketches and the corresponding silhouettes. Extensive experiments were conducted and demonstrated the effectiveness of our approach. We have reported state-of-the-art (SOTA) performance on both synthetic and real datasets.

## 2 Related Works

### Sketch-based 3D Modeling

Sketch-based 3D modeling is a research topic that researchers have studied for decades. [9, 10] review the existing sketch-based 3D modeling approaches. Existing sketch-based 3D modeling falls into two categories: end-to-end approaches and interactive approaches. The interactive approaches require sequential step decomposition or specific drawing gestures or annotations [11, 4, 5, 12, 13, 14, 6], in which users need to have strategic knowledge to perform the 3D modeling process. For end-to-end approaches, works that use template primitives or retrieval-based approaches [15, 16, 17, 18] can produce some decent results but lack customizability. Some very recent works [7, 8, 19] directly reconstruct the 3D model using deep learning and treat the problem as a single-view 3D reconstruction task. However, sketch-based modeling and conventional monocular 3D reconstruction have substantial differences - the sparse and abstract nature of sketches and the lack of textures call for extra clues to produce high-quality 3D shapes, which is what we aim to provide in this work.

### Single-view 3D Reconstruction

Single-view 3D reconstruction is a long-standing and challenging task. Recent advances in large-scale datasets like ShapeNet [20] facilitate rapid development in the field, making data-driven approaches possible. Among the data-driven methods, some [21, 22, 23, 24, 25, 26, 27] use category-level information to infer 3D representations from a single image. Others [28, 29, 30, 31, 32, 33] obtain 3D models directly from 2D images, in which the emergence of differentiable rendering techniques played a critical role. There are also recent advances [34, 35, 36, 37, 38] that use unsupervised methods for implicit function representations via differentiable rendering. Many geometric processing approaches can enhance the performance [39, 40, 41, 42, 43, 44, 45]. Whereas existing methods concentrate on learning 3D geometry from 2D images, we aim to obtain 3D meshes from 2D sketches - a more abstract and sparse form than real-world colored images. Generating high-quality 3D shapes from such an abstract form of image representation is still a challenge that needs to be solved.

## 3 Method

### Preliminary

A single binary sketch \(I\in\left\{0,1\right\}^{W\times H}\) is used as the input for 3D modeling. We let \(I\left[i,j\right]=0\) if marked by the stroke, and \(I\left[i,j\right]=1\) otherwise.
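As a trivial illustration of this input convention, the snippet below binarizes a grayscale raster sketch; the threshold value is our own assumption, since the paper does not specify how raster sketches are binarized.

```python
import numpy as np

def binarize_sketch(gray, threshold=0.5):
    """Convert a grayscale sketch (dark strokes on a white canvas, values in [0, 1]) into the
    binary form used here: 0 where a stroke is present, 1 elsewhere."""
    return (gray > threshold).astype(np.uint8)

# toy 4x4 example: a dark diagonal stroke on a white canvas
gray = np.ones((4, 4))
np.fill_diagonal(gray, 0.1)
print(binarize_sketch(gray))
```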
Figure 1: The overall structure of Deep3DSketch+.

The network is designed to obtain a mesh \(M_{\Theta}=(V_{\Theta},F_{\Theta})\), in which \(V_{\Theta}\) and \(F_{\Theta}\) represent the mesh vertices and facets, and the silhouette \(S_{\Theta}\) of \(M_{\Theta}\) matches the input sketch \(I\).

### View-aware and Structure-aware 3D Modeling.

The overall structure of our method, Deep3DSketch+, is illustrated in Figure 1. The backbone of the network \(G\) is an encoder-decoder structure. As sketches are a sparse and ambiguous form of input, an encoder \(E\) first transforms the input sketch into a latent shape code \(z_{s}\), which summarizes the sketch on a coarse level with the involvement of the semantic category and the conceptual shape. A decoder \(D\) consisting of cascaded upsampling blocks is then used to calculate the vertex offsets of a template mesh and deform it to get the output mesh \(M_{\Theta}=D(z_{s})\) with fine details, by gradually inferring the 3D shape information with increased spatial resolution. Next, the generated mesh \(M_{\Theta}\) is rendered with a differentiable renderer, which generates a silhouette \(S_{\Theta}\). The network is trained end-to-end with the supervision of rendered silhouettes through the approximated gradients of the differentiable renderer. However, due to the sparse nature of sketches and the supervision of only a single-view silhouette constraint, the encoder-decoder structured generator \(\boldsymbol{G}\) cannot effectively obtain high-quality 3D shapes. Extra clues must be used to attend to fine-grained and realistic object structures [7, 8]. Therefore, [8] introduces a two-stage post-refinement scheme through optimization, which first obtains a coarse shape and further optimizes the shape to fit the silhouette. However, such an approach is time-consuming and cannot meet the requirement of real-time interactive modeling. In contrast, we aim to learn rapid mesh generation end-to-end while also being capable of producing high-fidelity results. We introduce a shape discriminator and a stroke enhancement module to make this possible.

**Shape discriminator and Multi-view Sampling.** We aim to address the challenge by introducing a shape discriminator _CNN_, which introduces 3D shapes from real datasets during training to force the mesh generator \(\boldsymbol{G}\) to produce realistic shapes, while keeping the generation process efficient during inference. Specifically, the discriminator CNN is fed with the silhouette rendered from the predicted mesh and the silhouette rendered from a manually-designed mesh. Moreover, we argue that a single silhouette cannot fully represent the information of the mesh because, unlike the 2D image translation task, the generated mesh \(M_{\Theta}\) is a 3D shape that can be viewed from various viewpoints. The silhouette constraints ensure the generated model matches the viewpoint of the particular sketch but cannot guarantee the model is realistic and reasonable across views. Therefore, we propose to randomly sample \(N\) camera poses \(\xi_{1\ldots N}\) from the camera pose distribution \(p_{\xi}\). Findings in the realm of shape-from-silhouette have demonstrated that multi-view silhouettes contain valuable geometric information about the 3D object [46, 47]. We use a differentiable rendering module to render the silhouettes \(S_{1...N}\) from the mesh \(M\) and the silhouettes \(S_{r,1...N}\) from the mesh \(M_{r}\). The differentiable rendering equation \(R\) is shown in [28].
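The sketch below illustrates the multi-view sampling idea in PyTorch-style code; `render_fn` stands in for the differentiable renderer (SoftRas in this paper), and the uniform elevation/azimuth ranges are our own assumption rather than the paper's \(p_{\xi}\).

```python
import torch

def sample_view_silhouettes(mesh, render_fn, n_views=4, elev_range=(-10.0, 30.0), azim_range=(0.0, 360.0)):
    """Render silhouettes of one mesh from N randomly sampled camera poses.
    `render_fn(mesh, elev, azim) -> (H, W) silhouette` is a placeholder for the differentiable renderer."""
    sils = []
    for _ in range(n_views):
        elev = torch.empty(1).uniform_(*elev_range)
        azim = torch.empty(1).uniform_(*azim_range)
        sils.append(render_fn(mesh, elev, azim))
    return torch.stack(sils)  # (N, H, W), fed to the shape discriminator

# During training, the discriminator would see multi-view silhouettes of both the predicted
# mesh M and a manually designed mesh M_r from the real-shape dataset, e.g.:
# d_fake = discriminator(sample_view_silhouettes(M, render_fn))
# d_real = discriminator(sample_view_silhouettes(M_r, render_fn))
```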
By feeding the discriminator with the multi-view silhouettes of both the predicted meshes and the real meshes, the network becomes aware of the geometric structure of the objects in cross-view silhouettes, ensuring the generated mesh is reasonable and high-fidelity in detail.

**Stroke Enhancement Module.** Sketch-based 3D modeling differs from conventional monocular 3D reconstruction tasks, in which the input image has rich textures and versatile features for predicting depth information. In our sketch-based modeling task, the input sketch and projected silhouettes are in a single color, and thus depth information cannot be effectively predicted from them. Alternatively, we propose to fully utilize the monocolored information for feature extraction by introducing a stroke enhancement module (SEM), as shown in Figure 2. The SEM consists of a position-aware attention module, as in [48], which encodes a wide range of contextual information into local features to learn the spatial interdependencies of features [49], and a post-process module that manipulates the feature from the position-aware attention with a series of convolutions so that it can be smoothly added, element-wise, to the original feature before attention. Such a strategy can boost the learning of features in the targeted positions, especially on the boundary. Specifically, the local feature from the silhouette \(A\in\mathbb{R}^{C\times N\times M}\) is fed into a convolutional layer to form two local features \(B,C\in\mathbb{R}^{C\times W}\), where \(W=M\times N\) equals the number of pixels, and another convolutional layer is used to form the feature map \(D\in\mathbb{R}^{C\times N\times M}\). Matrix multiplication is performed between the transpose of \(C\) and \(B\), followed by a softmax layer to generate the attention map \(S\in\mathbb{R}^{W\times W}\), which enhances the utilization of the key structural information represented by the silhouette.
\[s_{ij}=\frac{\exp\left(B_{i}*C_{j}\right)}{\sum_{i=1}^{W}\exp\left(B_{i}*C_{j}\right)}, \tag{1}\]
The attention map is used to produce the output \(F\) through a weighted sum of the original feature and the features across all positions,
\[F_{j}=\lambda\sum_{i=1}^{W}\left(s_{ji}D_{i}\right)+A_{j} \tag{2}\]

### Loss Function

The loss functions are carefully designed with three components to train the network: a multi-scale mIoU loss \(\mathcal{L}_{sp}\), a flatten loss and Laplacian smooth loss \(\mathcal{L}_{r}\), and a structure-aware GAN loss \(\mathcal{L}_{sd}\). The multi-scale mIoU loss \(\mathcal{L}_{sp}\) measures the similarity between rendered silhouettes and ground truth silhouettes. Aiming at improving computational efficiency, we progressively increase the resolution of the silhouettes, which is represented as
\[\mathcal{L}_{sp}=\sum_{i=1}^{N}\lambda_{s_{i}}\mathcal{L}_{iou}^{i} \tag{3}\]
\(\mathcal{L}_{iou}\) is defined as:
\[\mathcal{L}_{iou}\left(S_{1},S_{2}\right)=1-\frac{\left\|S_{1}\otimes S_{2}\right\|_{1}}{\left\|S_{1}\oplus S_{2}-S_{1}\otimes S_{2}\right\|_{1}} \tag{4}\]
where \(S_{1}\) and \(S_{2}\) are the two silhouettes being compared. We also propose to use a flatten loss and a Laplacian smooth loss, represented by \(\mathcal{L}_{r}\), to make meshes more realistic with higher visual quality, as shown in [7, 31, 28]. For our structure-aware GAN loss \(\mathcal{L}_{sd}\), the non-saturating GAN loss [50] is used.
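A minimal PyTorch sketch of the position-aware attention at the core of the SEM described above (Eqs. 1-2) is given below. The channel reduction factor and the learnable scalar, initialised to zero, which plays the role of \(\lambda\) in Eq. (2), follow common practice for this kind of attention [48] and are our assumptions; the post-process convolutions of the full SEM are omitted.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Position-aware attention over silhouette features, in the spirit of Eqs. (1)-(2)."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)  # produces B
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)    # produces C
        self.value = nn.Conv2d(channels, channels, kernel_size=1)       # produces D
        self.lam = nn.Parameter(torch.zeros(1))                         # learnable weight (lambda)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, A):                                    # A: (batch, C, N, M)
        b, c, h, w = A.shape
        W = h * w                                            # number of pixels
        B = self.query(A).view(b, -1, W)                     # (b, C', W)
        C = self.key(A).view(b, -1, W)                       # (b, C', W)
        D = self.value(A).view(b, -1, W)                     # (b, C,  W)
        S = self.softmax(torch.bmm(B.transpose(1, 2), C))    # (b, W, W), cf. Eq. (1)
        out = torch.bmm(D, S.transpose(1, 2)).view(b, c, h, w)  # weighted sum over positions
        return self.lam * out + A                            # residual connection, cf. Eq. (2)

# usage sketch
feat = torch.randn(2, 64, 32, 32)
print(PositionAttention(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```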
\[\mathcal{L}_{sd} =\mathbf{E}_{\mathbf{z_{v}}\sim p_{z_{v}},\xi\sim p_{\xi}}\left[f\left(CNN_{\theta_{D}}\left(R(M,\xi)\right)\right)\right]+\mathbf{E}_{\mathbf{z_{v}}\sim p_{z_{v}},\xi\sim p_{\xi}}\left[f\left(-CNN_{\theta_{D}}(R(M_{r},\xi))\right)\right] \tag{5}\]
where
\[f(u)=-\log(1+\exp(-u)) \tag{6}\]
The overall loss function \(Loss\) is calculated as the weighted sum of the three components:
\[Loss=\mathcal{L}_{sp}+\mathcal{L}_{r}+\lambda_{sd}\mathcal{L}_{sd} \tag{7}\]

## 4 Experiments

### Datasets

Publicly available datasets of sketches and their corresponding 3D models are rare. Following [7], we take an alternative solution by using the synthetic data _ShapeNet-Synthetic_ for training, and apply the trained network to the real-world data _ShapeNet-Sketch_ for performance evaluation. The synthetic data is obtained by collecting an edge map extracted by a Canny edge detector from the rendered images provided by Kar et al. [51]. 13 categories of 3D objects from ShapeNet are used. The ShapeNet-Sketch data is collected from real humans: volunteers with different drawing skills draw objects based on the images of 3D objects from [51]. A total of 1,300 sketches and their corresponding 3D shapes are included in the dataset.

Figure 2: The Details of Stroke Enhancement Module (SEM). \(\otimes\) denotes element-wise multiplication, \(\oplus\) denotes element-wise add operation.

### Implementation details

We use ResNet-18 [52] as the encoder \(E\) for image feature extraction. SoftRas [28] is used for rendering silhouettes. Each 3D object is placed with 0 elevation and 0 azimuth angle in the canonical view, at a fixed distance from the camera. The ground-truth viewpoint is used for rendering. The Adam optimizer is used with an initial learning rate of 1e-4, which is multiplied by 0.3 every 800 epochs. Betas are set to 0.9 and 0.999. The total number of training epochs is 2000. The model is trained individually on each class of the dataset. \(\lambda_{sd}\) in Equation 7 is set to 0.1.

### Results

#### 4.3.1 The ShapeNet-Synthetic Dataset.

Following [7], we compare our method with the model retrieval approach using features from a pre-trained sketch classification network, and with [7] as the existing state-of-the-art (SOTA) model. We first evaluate the model performance on the _ShapeNet-Synthetic_ dataset, which has accurate ground truth 3D models for training and evaluation. The commonly-used 3D reconstruction metric - voxel IoU - is used to measure the fidelity of the generated mesh, as shown in Table 1. We also measured the Chamfer Distance, another widely used metric for mesh similarities, as shown in the Supplementary Material.
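For completeness, the loss terms of Eqs. (3)-(7) above can be stated compactly in code. The sketch below is our reading of these formulas with soft (real-valued) silhouettes; the split of sign conventions between generator and discriminator updates follows [50] and is not spelled out here.

```python
import torch
import torch.nn.functional as F

def silhouette_iou_loss(S1, S2, eps=1e-6):
    """Eq. (4): 1 - |S1 * S2|_1 / |S1 + S2 - S1 * S2|_1 per image, averaged over the batch."""
    inter = (S1 * S2).abs().sum(dim=(-2, -1))
    union = (S1 + S2 - S1 * S2).abs().sum(dim=(-2, -1))
    return (1.0 - inter / (union + eps)).mean()

def multiscale_silhouette_loss(preds, gts, weights):
    """Eq. (3): weighted sum of IoU losses over silhouettes of increasing resolution."""
    return sum(w * silhouette_iou_loss(p, g) for w, p, g in zip(weights, preds, gts))

def f(u):
    """Eq. (6): f(u) = -log(1 + exp(-u)), written with softplus for numerical stability."""
    return -F.softplus(-u)

def structure_aware_gan_term(d_fake, d_real):
    """Eq. (5): f applied to discriminator scores of rendered fake/real multi-view silhouettes."""
    return (f(d_fake) + f(-d_real)).mean()

# toy check of Eq. (7) with a single scale and placeholder discriminator scores
S1, S2 = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
total = multiscale_silhouette_loss([S1], [S2], [1.0]) + 0.1 * structure_aware_gan_term(torch.randn(2), torch.randn(2))
print(total)
```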
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{Shapenet-synthetic (Voxel Iou \(\uparrow\))} \\ \hline & car & sofa & airplane & bench & display & chair & table \\ \hline Retrieval & 0.667 & 0.483 & 0.513 & 0.380 & 0.385 & 0.346 & 0.311 \\ \hline Auto-encoder & 0.769 & 0.613 & 0.576 & 0.467 & 0.541 & 0.496 & **0.512** \\ \hline Sketch2Model & 0.751 & 0.622 & 0.624 & 0.481 & **0.604** & 0.522 & 0.478 \\ \hline **Ours** & **0.782** & **0.640** & **0.632** & **0.510** & 0.588 & **0.525** & 0.510 \\ \hline & telephone & cabinet & loudspeaker & watercraft & lamp & rifle & mean \\ \hline Retrieval & 0.622 & 0.518 & 0.468 & 0.422 & 0.325 & 0.475 & 0.455 \\ \hline Auto-encoder & 0.706 & 0.663 & 0.629 & 0.556 & 0.431 & 0.605 & 0.582 \\ \hline Sketch2Model & 0.719 & **0.701** & **0.641** & **0.586** & **0.472** & 0.612 & 0.601 \\ \hline **Ours** & **0.757** & 0.699 & 0.630 & 0.583 & 0.466 & **0.632** & **0.611** \\ \hline \end{tabular} \end{table} Table 1: The quantitative evaluation of the ShapeNet-Synthetic dataset.

The quantitative evaluation shows the effectiveness of our approach, which achieves state-of-the-art (SOTA) performance. We also conducted a qualitative evaluation of our method compared with the existing state-of-the-art, which further demonstrated the effectiveness of our approach in producing models with higher quality and fidelity in structure, as shown in Figure 3.

Figure 3: **Qualitative evaluation with existing state-of-the-art.** The visualization of the generated 3D models demonstrates that our approach is capable of obtaining higher fidelity of 3D structures.

**The ShapeNet-Sketch Dataset.** After training on the synthetic data, we further evaluate the performance on real-world human drawings, which is more challenging due to the creators' varied drawing skills and styles. A domain gap also exists between the synthetic and real data when we train the model on the ShapeNet-Synthetic dataset and use the ShapeNet-Sketch dataset for evaluation. In such settings, a powerful and robust feature extractor with structural awareness is more critical. In experiments, our model generalizes well to real data. As shown in Table 3, our model outperforms the existing state-of-the-art in most categories, demonstrating the effectiveness of our approach. It is worth noting that domain adaptation techniques could be a potential booster for the network's performance on real datasets where a domain gap is present, which could be explored in future research.

**Evaluating Runtime for 3D modeling.** As previously mentioned, we aim to make the network efficient for rapid 3D modeling. After the network was well-trained, we evaluated it on a PC equipped with a consumer-level graphics card (NVIDIA GeForce RTX 3090). Our method achieves a generation speed of 90 FPS, a 38.9% speed gain compared to Sketch2Model (0.018 s) [7]. We also tested the performance solely on a CPU (Intel Xeon Gold 5218) and report an 11.4% speed gain compared to Sketch2Model (0.070 s) [7], at a rate of 16 FPS, which is sufficient for smooth computer-human interaction.

### Ablation Study

To verify the effectiveness of our proposed method, we conducted the ablation study shown in Table 4. We demonstrate that our method with the Shape Discriminator (SD) and the Stroke Enhancement Module (SEM) contributes a performance gain and produces models with higher fidelity, as shown in Figure 4, compared to the variant without SD or SEM (the baseline method).
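Regarding the runtime figures reported above, a simple way to obtain such FPS measurements is sketched below; the model and input are placeholders of ours, and the absolute numbers depend entirely on hardware.

```python
import time
import torch

def measure_fps(model, example_input, n_iters=100, device="cpu"):
    """Rough throughput measurement: average single-input latency over n_iters forward passes."""
    model = model.to(device).eval()
    x = example_input.to(device)
    with torch.no_grad():
        for _ in range(10):                      # warm-up runs
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(n_iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_iters / (time.time() - t0)

# e.g. measure_fps(generator, torch.zeros(1, 1, 256, 256), device="cpu")
```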
Moreover, we argue that the random viewpoint sampling combined with the shape discriminator (SD), with real shapes as inputs, allows the neural network "to see" real shapes from multiple angles, and thus to predict reasonable structural information that is not even present in the sketch (which might not be represented due to viewpoint constraints). In Figure 5, we show several examples. It can be observed that the region of the sitting pad on the sofa is reconstructed, although the input human sketch only shows the back view. The flat plane at the back of the car is reconstructed, although the input human sketch is viewed from near the front, thanks to the introduction of SD and random viewpoint sampling.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline SD SEM & car & sofa & airplane & bench & display & chair & table \\ \hline \hline & 0.767 & 0.630 & 0.633 & 0.503 & 0.586 & 0.524 & 0.493 \\ \(\surd\) & 0.778 & 0.632 & **0.637** & 0.503 & **0.588** & 0.523 & 0.485 \\ \(\surd\) & \(\surd\) & **0.782** & **0.640** & 0.632 & **0.510** & **0.588** & **0.525** & **0.510** \\ \hline SD SEM & telephone & cabinet & loudspeaker & watercraft & lamp & rifle & mean \\ \hline \hline & 0.742 & 0.690 & 0.555 & 0.563 & 0.458 & 0.613 & 0.598 \\ \(\surd\) & 0.749 & 0.688 & 0.617 & 0.567 & 0.454 & 0.612 & 0.602 \\ \(\surd\) & \(\surd\) & **0.757** & **0.699** & **0.630** & **0.583** & **0.466** & **0.624** & **0.611** \\ \hline \end{tabular} \end{table} Table 4: Ablation Study.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{Shapenet-sketch (Voxel Iou \(\uparrow\))} \\ \hline & car & sofa & airplane & bench & display & chair & table \\ \hline Retrieval & 0.626 & 0.431 & 0.411 & 0.219 & 0.338 & 0.238 & 0.232 \\ \hline Auto-encoder & 0.648 & **0.534** & 0.469 & 0.347 & 0.472 & 0.361 & 0.359 \\ \hline Sketch2Model & 0.659 & **0.534** & 0.487 & 0.366 & **0.479** & **0.393** & 0.357 \\ \hline **Ours** & **0.675** & **0.534** & **0.490** & **0.368** & 0.463 & 0.382 & **0.370** \\ \hline & telephone & cabinet & loudspeaker & watercraft & lamp & rifle & mean \\ \hline Retrieval & 0.536 & 0.431 & 0.365 & 0.369 & 0.223 & 0.413 & 0.370 \\ \hline Auto-encoder & 0.537 & 0.534 & 0.533 & 0.456 & 0.328 & 0.541 & 0.372 \\ \hline Sketch2Model & 0.554 & **0.568** & **0.544** & 0.450 & 0.338 & 0.534 & **0.483** \\ \hline **Ours** & **0.576** & 0.553 & 0.514 & **0.467** & **0.347** & **0.543** & **0.483** \\ \hline \end{tabular} \end{table} Table 3: The quantitative evaluation of the ShapeNet-Sketch dataset.

Figure 4: **Ablation Study.** Our method generates more fine-grained structures compared to the baseline method.

Figure 5: **The effectiveness of SD and random-viewpoint sampling.** As shown in the example, the neural network generates more fine-grained structures compared to the baseline method.

## 5 Conclusion

In this paper, we propose Deep3DSketch+, which takes a single sketch and produces a high-fidelity 3D model. We introduce a shape discriminator with random-pose sampling to allow the network to generate reasonable 3D shapes, and a stroke enhancement module to fully exploit the mono-color silhouette information for high-fidelity 3D reconstruction. The proposed method is efficient and effective, as demonstrated by our extensive experiments - we have reported state-of-the-art (SOTA) performance on both real and synthetic data.
We believe that our proposed easy-to-use and intuitive sketch-based modeling method has great potential to revolutionize the future 3D modeling pipeline.
With the rapid development of AR/VR, the demand for 3D content has grown tremendously. While the widely used Computer-Aided Design (CAD) approach requires a time-consuming and labor-intensive modeling process, sketch-based 3D modeling offers a potential solution as a natural form of human-computer interaction. However, the sparsity and ambiguity of sketches make it difficult to generate high-fidelity content that reflects the creator's ideas. Precise drawing from multiple viewpoints or strategic step-by-step drawing is often required, which is unfriendly to novice users in particular. In this work, we introduce Deep3DSketch+, a novel end-to-end approach that performs 3D modeling from a single free-hand sketch without requiring multiple sketches or view information as input. Specifically, we introduce a lightweight generation network enabling efficient real-time inference and a structure-aware adversarial training approach with a Stroke Enhancement Module (SEM) that captures structural information, facilitating the learning of realistic and fine-detailed shape structures for high-fidelity performance. Extensive experiments demonstrate the effectiveness of our approach, with state-of-the-art (SOTA) performance on both synthetic and real datasets.
2309.13760
The effect of Jupiter on the CAI storage problem
By studying the distribution of calcium-aluminium-rich inclusions (CAIs) that are embedded within meteorites, we can learn about the dynamical history of the protoplanetary disk from which our Solar System formed. A long-standing problem concerning CAIs is the CAI storage problem. CAIs are thought to have formed at high temperatures near the Sun, but they are primarily found in carbonaceous chondrites, which formed much further out, beyond the orbit of Jupiter. Additionally, radial drift of CAI particles should have removed them from the solar protoplanetary disk several million years before the parent bodies of meteorites in which they are encountered would have accreted. We revisit a previously suggested solution to the CAI storage problem by Desch, Kalyaan, and Alexander which proposed that CAIs were mixed radially outward through the disk and subsequently got trapped in a pressure maximum created by Jupiter's growing core opening a planet gap. Our aim is to investigate whether their solution still works when we take into account the infall phase during which the disk builds up from the collapse of a molecular cloud core. We build a 1D numerical code in Python using the DISKLAB package to simulate the evolution of the solar protoplanetary disk, starting with a collapsing molecular cloud. We find that outward transport of CAIs during the infall phase is very efficient, possibly mixing them all the way into the far outer disk. Subsequent inward radial drift collects CAIs in the pressure maximum beyond Jupiter's orbit while draining the inner disk, roughly reproducing parts of the result by Desch et al. By introducing CAI formation so early, abundances out to 100 AU remain significant, possibly not consistent with some meteoritic data. It is possible to create a disk that does not expand as far out and also does not push CAIs as far out by using a very slowly rotating cloud.
Stefan Jongejan, Carsten Dominik, Cornelis Dullemond
2023-09-24T21:41:35
http://arxiv.org/abs/2309.13760v1
# The effect of Jupiter on the CAI storage problem ###### Abstract Context:Meteorites preserve an imprint of conditions in the early Solar System. By studying the distribution of calcium-aluminium-rich inclusions (CAIs) that are embedded within meteorites, we can learn about the dynamical history of the protoplanetary disk from which our Solar System formed. A long-standing problem concerning CAIs is the CAI storage problem. CAIs are thought to have formed at high temperatures near the Sun, but they are primarily found in carbonaceous chondrites, which formed much further out, beyond the orbit of Jupiter. Additionally, radial drift of CAI particles should have removed them from the solar protoplanetary disk several million years before the parent bodies of meteorites in which they are encountered would have accreted. Aims:We revisit a previously suggested solution to the CAI storage problem by Desch, Kalyan, and Alexander which proposed that CAIs were mixed radially outward through the disk and subsequently got trapped in a pressure maximum created by Jupiter's growing core opening a planet gap. Our aim is to investigate whether their solution still works when we take into account the infall phase during which the disk builds up from the collapse of a molecular cloud core. Methods:We build a 1D numerical code in Python using the DISKLAB package to simulate the evolution of the solar protoplanetary disk, starting with a collapsing molecular cloud. CAIs are created in the model by thermal processing of solar nebula composition dust, and subsequently transported through the disk by turbulent diffusion, radial drift and advection by the gas. Results:We find that outward transport of CAIs during the infall phase is very efficient, possibly mixing them all the way into the far outer disk. Subsequent inward radial drift collects CAIs in the pressure maximum beyond Jupiter's orbit while draining the inner disk, roughly reproducing parts of the result by Desch et al. By introducing CAI formation so early, abundances out to 100 AU remain significant, possibly not consistent with some meteoritic data. It is possible to create a disk that does not expand as far out and also does not push CAIs as far out by using a very slowly rotating cloud. Conclusions: ## 1 Introduction A wealth of information about the conditions in the early Solar System can be obtained through the study of meteorites. These celestial objects are thought to have undergone little change since their formation over 4.5 billion years ago, and as such they preserve an imprint of the conditions in the solar protoplanetary disk. The distribution of calcium-aluminium-rich inclusions (CAIs) in present-day meteorites is one such piece of information that can tell us something about the dynamical history of our Solar System. These structures were not expected to survive long enough to end up in any meteorite parent bodies at all, yet somehow they must have survived the harsh conditions at the time of their birth for long enough to do so. To add to this mystery, they are also predominantly found in locations far away from the Sun, in whose vicinity they must have originally formed. These two issues together constitute the CAI storage problem. In this paper we present the results of a numerical model of the early Solar System, which attempts to explain the observed distribution of CAIs in meteorites. Our model builds on the solution previously proposed by Desch et al. (2018). 
That paper attempts to construct a detailed, fine-tuned model for the formation and outward mixing of CAIs, with the goal to fit and address many aspects of the record that is available to us in the form of meteoritic data. While meteoritic data samples limited regions of the Solar System, it is very detailed and produces an important guide on how to model processes in the young Solar System. Our goal in the present study is not to attempt a reconstruction of the Solar System and the meteoritic record on a similar level of detail as Desch et al. Instead, we perform a calculation with similar assumptions about physical processes in the disk, in particular with similar assumptions about the disk viscosity and the CAI-forming processes, but start the computation earlier by computing the effects of the infall phase from a molecular cloud core. We show that the infall phase can significantly influence the "starting conditions" in the final phase of the circumstellar disk in which the Desch model is anchored. In this introduction we start by giving a brief overview about the different kinds of meteorites encountered. We then describe CAIs and the CAI storage problem in more detail, before discussing some of the work previously done on this topic. Meteorites are an important tool for understanding the history of the Solar System and planet formation mechanisms, since they preserve information about the conditions in the solar protoplanetary disk, such as the abundances of their constituent elements, isotope ratios, or temperatures they must have been subjected to. The classification of meteorites into different classes, clans, groups, and subgroups can be a murky topic upon which there is no universal agreement. Here we only present a simplified overview of the different kinds of meteorites. An extensive modern overview of meteorite classification can for example be found in Weisberg et al (2006). Broadly speaking, we can distinguish two types of meteorites: achondrites, which are composed of material that underwent melting or differentiation inside a massive parent body, and chondrites, which did not undergo such processes. Using this definition, achondrites encompass not only rocky achondrites, which represent crustal material, but also iron meteorites, which are composed of an iron-nickel alloy thought to come from the core of a differentiated parent object. An intermediate case exists in the form of pallasites, which consist of silicates embedded within an iron matrix, appearing to have formed from a mix of core as well as mantle or crustal material. Another category of achondrite are the primitive achondrites, which only underwent partial melting (making them more closely related to chondrites), or which perhaps did melt, but without solidifying in crystalline form as a result of it. Other than achondrites originating from parent asteroids or comets, a small number of finds are thought to actually have originated on Mars or the Moon. Chondrites make up the majority of the known meteorite population. Because chondrites did not undergo melting after accreting into a parent body, they are the most primitive kind of meteorite, most closely preserving an imprint of the conditions in the solar protoplanetary disk. There are three main classes of chondrites: ordinary chondrites (OCs), which are the most common type, enstatite chondrites (ECs), which are rich in the mineral enstatite, and carbonaceous chondrites (CCs), which have relatively high carbon abundances. 
Some smaller classes also exist, as well as ungrouped individual meteorites that might represent the first kind of an entirely new class. Each of the main chondrite classes can be further subdivided into various groups, which are generally thought to sample separate meteorite parent bodies. A defining feature of chondrites is that chondrules are found embedded within them. Chondrules are millimeter-sized spherical droplets of igneous rocks that must have melted at high temperatures before they ended up inside their parent meteorites. This is a first hint that material that was processed at high temperatures must somehow have been transported out to colder regions of the solar protoplanetary disk (Brownlee et al. 2006). The formation time of chondrite parent bodies can be determined indirectly in several ways. Radiometric dating of components such as chondrules, which must have formed before the parent body itself, provides upper limits on the age of the parent body. Similarly, lower limits can be determined through radiometric dating of minerals that are known to have formed only after the parent body itself accreted. Finally, estimates of the maximum temperature reached by the parent body provide model-dependent limits on the accretion time. Using these constraints, enstatite chondrite parent bodies are estimated to have formed first, around 1.8 Myr after the formation of the Solar System, followed by the ordinary chondrite parent bodies after around 2.1 Myr. Carbonaceous chondrite parent bodies formed last. The accretion time of the parent bodies for the different groups the CCs are subdivided in is more spread out over time, but ranges from 2.4 to 4.1 Myr after the formation of the Solar System (Desch et al. 2018). Constraints on meteorite ages also make it possible to put constraints on the timing of planet formation processes. Different kinds of chondrites can be spectroscopically matched to certain groups of asteroids that are found in the asteroid belt today. This way, it can also be determined roughly what heliocentric distance these meteorites originated from. Asteroids associated with enstatite chondrites are found closest to the Sun, around 2 AU. Those linked to ordinary chondrites dominate the region between 2 and 2.5 AU, while the asteroids similar to carbonaceous chondrites are commonly found between 2.5 and 4 AU (Desch et al. 2018). While this does not necessarily mean that the meteorite parent bodies also originally accreted at these locations, at least this ordering of chondrite types by current heliocentric distance seems to match that of their original formation locations. Evidence for this is for example provided by their water content. While enstatite chondrites contain virtually no water, ordinary chondrites do contain small amounts, and also show evidence for aqueous alteration. Carbonaceous chondrites on the other hand are water-rich (Alexander et al. 2012). Since condensation of water-rich minerals is only possible in the relatively cool region of the disk beyond the snow line, this indicates that carbonaceous chondrites formed relatively far from the Sun. Warren (2011) discovered that meteorites can be divided into two distinct clusters based on their isotopic abundances. Carbonaceous chondrites are enriched in certain neutron-rich isotopes, such as \({}^{50}\)Ti or \({}^{54}\)Cr, whereas ordinary and enstatite chondrites, as well as most achondrites (collectively referred to as the non-carbonaceous or NC meteorites), are relatively depleted in these isotopes. 
This NC-CC dichotomy implies that the two clusters of meteorites formed from two isotopically distinct reservoirs of material. Furthermore, because the parent bodies of the meteorites in which this dichotomy is observed are known to have accreted several million years apart, these two reservoirs must also have coexisted for several million years without significant mixing of material between them. A likely explanation for the separation of the two reservoirs is the formation of Jupiter, which opened a planet gap in the solar protoplanetary disk that prevented material exchange between the reservoirs (Kruijer et al. 2017).

A good candidate for explaining where the isotopic differences between the NC and CC reservoirs originated in the first place is heterogeneity in the Solar System's parent molecular cloud. It is possible this cloud was not homogeneous in composition, or that its composition changed over time. Because infall from the molecular cloud affects different regions of the protoplanetary disk in different ways, it is possible that material containing neutron-rich isotopes was primarily added to the CC reservoir, or that material depleted in these isotopes was added primarily to the NC reservoir (Nanne et al. 2019). Indeed, the presence in meteorites of daughter isotopes of \({}^{26}\)Al, a radioactive isotope with a half-life of only 0.72 Myr, is evidence that the Solar System's molecular cloud must have been recently polluted with radioactive isotopes, possibly originating from a supernova or from Wolf-Rayet stellar winds; such an event could explain heterogeneities in either space or time. The present-day coexistence of asteroids linked to both NC and CC meteorites in the asteroid belt can be explained as a result of Jupiter's migration after the asteroid parent bodies accreted, first in- and then outward. This would scatter the different asteroid populations, after which they ended up together in the same asteroid belt (Walsh et al. 2011).

Calcium-aluminium-rich inclusions, or CAIs, are small solid structures that are found as inclusions in certain types of meteorites. Often irregularly shaped, they range in size from micrometers to about a centimeter. They are composed of minerals such as corundum (aluminium oxide) or perovskite (calcium titanium oxide), which are enriched in refractory elements such as calcium and aluminium, and which condense at high temperatures (T \(\approx\) 1400 K), implying that CAIs formed near the Sun, where such temperatures would be achieved (Grossman 1972). CAIs are also the oldest dated solids, with a mean age of approximately 4567.30 \(\pm\) 0.16 Myr (Connelly et al. 2012). Because of this, and because their constituent minerals are predicted to be the first to condense in a cooling gas of solar nebula composition, the age of CAIs is often equated with the age of the Solar System itself. The time of (initial) CAI formation is therefore also the time relative to which the accretion times of meteorite parent bodies were defined previously.

CAIs are primarily found embedded in carbonaceous chondrites, in which they typically make up a couple percent of the total volume. In ordinary and enstatite chondrites, on the other hand, CAI abundances are easily an order of magnitude lower, typically less than a tenth of a percent of the total volume. This distribution presents us with two serious problems that together make up the CAI storage problem. 
The first problem is that CAIs are thought to have formed at high temperatures, which are only reached close to the Sun. As we have seen however, the carbonaceous chondrites in which they ended up formed further out in the solar protoplanetary disk, likely beyond the orbit of Jupiter in the CC reservoir. Clearly CAIs must have been transported outward through the disk to end up there, but this raises the question of why they did not also end up in ordinary and enstatite chondrites, which formed at intermediate heliocentric distances.

The second problem is related to the accretion time of carbonaceous chondrite parent bodies. Because the gas in a protoplanetary disk normally experiences an outward pressure-gradient force, which partially supports it against the gravity of the central star, its orbital velocity is slightly less than Keplerian. Solid particles do not experience such a pressure-gradient force, and hence have to orbit the central star at the Keplerian velocity. The velocity difference between gas and dust leads to a drag force on the dust particles1 that robs them of angular momentum and causes them to slowly drift inward, spiralling towards the star over time. This is what is called radial drift of dust particles (Weidenschilling 1977). For particles the size of large CAIs (in which most of the CAI mass is contained) that form in the inner disk near the Sun, this radial drift velocity is high enough that they are expected to all vanish into the Sun on a time scale of order \(10^{4}\) years. This is evidently much shorter than the time CAIs would need to remain in the disk in order to be incorporated into carbonaceous chondrite parent bodies, the last of which did not accrete until over 4 Myr after CAI formation.

Footnote 1: Technically, smaller dust grains are so well coupled to the gas that they simply get dragged along with the same orbital velocity. But because this velocity is sub-Keplerian, they will drift inward all the same.

The CAI storage problem can therefore be summarized as the question of how CAIs managed to survive long enough in the disk to end up in any meteorites in the first place, and why they then ended up preferentially in carbonaceous chondrites, which of all meteorite types formed farthest from the location where the CAIs themselves formed. This problem is closely related to the problem of the NC-CC dichotomy. CAIs are enriched in many of the elements that are also found to be enriched in the CC reservoir in general. However, the NC-CC dichotomy has been found to also extend to elements which aren't present in CAIs, such as nickel (Nanne et al. 2019). This means that simply adding CAIs to a reservoir of otherwise NC composition does not lead to the CC reservoir.

Over time, a number of mechanisms have been proposed that could explain how CAIs and other dust species can be transported outward in a protoplanetary disk. Cuzzi et al. (2003) showed that inward radial drift of CAIs can be overcome by turbulent diffusion, a process in which turbulent motions of the gas within a protoplanetary disk redistribute solid particles by advection, at least part of which would be in the outward direction. This allows CAIs to survive in the disk on time scales of order \(10^{6}\) years. 
Keller and Gail (2003) performed both calculations and numerical simulations showing that while the accretion stream in a protoplanetary disk normally points inward, flows near the disk midplane can actually point outward, thereby providing another mechanism of radial outward transport of material called meridional flow. Boss et al. (2012) showed that the gravitational instability, an instability that occurs when a disk becomes so massive that its own self-gravity can no longer be neglected, can lead to a rapid redistribution of mass both in- and outward.

Cuzzi et al. (2003) also proposed the CAI Factory model, which is a simplified picture of the CAI formation environment. The CAI Factory consists of a region at a nearly constant temperature, where the minerals CAIs are composed of can exist as solids, but which is too hot to allow for other silicates, such as chondrules, in solid form. The radial extent of the CAI Factory changes over time, the outer boundary moving inward as the disk cools.

Desch, Kalyaan, and Alexander (2018) proposed a solution to the CAI storage problem, to which we will hereafter refer simply as the DKA18 model. They constructed a 1D hydrodynamics code that builds on Cuzzi's CAI Factory model as well as the suggestion of Kruijer et al. (2017), previously mentioned in the context of the NC-CC dichotomy, that a planet gap opened by Jupiter prevented material exchange between the regions interior and exterior to the gap. Their simulation begins at \(t=0\) with a disk that is seeded with dust of solar nebula composition. Viscous heating creates a region near the Sun, the CAI Factory, in which the temperature reaches 1400 K. Solar nebula composition dust that enters this region is thermally processed, and a certain fraction of it is instantaneously converted into CAIs of one particular size. This conversion of dust continues for several \(10^{5}\) years. In the meantime, the disk is viscously evolving. CAIs are transported out of the CAI Factory, both into the Sun and outward into the disk by the effects of turbulent diffusion and meridional flow. A small part of the CAIs diffused past a heliocentric distance of 3 AU in this way, where Jupiter's core of 30 \(M_{\oplus}\) is then assumed to form 0.6 Myr into the simulation. As the planet grows to its full size over the course of the next 4 Myr by accreting gas from its surroundings, it opens up a gap in the disk, where the surface density of the gas is significantly reduced. As a result of this, there exists a region just beyond the planet location where the gas density necessarily increases again in the outward direction. This means that the pressure-gradient force here points inward instead of outward as it normally does, and that gas in this region will be orbiting the Sun with super-Keplerian velocities in order to balance the combined inward pull of gravity and pressure. This also reverses the sign of dust radial drift, causing CAIs to be transported outward. Some distance beyond the planet, the gas surface density and pressure reach a maximum before continuing their normal outward decrease. In this pressure bump, the gas orbits with exactly the Keplerian velocity, removing the drag force on solid particles and therefore the cause of their radial drift. This thus creates a situation in which CAIs in the vicinity always drift in the direction of the pressure bump, at which location they can remain in a stable orbit for millions of years, until the moment they are incorporated into the accreting meteorite parent bodies. 
In the meantime, CAIs in the inner disk continue to drift into the Sun unimpeded, depleting the formation location of ordinary and enstatite chondrites. At the end of the simulation, the DKA18 model predicts that all remaining CAIs in the disk are concentrated around the pressure bump behind Jupiter's orbit, where their abundance peaks at about 6% of all solids.2

Footnote 2: Other than calculating the abundances of CAIs and refractory elements, Desch, Kalyaan, & Alexander also demonstrate that disk conditions consistent with the formation of chondrites matching those abundances, as well as properties such as water content, emerge from contextual evidence such as \({}^{54}\)Cr anomalies and radiometric dating. They also calculate the particle size concentrated by turbulence in each chondrite's formation location and find an excellent match with observed chondrule sizes.

While it seems that the DKA18 model conclusively solves the CAI storage problem, there are some issues with it that could have a significant impact on the results. The most important of these issues is that the model starts with a fully formed star plus disk system, within which CAI formation is then initiated. The build-up of the solar protoplanetary disk from a parent molecular cloud core is neglected. It is therefore unclear what effect the infall phase would have on the timing of CAI formation, their dynamical evolution and final abundance profile, or indeed whether the solution of Jupiter keeping CAIs in place would still work in the first place. While Yang and Ciesla (2012) did perform disk simulations in which the effects of the infall phase were taken into account, showing that CAIs that formed during the infall phase could be preserved in primitive bodies that accreted in the outer disk, they did not address the question of why CAIs would be depleted in the inner disk.

## 2 Methods

### DISKLAB

Our model was programmed in Python using DISKLAB, a package developed by Cornelis Dullemond and Til Birnstiel3, which contains many basic functions for setting up disk models, calculating quantities such as surface densities and temperatures, and evolving these models over time. While DISKLAB contains methods for the vertical structure of disks and for making full two-dimensional models, only one-dimensional radial (and hence rotationally symmetric) models were used for this project.

Footnote 3: https://github.com/dullemond/DISKLAB (Dullemond & Birnstiel; access granted upon request)

Setting up any model in DISKLAB begins by calling the DiskRadialModel class, which sets up a grid of discrete radial coordinates at which disk quantities will be calculated, as well as basic stellar parameters such as mass and luminosity, all of which can be freely modified. A surface density profile for the disk gas can then be set by hand, but usually the easier approach is to begin with a basic model such as a powerlaw disk and then modify it as desired. In addition to the (main) gas component of the disk, any number of solid components (dust species) can be added to the model. This is done by specifying a certain grain size (or alternatively, a Stokes number) and grain density, as well as the surface density for that component, which is usually the gas surface density multiplied by some dust-to-gas ratio. Dust grains of different sizes have to be added to the model as separate components, even if they represent dust of the same chemical composition. 
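As a concrete illustration of this workflow, the sketch below sets up a radial model and attaches a single dust component. It is only a schematic: the module path, constructor keywords, and the add_dust call are assumptions based on the description above rather than verified DISKLAB signatures, and the numerical values are placeholders.

```python
# Schematic DISKLAB-style setup; the import path, keyword names and add_dust()
# call are assumptions for illustration, not verified DISKLAB API.
import numpy as np
from disklab import DiskRadialModel   # assumed import path

au   = 1.49598e13                     # astronomical unit [cm]
msun = 1.989e33                       # solar mass [g]

# Radial grid of 1000 logarithmically spaced points between 0.06 and 1000 AU,
# around a (still very low-mass) central protostar
disk = DiskRadialModel(rin=0.06 * au, rout=1000 * au, nr=1000, mstar=1.05e-4 * msun)

# Radius-dependent quantities are stored as plain NumPy arrays, so the gas
# surface density can simply be overwritten by hand, e.g. with a nearly
# massless powerlaw standing in for the initial disk model
disk.sigma = 1e-10 * (disk.r / au) ** (-1.0)

# Add one dust species: grain size a [cm], internal grain density xi [g/cm^3],
# and a surface density equal to the gas surface density times a dust-to-gas ratio
dust = disk.add_dust(agrain=1e-4, xigrain=3.0)   # 1 micron grains (keyword names assumed)
dust.sigma = 0.01 * disk.sigma
```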
It is not possible to follow individual dust grains through the disk with this method. An important property of a protoplanetary disk is its (mid-plane) temperature, since this directly influences the isothermal sound speed, \[c_{s}=\sqrt{\frac{k_{B}T}{\mu m_{p}}}, \tag{1}\] (with \(T\) the temperature, \(k_{B}\) the Boltzmann constant, \(\mu\) the mean molecular weight and \(m_{p}\) the proton mass) and hence also important parameters such as the viscosity \(\nu\) through \[\nu=\alpha c_{s}h. \tag{2}\] Here the Shakura-Sunyaev \(\alpha\)-parameter quantifies the strength of the angular momentum transport due to turbulence (Shakura & Sunyaev 1973), and \(h\) is the vertical scale height of the disk. The midplane temperature due to stellar irradiation \(T_{\rm irr}\) is calculated separately from the temperature due to viscous heating \(T_{\rm visc}\). The two temperatures are then combined as \[T=\left(T_{\rm irr}^{4}+T_{\rm visc}^{4}\right)^{1/4}. \tag{3}\] The irradiation temperature itself is calculated by equating the heating rate due to irradiation by the central star \(Q_{\rm irr}(r)\) with the cooling rate \(Q_{\rm cool}(r)\): \[Q_{\rm irr}(r)=2\phi(r)\frac{L_{*}}{4\pi r^{2}}, \tag{4}\] \[Q_{\rm cool}(r)=2\sigma_{\rm SB}T_{\rm eff}(r)^{4}, \tag{5}\] with \(\phi(r)\) the flaring angle of the disk, \(L_{*}\) the stellar luminosity, \(\sigma_{\rm SB}\) the Stefan-Boltzmann constant and \(T_{\rm eff}\) the effective temperature at the surface of the disk. This surface temperature is related to the midplane temperature \(T_{\rm irr}\) as \[T_{\rm irr}=\left(\frac{T_{\rm eff}^{4}}{2}\right)^{1/4}, \tag{6}\] where the factor 2 comes from the fact that of all the stellar radiation intercepted at the disk surface, only half is re-emitted into the disk where it can heat up the midplane (Chiang & Goldreich 1999; Dullemond et al. 2001). Combining Equations (4), (5) and (6) then leads to a final expression for the midplane temperature due to irradiation: \[T_{\rm irr}=\left(\frac{\phi(r)}{2\sigma_{\rm SB}}\frac{L_{*}}{4\pi r^{2}}\right)^{1/4}. \tag{7}\] Similarly, the viscous temperature \(T_{\rm visc}\) can be found by equating the viscous heating rate \(Q_{\rm visc}\) with the cooling rate \(Q_{\rm cool}\): \[Q_{\rm visc}(r)=\frac{9}{4}\Sigma_{g}(r)\nu(r)\Omega_{K}(r)^{2}, \tag{8}\] \[Q_{\rm cool}(r)=2\sigma_{\rm SB}T_{\rm eff}(r)^{4}\left(1-e^{-2\tau_{\rm Ross}}\right), \tag{9}\] where \(\Sigma_{g}\) is the gas surface density, \(\nu\) the viscosity, \(\Omega_{K}=\sqrt{GM_{*}/r^{3}}\) the Kepler frequency and \(\tau_{\rm Ross}\) the Rosseland optical depth, which depends on the dust surface density \(\Sigma_{d}\) and Rosseland opacity \(\kappa_{d,\rm Ross}\) as \[\tau_{\rm Ross}=\Sigma_{d}\kappa_{d,\rm Ross}. \tag{10}\] The relation used between the effective (surface) temperature \(T_{\rm eff}\) and the midplane temperature \(T_{\rm visc}\) is now \[T_{\rm visc}=\left(\frac{1}{2}\tau_{\rm Ross}+1\right)^{1/4}T_{\rm eff}. \tag{11}\] Combining Equations (8), (9), (10) and (11) then leads to an expression for the viscous temperature \(T_{\rm visc}\): \[T_{\rm visc}=\left(\frac{9}{8\sigma_{\rm SB}}\frac{\frac{1}{2}\tau_{\rm Ross}+1}{1-e^{-2\tau_{\rm Ross}}}\Sigma_{g}(r)\nu(r)\Omega_{K}(r)^{2}\right)^{1/4}. \tag{12}\] Because this expression depends on the viscosity \(\nu\) as well as the Rosseland opacity \(\kappa_{d,\rm Ross}\), which in turn depend on the temperature themselves, iteration is required. DISKLAB uses Brent's method to find the roots of this equation and solve for \(T_{\rm visc}\). 
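To make the combination of the two heating contributions concrete, the snippet below evaluates the irradiation temperature of Equation (7) and combines it with a viscous contribution as in Equation (3), written in plain NumPy rather than through DISKLAB's own routines. The constant flaring angle is an arbitrary illustrative assumption, and the viscous part is left at zero here because, as discussed next, it requires the opacity and an iterative solve.

```python
# Direct evaluation of Eqs. (1), (3) and (7); constants in cgs units.
# Illustration only, not DISKLAB's internal implementation.
import numpy as np

kB, mp, sigma_SB = 1.3807e-16, 1.6726e-24, 5.6704e-5
Lsun, au = 3.828e33, 1.49598e13

def sound_speed(T, mu=2.3):
    """Isothermal sound speed, Eq. (1)."""
    return np.sqrt(kB * T / (mu * mp))

def T_irradiation(r, Lstar=Lsun, phi=0.05):
    """Midplane temperature from stellar irradiation, Eq. (7).
    phi = 0.05 is an arbitrary constant flaring angle chosen for illustration."""
    return (phi / (2.0 * sigma_SB) * Lstar / (4.0 * np.pi * r**2)) ** 0.25

def T_midplane(T_irr, T_visc):
    """Combined midplane temperature, Eq. (3)."""
    return (T_irr**4 + T_visc**4) ** 0.25

r = np.logspace(np.log10(0.06), 3.0, 1000) * au       # the radial grid used in this work
T = T_midplane(T_irradiation(r), T_visc=0.0)          # viscous term needs kappa and iteration
print(f"T_irr at 1 AU: {T_midplane(T_irradiation(1 * au), 0.0):.0f} K")
```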
Before this can be done however, the Rosseland mean opacities for the dust species, \(\kappa_{d,\rm Ross}\) in Equation (10), have to be specified. Ideally this is done by calculating them from the frequency-dependent opacities of the dust species that can be specified, but it is also possible to read them from a user-provided table of opacities as a function of density and temperature, to use the Bell & Lin opacity model (Bell & Lin 1994), or to simply specify the value to be used at each radial grid point.

Evolving disk models over time is done by solving the time-dependent viscous disk equation: \[\frac{\partial\Sigma_{g}}{\partial t}-\frac{3}{r}\frac{\partial}{\partial r}\left(\sqrt{r}\frac{\partial(\sqrt{r}\Sigma_{g}\nu)}{\partial r}\right)=\dot{\Sigma}_{g}, \tag{13}\] where \(\Sigma_{g}\) is the gas surface density, \(\nu\) is the viscosity and \(\dot{\Sigma}_{g}\) is a source term for the gas surface density that could correspond to, for example, infall or photoevaporation. This is a diffusion equation, which DISKLAB can solve using an implicit integration scheme. A consequence of this is that the solutions remain well-behaved even for large time steps, which would lead to large errors with an explicit method. It does remain true that multiple smaller time steps lead to more accurate results than a single large one. By default, the boundary condition for Equation (13) is that the gradient of \(\Sigma_{g}\) vanishes at the inner boundary, though this can be changed to instead set it to some custom value.

The evolution of dust components is handled separately. The dynamics of a dust particle are determined by its frictional stopping time \(\tau_{\rm stop}\), caused by the aerodynamic drag on the particle, and the related Stokes number, a dimensionless quantity describing how well the dust is coupled to the gas in the disk. The expression for this stopping time first of all depends on the mean free path of gas molecules \(\lambda_{\rm mfp}\), which is calculated as \[\lambda_{\rm mfp}=\frac{1}{n_{\rm gas}\sigma_{H_{2}}}, \tag{14}\] with \(\sigma_{H_{2}}=2\cdot 10^{-15}\) cm\({}^{2}\) and the gas number density \(n_{\rm gas}\) \[n_{\rm gas}=\frac{\rho_{\rm gas}}{\mu m_{p}}, \tag{15}\] where \(\rho_{\rm gas}\) is the gas density, \(\mu=2.3\) is the mean molecular weight and \(m_{p}\) the proton mass. There are two different physical regimes for the drag force on a solid particle, depending on the grain size \(a\) compared to the mean free path \(\lambda_{\rm mfp}\). In the Epstein regime, which holds when \(a<\frac{9}{4}\lambda_{\rm mfp}\), the grain size is small compared to the mean free path, so the fluid is basically a collisionless collection of molecules following a Maxwell velocity distribution. In this regime, the stopping time takes the form \[\tau_{\rm stop}=\frac{\xi a}{\rho_{\rm gas}v_{\rm th}}, \tag{16}\] where \(\xi\) is the (interior) grain density and \(v_{\rm th}\), the thermal velocity of the gas particles, is given by \[v_{\rm th}=\sqrt{\frac{8k_{B}T}{\pi\mu m_{p}}}=\sqrt{\frac{8}{\pi}}c_{s}. \tag{17}\] In the Stokes regime, which holds when \(a\geq\frac{9}{4}\lambda_{\rm mfp}\), the dust particles are relatively large, and the gas flows around them as a fluid. 
In this regime, the precise form of the stopping time depends on the Reynolds number: \[{\rm Re}=\frac{2a\Delta v}{\nu_{\rm mol}}, \tag{18}\] where \(\Delta v\) is the velocity difference between the gas and the dust in the disk, and the molecular viscosity \(\nu_{\rm mol}\) is given by \[\nu_{\rm mol}=\frac{1}{2}v_{\rm th}\lambda_{\rm mfp}. \tag{19}\] When \({\rm Re}<1\) (Birnstiel et al. 2010): \[\tau_{\rm stop}=\frac{2\xi a^{2}}{9\nu_{\rm mol}\rho_{\rm gas}}. \tag{20}\] When \(1<{\rm Re}<800\) (Perets & Murray-Clay 2011): \[\tau_{\rm stop}=\frac{8\xi a}{3C_{D}\rho_{\rm gas}\Delta v}, \tag{21}\] where the drag coefficient \(C_{D}\) is given by \[C_{D}=\frac{24}{{\rm Re}}\left(1+0.27{\rm Re}\right)^{0.43}+0.47\left(1-e^{-0.04{\rm Re}^{0.38}}\right). \tag{22}\] When \({\rm Re}>800\) (Birnstiel et al. 2010): \[\tau_{\rm stop}=\frac{6\xi a}{\rho_{\rm gas}\Delta v}. \tag{23}\] Regardless of which form for the stopping time applies to some particle, the Stokes number is then calculated as \[{\rm St}=\Omega_{K}\tau_{\rm stop}. \tag{24}\] With the Stokes number determined, we can then see how the dust evolves over time. First of all, as a result of the drag force on the dust caused by the velocity difference \(\Delta v\) with the gas, the dust undergoes radial drift with a velocity \(v_{d}\) given by \[v_{d}=\frac{1}{1+{\rm St}^{2}}\left(v_{r}+{\rm St}\frac{c_{s}^{2}}{\Omega_{K}r}\frac{d\ln p}{d\ln r}\right), \tag{25}\] where the last term represents the pressure gradient in the gas, and the radial velocity of the gas itself \(v_{r}\) is given by \[v_{r}=-\frac{3}{\sqrt{r}\,\Sigma_{g}}\frac{\partial(\sqrt{r}\,\Sigma_{g}\nu)}{\partial r}. \tag{26}\] The full time-dependent evolution of the dust includes both this radial drift and mixing due to the turbulent motions in the gas, which drags the dust along: \[\frac{\partial\Sigma_{d}}{\partial t}+\frac{1}{r}\frac{\partial\left(r\Sigma_{d}v_{d}\right)}{\partial r}-\frac{1}{r}\frac{\partial}{\partial r}\left(rD_{d}\Sigma_{g}\frac{\partial}{\partial r}\left(\frac{\Sigma_{d}}{\Sigma_{g}}\right)\right)=\dot{\Sigma}_{d}. \tag{27}\] Here \(\Sigma_{d}\) is the dust surface density and \(\dot{\Sigma}_{d}\) is a source term, analogously to Equation (13) for the gas. \(D_{d}\) is a diffusion coefficient given by \[D_{d}=\frac{1}{\mathrm{Sc}}\frac{1}{1+\mathrm{St}^{2}}\nu, \tag{28}\] where the Schmidt number \(\mathrm{Sc}\) is defined as \[\mathrm{Sc}=\frac{\nu}{D_{g}}, \tag{29}\] with \(D_{g}\) the gas turbulent diffusivity. Just as Equation (13), Equation (27) is a diffusion equation that is solved in DISKLAB using implicit integration, which means it should be stable even when larger time steps are used.

Importantly for this project, DISKLAB can also include the effects of infall from a molecular cloud into the model. To this end, it follows Hueso & Guillot (2005) in combining the models of Shu (1977) and Ulrich (1976), which handle the radial infall and the effect of rotation, respectively. This model starts with a molecular cloud core of mass \(M_{c}\) that is assumed to be isothermal with temperature \(T_{c}\), spherically symmetric with radius \(R_{c}\), and rotating as a solid body with rotation rate \(\Omega_{c}\). This cloud then starts to collapse from the inside out as an expansion wave propagates outward with the sound speed \(c_{s}\), causing every shell of mass it passes through to collapse onto the star+disk system. 
The mass accretion rate is assumed to remain constant during this phase: \[\dot{M}=0.975\frac{c_{s}^{3}}{G}, \tag{30}\] with \(G\) the gravitational constant. Material falling onto the disk this way always ends up within the centrifugal radius \(r_{c}(t)\), which is given by \[r_{c}(t)=\frac{r_{\mathrm{shell}}(t)^{4}\omega(r_{\mathrm{shell}})^{2}}{GM(t)}, \tag{31}\] where \(r_{\mathrm{shell}}(t)\) is the distance from the center of the molecular cloud of the shell where the expansion wave passes at time \(t\), \(\omega(r_{\mathrm{shell}})\) the angular velocity of that shell (equal to \(\Omega_{c}\) for solid-body rotation), and \(M(t)\) the total mass that has been accreted onto the star+disk from the cloud up until that time. Since both \(r_{\mathrm{shell}}\) and \(M(t)\) are proportional to time (due to the constant sound speed and accretion rate), \(r_{c}\propto t^{3}\) and infalling matter ends up progressively further away from the star. The way this matter is spread out over the disk is then \[\dot{\Sigma}_{g}(r,t)=\frac{\dot{M}}{\pi r_{c}^{2}}\frac{1}{8}\left(\frac{r}{r_{c}}\right)^{-3/2}\left[1-\left(\frac{r}{r_{c}}\right)^{1/2}\right]^{-1/2}. \tag{32}\] This is the \(\dot{\Sigma}_{g}\) that enters Equation (13) for the time evolution of the gas as the source term. The source term for the dust, \(\dot{\Sigma}_{d}\) in Equation (27), can then simply be found by multiplying \(\dot{\Sigma}_{g}\) by the dust-to-gas ratio.

Especially in a model that includes infall, it is important to ensure that the disk does not become so massive that its own self-gravity starts to play a role, leading to the gravitational instability. This would produce non-axially symmetric effects (spiral waves) in the disk, which can't be properly treated with the axially symmetric models DISKLAB produces. The stability of the disk against this phenomenon can be checked in DISKLAB with a simple command that calculates the Toomre \(Q\) parameter (Toomre 1964) as \[Q=\frac{c_{s}\Omega_{K}}{\pi G\Sigma_{g}}. \tag{33}\] The disk remains stable as long as \(Q>2\). Under this condition, pressure forces and shear can act faster to destroy overdensities than self-gravity can produce them.

In practice, operations in DISKLAB are performed by applying functions to the DiskRadialModel object that is created at the start. This way, many individual calculations can be performed using only a single line of code. It is then up to the user to combine the different functionalities into a sensible model. Evolving a model over time can be done using a for-loop, where each iteration corresponds to the next time step, and within which the individual functions are called to solve the viscous disk equation and update other time-dependent parameters such as temperatures, velocities and masses; a schematic version of such a loop is sketched below. Because any parameter that can vary with radius in the disk is stored as a Python array, with each entry corresponding to one point on the chosen radial grid, such parameters can also easily be manipulated by hand, using standard Python commands. It is for example possible to simulate a chemical reaction in which one type of dust is transformed into another by removing surface density from the array for the first dust species and adding it to that of the second.

Only minor changes were made to the standard DISKLAB code itself. Two new functions were added to introduce a planet and open a gap in the disk, in addition to the existing gap models in the package. These will be described in the next section. 
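The schematic loop referred to above might look as follows. The method names are placeholders standing in for the corresponding DISKLAB calls (whose actual names differ); only the structure of the loop, with an implicit gas and dust update followed by an explicit thermal-processing step and a stability check, reflects the procedure used in this work.

```python
# Schematic time-evolution loop; all disk.* method names are placeholders,
# not actual DISKLAB functions. 'disk' is the DiskRadialModel-like object set up earlier.
import numpy as np

yr = 3.156e7                                   # one year in seconds
t, dt, t_end = 0.0, 1000.0 * yr, 5.0e6 * yr    # post-infall step of 1000 yr, run to 5 Myr

while t < t_end:
    disk.add_infall_source_term(t, dt)         # Eq. (32): deposit infalling gas and dust within r_c
    disk.evolve_gas_viscously(dt)              # Eq. (13): implicit update of Sigma_g
    for dust in disk.dust:                     # Eq. (27): implicit drift + turbulent mixing per species
        dust.evolve_advection_diffusion(dt)
    disk.update_temperature()                  # Eqs. (3), (7), (12): new midplane temperature
    convert_dust_where_hot(disk, dt)           # explicit CAI-Factory step where T >= 1400 K (user code)
    if np.any(disk.toomre_Q() < 2.0):          # Eq. (33): warn if self-gravity becomes important
        print(f"Warning: Q < 2 at t = {t / yr:.3e} yr")
    t += dt
```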
The radial velocity of the dust was also set to equal that of the gas at the innermost three grid points only, overriding Equation (25) there, due to a problem with a boundary condition that will also be described in the next section.

### Model setup

The model was calculated on a 1D grid of 1000 logarithmically spaced radial points between 0.06 and 1000 AU. The inner edge of the disk \(r_{\mathrm{in}}\) at 0.06 AU is the same as used in the DKA18 model, and it seems a not unreasonable estimate for the radius of the still contracting proto-Sun. The basic disk model used was the model by Lynden-Bell & Pringle (1974) as described in Lodato et al. (2017). But while this model is present in the code and evolving at the earliest time steps, we are really interested in letting the disk build up naturally due to the effects of infall. The initial mass was therefore chosen to have a negligibly low value, \(M_{0}=10^{-20}M_{\odot}\), which is about 19 orders of magnitude lower than the final disk mass. In practice then, it is not relevant what happens with this initial model at early times, because the material infalling from the molecular cloud core dominates the evolution of the disk as soon as the centrifugal radius of the infall \(r_{c}\) exceeds the inner edge of the disk.

The way infalling matter is spread over the disk, described by Equation (32), depends solely on the properties of the molecular cloud core. The three relevant parameters are the cloud mass \(M_{c}\), temperature \(T_{c}\) and rotation rate \(\Omega_{c}\).4 For the cloud mass, a value of \(M_{c}=1.05\ \mathrm{M}_{\odot}\) was chosen in order for the Sun to end up with roughly 1 \(\mathrm{M}_{\odot}\). The cloud temperature was set to \(T_{c}=14\) K, which is a typical temperature for a molecular cloud (Wilson et al. 1997). The choice for the rotation rate, \(\Omega_{c}=2.3\cdot 10^{-14}\) rad s\({}^{-1}\), is more or less arbitrary, but the effect of varying this parameter will be explored later. With these choices for mass and temperature, the duration of the infall phase can be calculated by dividing the cloud mass by the accretion rate in Equation (30): \[t_{\rm infall}=\frac{M_{c}}{\dot{M}}=\frac{GM_{c}}{0.975c_{s}^{3}}\approx 0.4\ {\rm Myr}. \tag{34}\] A short numerical check of this estimate is sketched below.

As Figure 1 shows, using the post-infall \(\alpha\)-parameterization of Equations (36)-(38) already during the infall phase lets the disk grow so massive that it becomes gravitationally unstable. We therefore mimic the effects of the gravitational instability by adopting an artificially enlarged \(\alpha\)-parameter (Armitage et al. 2001) during the infall phase. This redistributes the infalling material much more rapidly, ensuring that most of the gas flowing through the disk actually ends up in the star. It turns out a value of \(\alpha=0.6\) is sufficient to ensure that \(Q>2\) and the total disk mass \(M_{\rm disk}<0.1M_{\odot}\) at all times. This value was therefore used during the entire infall phase, after which \(\alpha\) was changed to follow Equations (36-38). In addition to this, we also ran a simulation with a reduced cloud rotation rate of \(\Omega_{\rm c}=1\times 10^{-15}\) rad s\({}^{-1}\), which is low enough that the gravitational instability never triggers, allowing us to use Equations (36-38) from the start.

To solve Equation (13) for the viscous evolution of the gas, a boundary condition needs to be specified. The default zero-gradient boundary condition caused issues with the temperature calculation, so instead we set \(\Sigma_{g}\) to a fixed value at the inner disk edge. 
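As a brief aside, the following minimal check, in plain Python with cgs constants and \(\mu=2.3\) as in Equation (1), reproduces the infall duration of Equation (34) from the cloud parameters chosen above; it is an illustration only and not part of the DISKLAB model itself.

```python
# Verify t_infall ~ 0.4 Myr from Eqs. (1), (30) and (34) for M_c = 1.05 Msun, T_c = 14 K.
import numpy as np

G, kB, mp = 6.674e-8, 1.3807e-16, 1.6726e-24   # cgs
Msun, yr = 1.989e33, 3.156e7

Mc, Tc, mu = 1.05 * Msun, 14.0, 2.3
cs = np.sqrt(kB * Tc / (mu * mp))              # isothermal sound speed of the cloud, Eq. (1)
Mdot = 0.975 * cs**3 / G                       # constant infall rate, Eq. (30)
t_infall = Mc / Mdot                           # Eq. (34)

print(f"c_s = {cs / 1e5:.2f} km/s")                      # ~0.22 km/s
print(f"Mdot = {Mdot * yr / Msun:.1e} Msun/yr")           # ~2.6e-6 Msun/yr
print(f"t_infall = {t_infall / (1e6 * yr):.2f} Myr")      # ~0.40 Myr
```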
A low value of \(\Sigma_{g}=10^{-14}\) g cm\({}^{-2}\) was used for the initial low-mass disk model, which was then increased to \(\Sigma_{g}=10\) g cm\({}^{-2}\) when the centrifugal radius of the infall crossed the disk inner edge, a value similar to the gas surface density predicted for the innermost disk at the end of the simulation. However, using a fixed value can cause problems with radial outflow of dust species at the inner disk edge when \(\Sigma_{g}\) increases in the outward direction, because this creates a pressure gradient that can act as a particle trap in the same way a planet gap does. For this reason, the radial velocity of the dust species was set to equal that of the gas at the innermost three grid points only, which prevents them from getting trapped.

The model is now set up and ready to be evolved over time. The simulation ran for 5 million years, by the end of which CAI parent bodies are thought to have finished their formation. Since DISKLAB uses implicit integration for calculating the time evolution of the model, relatively large time steps could be employed. Unfortunately however, this does not apply to the thermal conversion of the dust species, which had to be done explicitly at each time step. We therefore used a constant time step of 0.2 years during the infall phase, when dust mixing is particularly strong, after which we switched to a time step of 1000 years for the remainder of the simulation. More information on how these time steps were chosen can be found in the Appendix. This way, each individual simulation could be finished in under a day.

Some time after infall had ended and the disk had fully formed, Jupiter's core was assumed to form, start growing and open a gap in the disk. We once again followed the DKA18 model in the way this was incorporated into the simulation. Jupiter's core first appeared in the model when it had reached a mass of \(M_{J}=30~{}M_{\oplus}\). It then started growing to its full size by accreting gas from its surroundings. At every time step \(dt\), an amount of mass was added that is calculated as6 \[dM=\frac{dt}{\tau}\int_{r}\Sigma_{g}(r)e^{-x^{2}}2\pi rdr, \tag{39}\]

Footnote 6: This quantity of gas was not actually removed from the disk, however.

Figure 1: Result of a simulation in which the parameterization for the \(\alpha\)-parameter as given by Equations 36 through 38 is used throughout the infall phase. _Left_: Evolution of the total mass present in each of the molecular cloud, star and disk as a function of time. In this scenario, mass can’t accrete onto the star from the disk rapidly enough, so the disk keeps growing in mass until it even exceeds the mass present in the star. Even at the end of the simulation after 5 Myr, the star has only gained roughly 0.5 M\({}_{\odot}\) of mass. _Right_: Resulting value of the Toomre \(Q\) as a function of radius in the disk, shown for several intermediate time steps. As the disk grows in mass, \(Q\) drops below the safe value of 2 (indicated by the horizontal dashed line) everywhere except in the innermost part of the disk. This means that the self-gravity of gas in the disk can no longer be ignored, and the disk becomes susceptible to the gravitational instability.

Figure 2: Shakura-Sunyaev \(\alpha\)-parameter as a function of radius in the disk. Two regions of constant \(\alpha\) in the inner and outer disk are connected by a decreasing powerlaw between 1 and 10 AU. This profile applies to the post-infall phase only. The sharp Gaussian peak at 3 AU forms at 0.6 Myr, and is caused by the presence of Jupiter’s growing planetary core. This peak is not used for turbulent mixing of dust species, as it represents a torque exerted on the gas by the planet, and not a real increase in viscosity. 
where \[x=\frac{r-r_{J}}{R_{H}}, \tag{40}\] with the Hill radius \(R_{H}\) given by \[R_{H}=r_{J}\left(\frac{M_{J}}{3M_{*}}\right)^{1/3}. \tag{41}\] Here \(r_{J}\) is the location where the planet forms, assumed to be at 3.0 AU (or roughly 40% closer in than its current position at 5.2 AU from the Sun), and \(\tau\) is the growth time scale, which sets what fraction of the available gas in the vicinity is accreted. Because the gas surface density \(\Sigma_{g}\) is a function of radius and time and is also modified by the presence of the planet itself, the precise value of \(\tau\) used depends on the choice of time step as well as \(r_{J}\). The value we used, \(\tau=1537.5\) yr, thus only works well for our chosen time step of 1000 years for the post-infall phase. No mass was added to the planet beyond \(M_{J}=317.8~{}M_{\oplus}\), which equals 1 Jupiter mass. For the chosen values for \(r_{J}\) and \(\tau\), this value was reached after roughly 4.5 Myr.

The way a gap was opened in the disk by this growing protoplanet is by modifying the value for \(\alpha\) in its vicinity: \[\alpha_{\rm new}=\alpha+(\alpha_{\rm peak}-\alpha)e^{-x^{2}}. \tag{42}\] This adds a Gaussian spike to the \(\alpha\)-profile as described before, which can be seen in Figure 2. The value of \(\alpha_{\rm peak}\) was set to 0.01, in accordance with Desch et al. (2018). This peak in \(\alpha\) acts as a torque by the planet, pushing material away and out of the gap. Because physically there is no real increase in the turbulent viscosity, the mixing of dust species should not be affected. Therefore the \(\alpha\)-profile used in the mixing calculation does not include this peak.

The final parameter relevant to the planet gap is the formation time \(t_{\rm planet}\), which is also the time when the gap starts to open. As in the DKA18 model, this time was set to \(t_{\rm planet}=0.6\) Myr, though it must be noted that this time can't be directly compared between the two models. In the DKA18 model, \(t=0\) refers to the point where the disk is fully formed (the end of their non-existent infall phase) and the time at which CAI formation starts. In our model however, \(t=0\) corresponds to the very beginning of the infall phase. As we will see in the Results section, CAI formation already begins early in the infall phase, so well before the disk is finished building up. So while \(t_{\rm planet}\) has been chosen to (at least roughly) match the 0.6 Myr after the first CAIs start to appear, it cannot simultaneously match the 0.6 Myr after the end of the infall phase. Both the planet formation time \(t_{\rm planet}\) and the formation location \(r_{J}\) will be varied later to see how different choices for these parameters impact the results.

At the end of the 5 Myr simulation, the disk was simply left as is. No physics was included to simulate the eventual dissipation of the disk, as this would occur after the phenomenon of interest, CAI parent body formation, has already occurred. A summary of the physical parameter choices made for the main model is shown as Table 2.

## 3 Results

### Main model

We can now move on to the results of our main model. 
The disk starts building up when the centrifugal radius \(r_{c}\) exceeds the inner edge of the disk \(r_{\rm in}\), which happens around 35 kyr after the start of the simulation. The infalling gas is then spread out over the disk according to Equation (32), the result of which is shown as Figure 3. The centrifugal radius itself can be seen in this plot as the location of the steep vertical line beyond which no material rains down on the disk at that time. As this radius increases over time, so does the total surface area of the disk within that radius. The value of \(\dot{\Sigma}_{g}\) at any one radius \(r<r_{c}\) must therefore decrease over time, in order for the total accretion rate \(\dot{M}\) to remain constant as per Equation (30). The infall phase ends when the molecular cloud core has been depleted of mass after roughly 0.4 Myr, at which point the centrifugal radius has reached \(r_{c}=93.9\) AU.

Figure 4 shows the resulting surface density evolution of the gas during the infall phase. What stands out in this plot is that at every time step, a significant amount of gas is present in the region beyond the centrifugal radius. This means that this material must have ended up there by viscous spreading. Figure 5 shows the radial velocity of the gas throughout the disk. The vertical dashed lines indicate the location of the centrifugal radius at each time. The radial velocity interior to \(r_{c}\) is strongly negative throughout the infall phase, since most of the gas infalling from the molecular cloud core is rapidly accreting onto the growing Sun. At the centrifugal radius however, the radial velocity switches sign, as the disk is expanding outward beyond this point. At the end of the infall phase, the disk has reached a total mass of \(M_{\rm disk}=0.064\ M_{\odot}\).

\begin{table} \begin{tabular}{l l l} \hline Parameter & Value & Description \\ \hline \(M_{c}\) & 1.05 \(M_{\odot}\) & Initial mass of the molecular cloud core \\ \(T_{c}\) & 14 K & Isothermal temperature of the molecular cloud core \\ \(\Omega_{c}\) & \(2.3\cdot 10^{-14}\) rad s\({}^{-1}\) & Solid-body rotation rate of the molecular cloud core \\ \(M_{\rm star,0}\) & \(1.05\cdot 10^{-4}\,M_{\odot}\) & Initial mass of the Sun \\ \(M_{0}\) & \(10^{-20}\,M_{\odot}\) & Mass of the initial Lynden-Bell \& Pringle disk model \\ \(R_{0}\) & 1 AU & Truncation radius of the initial Lynden-Bell \& Pringle disk model \\ \(r_{\rm in}\) & 0.06 AU & Inner edge of the disk \\ \(\sigma_{\rm in}\) & 10 g cm\({}^{-2}\) & Gas surface density at the disk inner boundary \\ \(\alpha_{\rm in}\) & 0.6 & Global value of the \(\alpha\)-parameter during the infall phase \\ \(t_{\rm planet}\) & 0.6 Myr & Time of formation of Jupiter’s core \\ \(a_{\rm planet}\) & 3 AU & Semi-major axis of Jupiter’s orbit at the formation time \\ \(m_{\rm planet,0}\) & 30 \(M_{\oplus}\) & Mass of Jupiter’s core at the formation time \\ \(\tau_{\rm planet}\) & 1537.5 yr & Growth time scale of Jupiter (for post-infall time steps of 1000 years) \\ \(\alpha_{\rm peak}\) & 0.01 & Value of \(\alpha\) at Jupiter’s location after its formation \\ \(\kappa_{d,\rm Ross}\) & 5 cm\({}^{2}\) g\({}^{-1}\) & Global value of the dust opacity \\ \hline \end{tabular} \end{table} Table 2: Overview of physical parameters chosen for the model. See Table 1 for the properties of the dust species. The values of \(\alpha\) in the post-infall phase are given in Equations (36) to (38). 
While this is comparable to the disk in the DKA18 model, which has \(M_{\rm disk}=0.089\ M_{\odot}\), this mass is spread out in a very different way. Our disk extends all the way out to 1000 AU, while the disk in the DKA18 model is much more compact, its gas surface density sharply decreasing past 10 AU. In turn, the surface density in the inner part of our disk is three orders of magnitude lower than in the DKA18 model. This has consequences for the accretion rate of the disk onto the Sun: while the disk in the DKA18 model loses about half its mass in just 0.1 Myr of subsequent evolution, Figure 6 shows that in our case, both the stellar and the disk mass barely change anymore after the end of the infall phase. This could simply mean that the chosen \(\alpha\)-parametrization is unrealistic, as the resulting viscosity is too weak to move significant quantities of mass back in towards the Sun.

Now that the disk has been fully formed, we can see how it evolves in the post-infall phase. Figure 7 shows the full time evolution of the gas surface density from the start of disk formation to the end of the simulation. The post-infall phase is represented here by the red lines. While the surface density is clearly decreasing over time in the innermost 10 AU of the disk, little change is visible beyond that point, where \(\alpha\) is lowest. An important feature of the post-infall phase is the planet gap that has opened up at \(r=3\) AU 0.6 Myr after the start of the simulation, a closeup of which can be seen as Figure 8. The gap can be seen to get wider over time, as Jupiter continues to accumulate mass, increasing its Hill radius. The surface density of gas within the gap is reduced by two orders of magnitude. The mass evolution of Jupiter itself is shown as Figure 9. Its growth turns out to be fairly linear, with the growth time scale \(\tau\) chosen so that it reaches its full mass of \(M_{J}=317.8\ M_{\oplus}\) after about 4.5 Myr.

So far we have only looked at the gas component of the disk during the simulation, but what we're really interested in is the behaviour of the dust species during this time.

Figure 4: Viscous evolution of the gas surface density \(\Sigma_{g}\) during the infall phase. The disk starts building up when \(r_{c}>r_{\rm in}\). Rapid viscous expansion causes the gas to move all the way out to 1000 AU. The surface density keeps increasing everywhere during the entire infall phase as more and more gas is added to the disk.

Figure 5: Radial velocity of the gas \(v_{R}\) during the infall phase. At the centrifugal radius, indicated by the dashed vertical lines, \(v_{R}\) becomes strongly positive due to a large gradient in surface density pushing gas outward. Interior to this radius, the gas moves inward as it accretes onto the Sun. The sudden vertical jumps are likely numerical artefacts that are not expected to have a significant impact on the results.

Figure 3: Infall rate \(\dot{\Sigma}_{g}\) of gas from the molecular cloud core onto the disk. As time passes, the centrifugal radius \(r_{c}\) increases and infalling matter is added to greater and greater radii in the disk, while the total accretion rate \(\dot{M}\) remains constant. At the end of infall, the centrifugal radius has reached 93.9 AU.

Figure 6: Mass evolution of the molecular cloud core, the Sun and the disk. The infall rate of matter on the disk is constant over time. 
Most infalling material quickly accretes onto the Sun, which reaches 0.93 \(M_{\odot}\) at the end of the infall phase after 0.4 Myr. The solar and disk mass change little post-infall, as most of the disk mass is located at large radii where the viscosity is small.

Because this depends heavily on where and when the CAI Factory is active, we'll first look at the midplane temperature, which is shown in Figure 10. The temperature in the inner disk shoots up to 1400 K soon after the disk starts building up, when viscous heating dominates the temperature calculation there. For the entire duration of the infall phase, there is then some region in the disk where the temperature reaches 1400 K, the CAI Factory. This region never extends past 1 AU, which means that CAIs can only end up past Jupiter's location of formation at \(r=3\) AU by transport within the disk, since they will not be created that far out. After the end of infall at 0.4 Myr, the temperature rapidly drops, and the CAI Factory switches off. During the post-infall phase, viscous heating is negligible compared to irradiative heating. An important result here is that the period during which CAIs are being created in the disk basically coincides with the infall phase. CAI formation should therefore have ceased by the time the disk is fully formed, unlike in the DKA18 model, where this is instead the time when the CAI Factory is first turned on.

This earlier formation of CAIs significantly affects the evolution of their surface density. Figures 11, 12, 13, 14, and 15 show the surface density evolution for the five different dust species in the model. As Figure 11 shows, population 1 dust is completely absent in the innermost part of the disk during the infall phase (except for the very first line, when the temperature has not quite reached 1400 K), as that is where it is being converted into populations 3, 4, and 5. An important thing to note is that, just as with the gas, each of the different populations is present much further out in the disk than the centrifugal radius. Evidently the dust particles are being dragged along with the gas during the early rapid viscous expansion phase as well as experiencing strong mixing. This is true even for populations 3, 4, and 5 (CAIs), which originate exclusively from the CAI Factory in the innermost part of the disk. At the end of the infall phase then, each dust population can be found all the way out to 1000 AU. From this point on, the dust particles start to drift back in towards the Sun. Because populations 1, 2, and 3 are micron-sized particles that are well coupled to the gas, their radial velocity is essentially equal to that of the gas throughout the disk. Similarly to the gas in Figure 7 then, these dust species slowly drain from the inner disk, but little change in their surface density can be seen beyond 10 AU.

Figure 8: Close-up of Figure 7 around the location of the planet gap. Surface density builds up here until the end of the infall phase, after which it starts dropping again. As Jupiter grows to its full size by accreting mass from its surroundings, its Hill radius increases, widening the gap.

Figure 10: Midplane temperature due to irradiation and viscous heating. During the infall phase, the temperature reaches 1400 K in the inner disk, activating the CAI Factory. At the end of infall, the temperature quickly decreases again.

Figure 7: Viscous evolution of the gas surface density \(\Sigma_{g}\) during the full simulation. 
After the infall phase, \(\Sigma_{g}\) keeps dropping in the inner disk where matter is accreting onto the Sun. In the outer disk, the viscosity is lower and there is less observable change. The planet gap can clearly be seen at 3 AU.

Figure 9: Growth of Jupiter from a 30 \(M_{\oplus}\) core at 0.6 Myr to its full size of 317.8 \(M_{\oplus}\) (1 Jupiter mass) after 4.5 Myr by accretion of gas in its vicinity. The growth turns out to be roughly linear.

Figure 11: Surface density evolution of population 1 dust. During most of the infall phase, there is no population 1 dust in the inner disk due to its complete thermal conversion into other dust species.

Figure 16: Radial velocity of the dust species in the post-infall phase. The micrometer-sized particles, populations 1-3, are well coupled to the gas and experience very little radial drift in the outer disk. They are not caught in the pressure bump, but simply drift through the gap into the inner disk. The larger populations 4 and 5 have higher radial drift velocities, but cannot easily pass the planet gap.

Figure 14: Surface density evolution of population 4 dust. These grains are large enough to display noticeable inward radial drift in the outer disk as well as a trapping effect in Jupiter’s pressure bump. The 35000 year line is not visible, since the temperature is still below 1400 K.

Figure 13: Surface density evolution of population 3 dust. The 35000 year line is not visible, since the temperature is still below 1400 K.

Figure 15: Surface density evolution of population 5 dust (CAIs). As all other dust species, the CAIs are efficiently transported into the outer disk during the infall phase. They then start drifting back in and piling up in the region behind the planet gap. CAIs in the inner disk slowly drain as they accrete onto the Sun. The 35000 year line is not visible, since the temperature is still below 1400 K.

No trapping effect can be seen for these populations behind the planet gap. The sign of the pressure-gradient force reverses in the outer part of the gap, where the gas pressure increases in the outward direction, which increases the net inward force on the gas and causes it to orbit with super-Keplerian velocities. The resulting drag force on solid particles causes them to drift towards the pressure maximum, away from the star. However, smaller particles such as populations 1 through 3 are coupled to the gas strongly enough to be dragged along with it as it flows through the gap, while only larger particles, with radial drift velocities exceeding the gas velocity, are hindered by this barrier. This process is called dust filtration (Rice et al. 2006). Figure 16, which shows the effective radial velocity of the different dust species in the post-infall phase, illustrates that dust particles belonging to populations 1, 2, and 3, when approaching the planet gap from larger radii, are simply accelerated through. The picture is different for the much larger AOAs (population 4) and CAIs (population 5). These populations have higher Stokes numbers than small dust grains, and as such they drift more strongly towards higher gas pressures, leading to larger radial drift velocities. In the far outer disk, the radius at which the surface density of these dust species starts to drop off steeply can be seen to move inward over time. Because the planet gap does serve as an effective barrier against particles of these sizes, they are piling up in the region just behind the planet gap, as evidenced by the surface density increasing over time there. 
Similar to the other dust populations, the CAIs and AOAs are slowly depleting in the inner disk, where no barrier exists to prevent them from drifting into the Sun. However, the average radial velocity of CAIs in the inner disk implies that all CAIs would vanish from this region on a time scale of order \(10^{5}\) years after their formation ceased. Since this is clearly not what is happening, with a lower but still significant amount of surface density left even after 5 Myr, at least some part of the CAIs must still be leaking through the planet gap into the inner disk.

Figure 17 shows the CAI abundance as a mass fraction of all the dust species in the disk. This represents the main result of this project. Comparing first the final abundance profile after 5 Myr to the result from the DKA18 simulation (their Figure 8), we see that they are roughly similarly shaped, with a large abundance peak just beyond Jupiter's location and little to no CAI abundance elsewhere. There are however also some important differences. First of all, the peak in our model is much broader, extending over almost 100 AU, while the peak in the DKA18 model is not even one full AU wide. This can at least be partially explained by the CAIs having been transported outward so far that they are simply still in the process of drifting back in. Given more time (or equivalently, given a higher accretion rate in the outer disk), the CAIs would presumably continue to drift back in, leading to a narrower and taller abundance peak. Second, the overall abundance in our peak is significantly higher than in the DKA18 model. However, there are other uncertainties affecting the precise abundance values. One of these is parameter choice, in particular the molecular cloud parameters. We will explore the consequences of different parameter choices in section 3.2. For now, suffice it to say that the general shape of the abundance profile is more reliable than the precise quantitative values.

While the final abundance profile after 5 Myr looks broadly similar to that in the DKA18 model, the intermediate time steps do not. At \(t=50\) kyr, the first line in Figure 17 with a non-zero abundance, we see that the CAI abundance has a virtually constant value of 2.4% out to 50-60 AU. Since this is the value obtained inside the CAI Factory when all population 1 dust is thermally processed, it means that outward mixing is very rapid and efficient. At subsequent time steps, we see that the CAIs are mixed further outward, until infall ends after \(t=0.4\) Myr. At this point the abundance is rather low throughout the entire disk, but inward radial drift of CAIs now commences. Something interesting then happens that does not occur in the model without infall: apart from the abundance peak that starts to build up when the planet gap opens, a second peak forms in the far outer disk. As CAIs drift in, they start slowing down (Figure 16) when the gas density increases and their Stokes number (Figure 18) drops, causing a pile-up. This second peak slowly drifts inward over time and eventually merges with the first peak, leading to the final abundance profile with a single broad peak.

### Parameter search

To explore how sensitive these results are to the choices made above, a parameter search was conducted in which the simulation was run a number of times, varying one physical parameter at a time over a number of possible values. Not every possible parameter was varied in this way. 
For example, while the molecular cloud temperature \(T_{c}\) and rotation rate \(\Omega_{c}\) are not known a priori, the cloud mass \(M_{c}\) should roughly be the same as the total mass of the Solar System, so it makes little sense to see what happens for completely different cloud masses.

We first show what happens when varying some of the parameters associated with the disk itself. Figure 19 shows the CAI abundance profile resulting from variation of the disk inner edge \(r_{\mathrm{in}}\) between 0.05 and 0.1 AU. Moving the inner edge further away from the Sun leads to slightly higher abundances in the CAI peak, but this effect is so small that we can say the overall abundance is essentially independent of the inner edge. Figure 20 shows the result of variations of \(\sigma_{\mathrm{in}}\), the gas surface density at the inner edge, which is used as a boundary condition for Equation (13). The explanation here can be brief: while this parameter influences the surface densities in the inner disk, the dust species are all affected equally, so the abundance profile does not change. Figure 21 shows the result for variations of the opacity \(\kappa\) in the disk. When the opacity increases, it becomes harder for heat to escape from the disk, leading to an increase in the temperature, at least in the part of the disk where the temperature wasn't already at the maximum capped value. In practice then, this means that the CAI Factory grows in radial extent. More population 1 dust is then converted into CAIs, leading to a higher peak abundance for higher values of \(\kappa\).

The search over different values of the initial \(\alpha\)-parameter during the infall phase is presented as Figure 22. A trend is visible where the peak abundance increases for higher values of this parameter. Since this \(\alpha\) is used to mimic the effects of gravitational instability, higher values imply stronger redistribution of mass in the disk. This might transport additional CAIs outward, increasing abundances in the outer disk. What is less clear is why the abundance peak is broader for lower values of \(\alpha\). Perhaps the lower efficiency of mass redistribution slows down the inward radial drift of CAIs in the outer disk because the gas surface density ends up lower, reducing drag forces on the dust. It must be noted that for the lowest values of \(\alpha\), the disk technically becomes too massive to ignore the effects of self-gravity, even though these values do follow the general trend of lower but broader abundance peaks in this parameter search.

Figure 21: Abundances after 5 Myr for different values of the disk opacity \(\kappa\). Higher opacities trap more heat in the disk, increasing the temperature and thereby enlarging the radial extent of the CAI Factory. This creates more CAIs and increases their abundance in the pressure bump.

Figure 22: Abundances after 5 Myr for different values of the \(\alpha\)-parameter during the infall phase. Higher values increase the peak abundance, likely due to more efficient outward mass redistribution.

Figure 19: Abundances after 5 Myr for different values of the disk inner edge \(r_{\mathrm{in}}\). Higher values of \(r_{\mathrm{in}}\) slightly increase the peak CAI abundance, but this effect is so small that the results are basically independent of the inner edge location.

Figure 20: Abundances after 5 Myr for different values of the surface density at the disk inner edge \(\sigma_{\mathrm{in}}\). This parameter seems to have no impact on the CAI abundance, as it affects dust species equally. 
For \(\alpha=0.1\) or 0.2, the disk mass exceeds 0.15 M\({}_{\odot}\). This might also have an impact on the results.

Moving on to the molecular cloud parameters, Figure 23 shows the dependence of the CAI abundance on variation of the molecular cloud rotation rate \(\Omega_{c}\), which is also a measure of the total angular momentum in the cloud. Here we see that the CAI abundance peak past the planet gap grows both wider and taller for increasing angular momentum. The first of these effects is easy to understand. If the angular momentum in the cloud is low, the centrifugal radius in Equation (31) is also low, and infalling material is deposited on the disk relatively close to the star. In contrast, high cloud angular momentum causes infalling material to be deposited further out. A secondary effect of this is that, because it takes longer for the centrifugal radius to reach the inner edge of the disk when \(\Omega_{c}\) is low, more of the cloud mass will accrete directly onto the Sun instead of on the disk, leading to a less massive disk. The second effect, higher peak abundances for higher \(\Omega_{c}\), is more surprising, because in general, rapidly rotating molecular clouds produce more massive disks with lower crystallinity than slower rotating clouds (Dullemond et al. 2006). We would expect that first, if more infalling population 1 dust ends up at larger radii from the Sun, less of it ends up in the CAI Factory where it can be used to produce CAIs. The total amount of CAIs in the disk will then also be lower. Second, population 1 dust infalling at larger radii will dilute the CAI abundance at those locations. These two effects would lead to lower abundances behind the planet gap for larger values of \(\Omega_{c}\) instead of higher. However, it seems these effects are overshadowed by another: for higher \(\Omega_{c}\), surface densities in the disk will be higher further out, leading to more viscous heating and higher temperatures there as well. This increases the radial extent of the CAI Factory, which produces CAIs efficiently enough that the net effect is an increase in abundance.

Figure 23: Abundances after 5 Myr for different values of the cloud rotation rate \(\Omega_{c}\). The abundance peak grows taller and wider for increasing values of \(\Omega_{c}\), as more material ends up in the disk instead of directly accreting onto the Sun, also increasing the efficiency of the CAI Factory.

Figure 24 shows the parameter search over the molecular cloud temperature \(T_{c}\), which seems to have a larger effect on the peak CAI abundance than most other parameters searched over. Here we must first note that varying the cloud temperature alone would have an undesirable by-effect. When the cloud temperature increases, so does the local sound speed within the cloud.
This means that the outward travelling expansion wave that causes the cloud to collapse moves faster, so the entire infall phase takes place in a shorter time span. However, the radius of the cloud also depends on the sound speed through

\[R_{c}=\frac{GM_{c}}{2c_{s}^{2}}. \tag{43}\]

Therefore, increasing the temperature of the molecular cloud will cause it to contract. However, because the rotation rate \(\Omega_{c}\) is kept constant, this removes angular momentum from the cloud. We therefore varied both the cloud temperature and rotation rate at the same time, to keep the total angular momentum constant. We see that higher cloud temperatures then lead to a lower but broader CAI abundance peak. At least early on, the more rapid infall should lead to higher surface densities in the inner disk, making CAI production more efficient. However, the higher surface densities also lead to stronger outward transport, and this effect seems to dominate here. In the simulations with higher molecular cloud temperatures, the CAIs have been spread out more, leading to a lower but broader abundance peak which would presumably become taller and narrower given more time for continued inward radial drift.

Figure 24: Abundances after 5 Myr for different values of the molecular cloud temperature \(T_{c}\). Higher temperatures decrease the duration of the infall phase. While this causes higher surface densities in the inner disk early on, increasing the efficiency of the CAI Factory, it also causes stronger outward transport of CAIs, spreading them further out over the disk.

Moving on to the planet parameters, there are two important quantities we have varied: the planet formation time \(t_{\rm planet}\), shown in Figure 25, and the planet formation location \(a_{\rm planet}\), shown in Figure 26. Interestingly, while changing the location of formation obviously also moves the location of the planet gap and the pressure maximum beyond it, the final abundance profile otherwise does not change very strongly with these parameters. If the effects of infall were not considered, this would be rather surprising. In the DKA18 model, CAIs are transported outward only by means of turbulent diffusion and meridional flow. Their surface density rapidly drops beyond \(r=3\) AU, which means that far fewer CAIs would be present beyond the location of the planet gap if Jupiter formed further out in the disk, leading to a lower peak abundance. Likewise, because CAI formation has ceased by the time the planet forms, later formation times would mean that more CAIs would have drifted back in towards the Sun, possibly already having accreted onto it. This again leaves fewer CAIs far enough out to be trapped in the pressure maximum. To achieve the CAI abundance profile from the DKA18 model then, it is essential that Jupiter does not form too late or too far from the Sun. In our model, however, the outward transport of CAIs during the infall phase is so efficient that CAIs will remain present in the outer disk, drifting in, even after several million years. The precise time of formation (even when infall is still ongoing at 0.2 Myr) or location then makes a relatively small difference to the final abundances.

Figure 25: Abundances after 5 Myr for different values of the planet formation time \(t_{plan}\). Perhaps surprisingly, this parameter has little effect on the final abundance profile. CAIs are mixed so far out that they are still in the process of drifting back in even after 5 Myr.

Figure 26: Abundances after 5 Myr for different values of the planet formation location \(a_{plan}\). As the planet moves outward, its Hill radius increases, widening the gap and making the trapping slightly more efficient.

While by no means an exhaustive search over all the different possibilities, we can also attempt to run the model with different parameterizations of the post-infall \(\alpha\). The result of this is shown as Figure 27. For reference, the default \(\alpha\)-profile as given by Equations (36) through (38) is shown as \(\alpha\)-model 0.
In \(\alpha\)-model 1, the viscosity was raised by a factor 10 in the inner disk (\(r<1\) AU), with the powerlaw part between 1 and 10 AU adjusted to correctly connect the two regions of constant \(\alpha\). The primary effect of this model seems to be to increase CAI abundances in the inner disk, where stronger mixing likely counteracts the effects of radial drift inward. In \(\alpha\)-model 2, the viscosity was increased by a factor 10 in the outer region (\(r>10\) AU) instead. This seems to decrease the effectiveness of the particle trap in the pressure bump, as the stronger turbulent mixing behind the planet gap has an easier time propelling CAIs through the gap into the inner disk, where the abundance is now comparable to that in the pressure bump. This is similar to the situation in \(\alpha\)-model 3, where the entire \(\alpha\)-profile was increased by a factor 10. In \(\alpha\)-model 4, a constant value of \(\alpha=10^{-4}\) was used throughout the disk, while \(\alpha\)-model 5 represents the case with \(\alpha=10^{-3}\). Both of these cases lead to a situation where the CAI abundance interior to the planet gap is not much different from that exterior to it. None of these models therefore really represents an improvement over the \(\alpha\)-profile used for our main model, which most accurately reproduces the observations of high CAI abundances in carbonaceous chondrites and low abundances in meteorites originating from the inner disk.

Figure 27: Abundances after 5 Myr for different values of the post-infall \(\alpha\)-profile. See the text for the meaning of the different \(\alpha\)-models.

Our main model only takes CAIs with a grain size of 2500 \(\mu\)m into account. While most of the mass in CAIs is contained in such large grains, they are found almost entirely in one group of carbonaceous chondrites (CV), while the most widely occurring type of CAI consists of much smaller grains (Cuzzi et al. 2003). A final thing we can try then, is to see how our model impacts CAIs of different sizes. This is shown in Figure 28. CAIs up to 1000 \(\mu\)m remain spread out over the outer disk quite evenly, while larger CAI grains, which have higher drift velocities, pile up at the location of the pressure bump. This effect gets stronger for larger grain sizes. Smaller CAI grains also maintain a (low but noticeable) presence in the inner disk, reproducing the observation that smaller grains occur in more meteorite types, also those originating in the inner disk.

Figure 28: Abundances after 5 Myr for different values of the CAI grain size \(a_{\rm grain}\). The larger the grain size, the faster the CAIs drift back in to pile up in the pressure bump. Smaller grains are more evenly spread throughout the disk.

An important conclusion we can draw from the parameter searches we have shown is that our model is not very sensitive to parameter choices. For a particular grain size, many of the parameters we have varied only have a limited effect on the CAI abundances. The molecular cloud parameters, \(\Omega_{\rm c}\) and \(T_{\rm c}\), appear to have the largest quantitative impact on the results. But importantly, the effect of particle trapping in the pressure bump is not disrupted by changes in any of the parameters we varied, possibly with the exception of different \(\alpha\)-parameterizations.
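The grain-size trend in Figure 28 follows from how the Stokes number sets the radial drift speed. The sketch below is purely illustrative: it uses the standard Epstein-drag expressions with an assumed internal grain density and pressure-gradient factor, which are example values of our own rather than quantities taken from the model.

```python
import numpy as np

# Illustrative only: standard Epstein-regime Stokes number and drift speed,
# not necessarily the exact expressions used in the simulation code.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13    # cgs units
rho_grain = 3.0                                 # assumed internal grain density [g/cm^3]

def stokes_number(a_grain, sigma_gas):
    """Midplane Stokes number for Epstein drag: St = (pi/2) * a * rho_s / Sigma_g."""
    return 0.5 * np.pi * a_grain * rho_grain / sigma_gas

def drift_speed(a_grain, sigma_gas, r_au, eta=2e-3):
    """|v_drift| = 2 * eta * v_K * St / (1 + St^2); eta ~ (H/r)^2 pressure-gradient factor."""
    v_k = np.sqrt(G * M_sun / (r_au * AU))
    st = stokes_number(a_grain, sigma_gas)
    return 2.0 * eta * v_k * st / (1.0 + st**2)

# Larger grains (higher St) drift faster and pile up in the pressure bump,
# while ~100 micron grains stay more closely coupled to the gas.
for a in [1e-2, 1e-1, 0.25]:                    # grain radii in cm (100, 1000, 2500 micron)
    v = drift_speed(a, sigma_gas=1.0, r_au=50.0)
    print(f"a = {a:5.2f} cm  ->  drift speed ~ {v / 1e2:.1f} m/s")
```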
So while quantitative uncertainties are introduced into the CAI abundances we find due to uncertainties in the parameter choices, at least qualitatively the result that the Jupiter solution keeps CAIs trapped in the pressure bump seems to be quite general. ### A second planet The efficient outward transport of CAIs during the infall phase, and subsequent inward radial drift, raises the question how they would be affected by the presence of multiple planet gaps in the disk. This question never occurred in the case of the DKA18 model, since CAIs were just barely diffusing past the planet gap from the inner disk, instead of approaching it from the far outer disk. If CAIs become trapped in a pressure bump caused by one of the other giant planets in our Solar System, this might prevent the build-up of a significant CAI abundance near Jupiter, which is where the carbonaceous chondrites are thought to have formed. To answer this question, the model was extended with a second planet gap caused by the formation and growth of Saturn. We assume that Saturn has the same mass as Jupiter (\(M=30M_{\oplus}\)) when it starts opening a gap, and grows to its full size in the same amount of time, meaning it reaches a mass of \(M=95.2~{}M_{\oplus}\), or one Saturn mass, after 4.5 Myr. This leaves three important parameters for which we must choose some value: the formation time \(t_{\rm planet}\), the formation location \(a_{\rm planet}\), and the depth of the planet gap, represented by \(\alpha_{\rm peak}\) in Equation (42). Concerning the planet formation time, we assume that Saturn started forming around the same time as Jupiter (\(t_{\rm planet}=0.6\) Myr). We initially place Saturn at its present-day location at \(a_{\rm planet}=9.6\) AU, although it would make sense to place its birthplace closer to the Sun, since Jupiter also formed a \(40\%\) closer in in our model. The largest uncertainty lies in the value of \(\alpha_{\rm peak}\). As Saturn is less massive than Jupiter, it makes sense that its planet gap would not be as deep and its potential for trapping dust species in a pressure maximum less strong. This places an upper limit of \(10^{-2}\) on \(\alpha\). A lower limit of \(10^{-5}\) is set by the viscosity in the vicinity of Saturn, as this is the lowest value of \(\alpha\) in the surrounding disk, and any lower values would actually produce an overdensity instead of a gap. We rather arbitrarily assume a value of \(\alpha_{\rm peak}=10^{-4}\) to start with. Figure 29 shows the evolution of the CAI surface density resulting from this model. Saturn's planet gap can be seen to the right of that due to Jupiter. While this gap is clearly less deep, a significant build-up of CAI surface density can still be seen behind it. However, this does not seem to prevent a similar build-up of CAIs in the region in between the two planets. This is reflected in Figure 30 showing the CAI abundance profile. Up until the formation time of both planets at 0.6 Myr, this result is identical to the one-planet case of Figure 17. The inward drift of the second abundance peak where CAIs are piling up is also the same in this case, since this occurs independently from whether a planet is present or not. But in the meantime, the CAI abundance is rising faster at the location of Saturn's pressure bump than at Jupiter's. 
Saturn is stopping the inward drift of a significant fraction of the CAIs, but a large enough amount is still leaking through the gap to ensure that the final CAI abundance behind Jupiter is a significant fraction of what it is in the one-planet case. As with the main model, we can investigate how the results depend on the parameter choices. Figure 31 shows how the final abundances after 5 Myr depend on Saturn's formation time \(t_{\rm planet}\), while Jupiter's formation time is kept fixed at 0.6 Myr. Perhaps unsurprisingly, because it matches what happens in the one-planet case, the formation time has little effect on the end result. Sufficient CAIs remain in the disk, drifting inward, and it takes several million years for the second peak to drift all the way in, whether it be to 3 AU or 9.6 AU. Though this effect is not very strong, the CAI abundance in between the two planets does increase for later formation times for Saturn, as more CAIs can drift past its location in the meantime. Figure 30: Evolution of the CAI abundance in the two-planet case. The largest abundance peak has shifted from Jupiter to Saturn, but enough CAIs leak through that the abundance behind Jupiter is still half of what it is in the one-planet case. Figure 29: CAI surface density when Saturn is included into the model. While CAIs are piling up in Saturn’s pressure maximum around 10 AU, some part of the CAIs is still leaking through this gap, as the surface density also keeps increasing in the region in between the two planets. Figure 32 shows how the results depend on Saturn's formation location \(a_{\rm planet}\), while keeping Jupiter fixed at 3 AU. Unlike in the one-planet scenario, this parameter now does have a rather large effect on the final abundance profile. Since the width of the planet gap depends on Saturn's Hill radius, which is proportional to its heliocentric distance, it shrinks as Saturn moves closer in. We have already seen that Saturn's pressure bump is less effective at keeping CAIs in place than Jupiter's is, and moving the formation location closer to the Sun exacerbates this issue. The closer in Saturn forms, the higher the CAI abundance in Jupiter's pressure bump becomes. The blue line for \(a_{\rm planet}=5.6\) AU places Saturn \(\pm\) 40% closer in than where it is located today, similar to Jupiter. The result here greatly resembles the one-planet case with only a small dip at the location of the second planet gap. Finally, and quite as expected, Figure 33 shows that increasing the value of \(\alpha_{\rm peak}\), which causes a deeper gap and thus a stronger pressure gradient in the gas, which pushes CAIs back out towards the pressure maximum, will lead to higher CAI abundances in Saturn's own pressure bump, while letting through less CAIs in the direction of Jupiter. The difference in abundance at Jupiter's pressure bump seems particularly large between the cases with \(\alpha_{\rm peak}=1\times 10^{-4}\) and \(2\times 10^{-4}\), suggesting that perhaps there is a transition point around these values above which Saturn's CAI trapping capability changes from ineffective to effective. Without more precise knowledge about the location where Saturn originally formed or what value of \(\alpha_{\rm peak}\) best represents how well it can push CAIs back towards its pressure bump, it is difficult to say what exactly the effect of a second planet on the CAI abundances in the Solar System would be. 
However, as we have demonstrated, the presence of multiple planet gaps in the solar protoplanetary disk would at least not necessarily be an impediment to obtaining the results of our main model as shown in Figure 17. The inclusion of the infall phase from a collapsing molecular cloud into the model by Desch et al. (2018) therefore does not transport CAIs out too far for the Jupiter solution to still work as an explanation for the relatively high CAI content of carbonaceous chondrites.

Figure 31: Final abundances when varying Saturn's formation time. Later formation allows additional CAIs to drift towards Jupiter.

Figure 33: Final CAI abundance profile after 5 Myr with variations of \(\alpha_{\rm peak}\), which controls the depth of the planet gap. Higher values increase the gradient in the gas surface density and hence lead to a stronger pressure gradient pushing CAIs back towards the pressure bump. This reduces the amount of CAIs passing through and thus lowers the CAI abundance in the region in between the two planets. A rapid transition seems to occur between \(\alpha_{\rm peak}=1\times 10^{-4}\) and \(2\times 10^{-4}\).

Figure 32: Final CAI abundance profile after 5 Myr with variations of Saturn's location of formation \(a_{\rm planet}\). The planet's Hill radius shrinks when it moves in towards smaller heliocentric distances, reducing the width of the resulting planet gap. It then becomes easier for turbulent motions in the gas to propel CAIs through the gap into the region between Jupiter and Saturn. If Saturn formed at 5.6 AU (at least for the fixed value of \(\alpha_{\rm peak}=10^{-4}\)), it fails in trapping many CAIs at all, and the result greatly resembles the one-planet case.

### A gravitationally stable model

Up until now, we have, through the choice of values for the molecular cloud rotation rate and temperature, assumed that so much mass is deposited onto the protoplanetary disk that it becomes gravitationally unstable. It is however possible that the Sun formed in a region of high-mass star formation, where temperatures tend to be higher and strong magnetic fields can slow down the cloud rotation rate through magnetic braking. This leads to clouds with lower angular momentum, and therefore also smaller centrifugal radii during disk formation. In such a situation it is possible that the disk never becomes gravitationally unstable. To see how this affects disk (and CAI) evolution, we ran a model with lower \(\Omega_{\rm c}\), keeping our other disk parameters the same as before. We found that \(Q_{\rm Toomre}\) drops below 2 in at least some part of the disk for rotation rates \(\Omega_{\rm c}\) of at least \(2\times 10^{-15}\) rad s\({}^{-1}\), and thus picked a value of \(\Omega_{\rm c}=1\times 10^{-15}\) rad s\({}^{-1}\) for this new simulation. Since no artificially increased \(\alpha\)-parameter is required in this model, we use the parameterization from Equations (36-38) from the start.
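As a reference for this choice, the stability criterion can be evaluated as in the sketch below; the surface density and temperature profiles used here are simple placeholders rather than output from our simulations.

```python
import numpy as np

# Illustrative Toomre-Q check; Sigma(r) and T(r) below are placeholder profiles.
G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13     # cgs units
k_B, m_H, mu = 1.381e-16, 1.673e-24, 2.3        # Boltzmann constant, H mass, mean mol. weight

def toomre_q(r_au, sigma_gas, temperature):
    """Q = c_s * Omega_K / (pi * G * Sigma); values below ~2 flag (marginal) instability."""
    r = r_au * AU
    c_s = np.sqrt(k_B * temperature / (mu * m_H))
    omega_k = np.sqrt(G * M_sun / r**3)
    return c_s * omega_k / (np.pi * G * sigma_gas)

r_au = np.logspace(-1, 2, 200)
sigma = 2000.0 * (r_au / 1.0) ** -1.0           # placeholder Sigma(r) [g/cm^2]
temp = 300.0 * (r_au / 1.0) ** -0.5             # placeholder T(r) [K]

q = toomre_q(r_au, sigma, temp)
print("minimum Q in this toy disk:", q.min())   # a run counts as stable if min(Q) stays above ~2
```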
Figure 34 shows the resulting time evolution of the gas in this disk model. Comparing this to Figure 7, there are several notable differences. First of all, the disk now only starts forming after 0.28 Myr, as the centrifugal radius grows more slowly and requires more time to even reach the inner disk edge at 0.06 AU. Because the infall phase still ends after 0.4 Myr, disk formation now occurs in a shorter time span. Secondly, the radial extent of the disk is much smaller. At the end of infall, the centrifugal radius is no larger than 0.18 AU, and without the gravitational instability pushing material outward, a sharp drop in the surface density can be observed near \(r=10\) AU. Although this disk ends up with less total mass than in the gravitationally unstable case, this mass is much more concentrated in the inner disk. Therefore the third major difference is that surface densities in the inner disk are now 1-2 orders of magnitude higher. Since this increases the viscous heating rate, the radial extent of the CAI Factory has also increased, though only slightly, to 1 AU.

Figure 34: Viscous evolution of the gas surface density \(\Sigma_{g}\) for a model with a reduced cloud rotation rate and no gravitational instability.

Figure 35 shows the evolution of the CAI abundance profile in this model. The denser gas slows down inward radial drift of CAIs, and they remain spread out fairly evenly between the pressure bump and \(r=10\) AU, beyond which virtually no CAIs are present. The peak abundance value of nearly 3% is naturally lower in this result than in the gravitationally unstable case, and in line with observed meteoritic abundances of a couple percent. In order to achieve a result where the inner disk drains of CAIs, however, we had to increase the planet gap depth by increasing the value of \(\alpha_{\rm peak}\) in Equation (42) from 0.01 to 0.1. Without this adjustment, there is sufficient leakage of CAIs through the planet gap that the abundance on both sides remains almost the same.

Figure 35: Time evolution of the CAI abundance throughout the disk for a model with a reduced cloud rotation rate and no gravitational instability.

As Figure 36 shows, there is only a narrow range of values for \(\Omega_{\rm c}\) for which the final abundance profile resembles that in Figure 35. Increasing the rotation rate by a factor two leads to a situation in which the gravitational instability kicks in again, while the trapping effect rapidly becomes ineffective for lower values. When \(\Omega_{\rm c}=6\times 10^{-16}\) rad s\({}^{-1}\), the centrifugal radius barely exceeds the disk inner edge anymore, and very few CAIs are left in the disk to be trapped. We did not explore the effects of moving the disk inner edge closer to the Sun.

Figure 36: Abundances after 5 Myr for different values of the cloud rotation rate \(\Omega_{\rm c}\). There is only a narrow range of values for which the abundance profile agrees with meteoritic observations without triggering the gravitational instability.

The final result we'll show for our gravitationally stable model is a parameter search over the planet formation location \(a_{\rm planet}\). The smaller radial extent of this disk increases the significance of this parameter considerably. As Figure 37 shows, moving Jupiter's formation location out any further than 3 AU will rapidly decrease the CAI abundance in the pressure bump, and increase the abundance in the inner disk, as not enough CAIs get trapped behind its orbit.

Figure 37: Abundances after 5 Myr for different values of the planet formation location \(a_{\rm planet}\). For formation locations at greater radial distances than 3 AU, the CAI abundance behind the planet gap rapidly decreases, while it increases in the inner disk.

## 4 Discussion

The main science question we set out to answer was whether the DKA18 solution to the CAI storage problem still works when the effects of the infall phase from a parent molecular cloud core are included into the model.
In a nutshell, we can answer this question positively. But while Jupiter's planet gap does serve as an effective barrier preventing CAIs from drifting into the inner solar system and eventually vanishing into the Sun, there are also some important differences between the models. ### Main model The main focus of our work has been on a model in which the solar protoplanetary disk becomes gravitationally unstable during the infall phase, as we found that this situation arises for a wide range of input parameters. The first new result that we found in this model was that CAIs are only created during the infall phase when the disk is still building up. During this phase, the viscosity in the disk is so strong that viscous heating alone will cause an extended region in the inner disk to reach a temperature of 1400 K. This has a very important consequence: the rapid viscous expansion of the gas during the infall phase drags the various dust species along with it, spreading them out much further than would be achieved by turbulent diffusion or meridional flow in an already established disk would. In the DKA18 model, there are virtually no CAIs present at a heliocentric distance of 10 AU, while we find them to exist even a hundred times further out than that. The fact that CAIs are transported out so far into the outer disk leads to another important result. The required time for CAIs to drift all the way back in towards the Sun is now several million years, instead of the several tens of thousands of years it would take them to vanish from the disk when they are only found in the innermost part. The presence of CAIs in the disk at the time when meteorite parent bodies formed is thus a natural consequence of the viscous spreading of the infall phase. This solves the second part of the CAI storage problem without needing to invoke a planet. We briefly considered the possibility that the first part of the CAI storage problem might also be solved without Jupiter trapping any dust particles, due to the pile-up of CAIs creating a distinct peak in the outer disk that slowly drifts in. This second peak forms regardless of whether a planet exists in the model or not. However, without a planet in the model to keep CAIs in place, the inner disk will continuously be seeded with new dust grains drifting in, and the CAI abundance remains very high in the inner disk. So while Jupiter is not needed to explain the presence of CAIs beyond its orbit, it is still required in order to let the inner disk drain successfully. Another important difference between our model and the DKA18 model is the width of the abundance peak at the end of the simulation. In the DKA18 model, this peak extends to roughly 4 AU. In our model, it extends all the way out to 100 AU, although it must be noted that this is not entirely because of Jupiter's pressure bump. The inward radial drift of CAIs is a still ongoing process after 5 Myr, that would presumably continue if the disk itself would not dissipate yet. This model therefore predicts that CAIs should also be present in objects originating from far beyond the orbit of Jupiter, such as for example in Kuiper Belt objects. A final important difference we found is that in our model, the CAI abundance is significantly higher in the particle trap than in both the DKA18 model and real meteorites. 
While this seems problematic, the overall CAI abundance could be lowered by different parameter choices, most notably for the molecular cloud parameters, or by perhaps considering more realistic size distributions of CAIs. There are also some notable similarities between our model and that of Morbidelli et al (2022). Their model also begins with infall from a molecular cloud core. The early, high-viscosity disk undergoes rapid viscous expansion as the radial velocity of gas is positive beyond the centrifugal radius, efficiently transporting dust outward. As in our case, the disk then evolves into a low-viscosity accretion disk as infall ends. Their model then manages to predict the contemporaneous formation through the streaming instability of two isotopically distinct planetesimal groups, one at the snowline around 5 AU and another at the silicate sublimation line around 1 AU. These groups correspond to the parent bodies of CC and NC meteorites, respectively. The difference in composition is caused by a change over time in the composition of the molecular cloud core (which we did not include in our model). Planetesimals at the snowline incorporate mostly material that accreted onto the disk early on, transported outward during the expansion phase, while later infalling material changed the overall composition of the dust in the inner disk. They note, but did not model, that a barrier against dust drift, likely in the form of Jupiter's formation, is still required to prevent the disk from homogenizing before later planetesimal formation is complete. CAIs in their model are assumed to have condensated from early infalling material. ### Gravitationally stable model An argument against CAIs spreading out into the (far) outer disk early on is the observed lack of CAIs in CI chondrites. Only one CAI has been found in a CI chondrite (Frank et al., 2011). CI chondrite parent bodies are thought to have formed after 3-4 Myr at heliocentric distances \(r\geq 15\) AU (Desch et al., 2018), implying that CAIs had not reached these distances in significant quantities yet at this time. This observational constraint can be satisfied by not triggering the gravitational instability, which requires a molecular cloud with less angular momentum than in our main model. This can be achieved by considering models with lower cloud rotation rates and/or higher temperatures. These conditions can be achieved in high-mass star formation regions, where cloud cores with strong magnetic fields can slow down their rotation through magnetic braking (Wurster & Li, 2018). There is evidence that the Sun may have formed in such an environment (Hester & Desch, 2005). By using a smaller cloud rotation rate, we were able to produce a gravitationally stable model in which CAIs do remain trapped behind the planet gap, but don't extend out as far as 15 AU.7 However, while this matches the observation that (almost) no CAIs are found in CI chondrites, it does not explain why they are found in comets that formed in the same region but at later times. Our current models only predict that CAIs were transported to this region early (with gravitational instability) or not at all (without it). Footnote 7: We did not explore different combinations of the cloud rotation rate and temperature that might yield similar results. ### Suggested improvements There are several ways in which the model could be further improved. It might also be good to check the results of the model with a full hydrodynamic code. 
The midplane temperature calculation we made use of could be improved upon. The model used for the stellar luminosity (Baraffe et al. 2002) is not really reliable for the earliest phases of the simulation, where an accurate temperature calculation is arguably the most important. We also assumed a constant opacity throughout the disk, instead of calculating it from the available dust components. A more precise temperature model might influence when and where exactly the CAI Factory is active. We have seen that, at least for the \(\alpha\)-parameterization we employed, the disk in our main model becomes gravitationally unstable during the infall phase. Because this can lead to the emergence of overdense clumps and spiral arms which can transport angular momentum outward, these effects can (at least crudely) be mimicked by artificially increasing the \(\alpha\)-parameter for the viscosity. A full treatment of the gravitational instability would require a code that can handle non-axially symmetric disk models however. The way in which the planet gap is introduced into the model is also quite crude. While the width of the gap grows with the planet's mass, its depth does not. This is an important parameter however, because it determines how well the particle trapping effect works. A more sophisticated model could be used for the opening of the gap. Also not taken into account is the possibility that the gap itself moves over time due to migration of Jupiter. The disk mass in our main model evolves very little after the end of the infall phase, when most of the mass resides at large heliocentric distances, but the viscosity is too weak to move much of it back in to accrete onto the Sun. This is probably not very realistic. The model of Yang and Ciesla (2012) suffered from the same issue. Perhaps an \(\alpha\)-profile could be set up in such a way that a stronger accretion rate emerges, or different mechanisms of angular momentum transport could be included, such as magnetized winds. The dissipation phase of the disk due to photoevaporation could also be included into the model. This might at the same time also impact surface densities and radial drift velocities when meteoritic parent body formation is still ongoing. We have only modelled a single size of CAI grains simultaneously, even though CAIs exist in a size range from microns up to a centimeter. While we checked in section 3.2 how the final CAI abundances change for grains of different sizes (see Figure 28), each of these iterations still assumes that only that particular size of grain exists. A more realistic model would include a more complete sample of the CAI size distribution. This could be complicated to achieve however, because it is not clear whether CAIs of different sizes are created at the same time or with the same efficiency. It could equally well be that mostly large CAIs are created in the CAI Factory, with smaller CAIs being the result of later fragmentation in the disk. The effects of photoevaporation could be especially important in high-mass star formation environments with a high FUV flux, as this could have a significant impact on the disk evolution, for example by truncating the disk through rapid mass loss at the outer edge. It is also a possibility that mass loss due to photoevaporation might keep the disk gravitationally stable for higher values of the cold rotation rate \(\Omega_{\rm c}\). 
Finally, something that has also not been explored in this project is the possibility of a non-homogeneous distribution of matter in the molecular cloud, or a composition that changes over time. ## 5 Conclusion The model we built shows that the solution that Desch et al (2018) proposed for the CAI storage problem, in which a pressure maximum created by Jupiter opening a gap in the disk traps CAIs in place, also works when taking into account that the solar protoplanetary disk formed out of a collapsing molecular cloud core. We find that CAIs are created during the infall phase and are then very efficiently transported outward by the combined effects of advection by the rapidly expanding gas, redistribution of matter due to possible gravitational instability and turbulent diffusion. Our main focus was on a disk model massive enough to become gravitationally unstable. In this case, subsequent inward radial drift creates a double peak structure in the CAI abundance. As well as piling up in Jupiter's pressure bump, CAIs in the far outer disk start piling up when they drift into a region of higher gas surface density, decreasing their Stokes number and slowing them down. The two abundance peaks keep growing over the course of the simulation until eventually merging together, forming a broad (\(\pm\)100 AU) region with elevated CAI abundances starting just behind Jupiter, where carbonaceous chondrites are then assumed to have formed. An interesting result from this extended period of inward radial drift is that the presence of CAIs in the disk after 4-5 Myr is a natural consequence of the infall phase, and does not actually require Jupiter as an explanation. In the meantime, the inner disk drains of CAIs as they drift into the Sun, leaving a much lower level of abundances in the region where ordinary and enstatatic chondrites then form. For input parameters that do not lead to the gravitational instability, the final CAI abundance profile in the disk looks qualitatively similar. The abundance peak is less wide however, as the disk is naturally smaller, and no double peak structure is observed. We find that the results of our model do not strongly depend on most parameter choices. This applies even to some parameters that the DKA18 model is more sensitive to, such as (in the gravitationally unstable case) the time and location where Jupiter forms. The molecular cloud properties seem to have the largest quantitative impact on the CAI abundance. More importantly, the general shape of the CAI abundance profile is not disrupted by most parameter variations, so the Jupiter solution works for any sufficiently large dust particle. The presence of multiple planet gaps due to the other gas planets in our Solar System is not necessarily an impediment for the CAIs to drift back in towards Jupiter, as we showed that there are at least reasonable parameter choices that cause additional planet gaps to let through significant amounts of CAIs. Quantitatively, the CAI abundances our model predicts are quite uncertain, not only due to parameter uncertainty, but also due to simplifications in the model, such as the singular CAI grain size. Qualitatively however, we are confident that the described results are authentic. ###### Acknowledgements. We thank the referee, Steve Desch, for an extensive and very valuable referee report which greatly helped us to improve the paper.
By studying the distribution of calcium-aluminium-rich inclusions (CAIs), we can learn about the dynamical history of the protoplanetary disk in which our Solar System formed. A longstanding issue concerning CAIs is the CAI storage problem: CAIs are thought to have formed at high temperatures close to the Sun, yet they are found mainly in carbonaceous chondrites, which formed beyond Jupiter's orbit. Moreover, radial drift of CAI-sized particles should have removed them from the solar protoplanetary disk well before the formation, millions of years later, of the meteorite parent bodies in which they are encountered. We revisit a previously proposed solution to the CAI storage problem put forward by Desch, Kalyaan, and Alexander, in which CAIs are transported radially outward through the disk and are subsequently trapped, following the formation of Jupiter's massive core, by the planet
2301.00658
Comparative Analysis of Terahertz Propagation Under Dust Storm Conditions on Mars and Earth
Reliable Terahertz (THz) links are necessary for outdoor point-to-point communication with the exponential growth of wireless data traffic. This study presents a modified Monte Carlo simulation procedure for estimating THz link attenuation due to multiple scattering by dust particles on the THz beam propagation path. Scattering models are developed for beams through dust, based on Mie and Rayleigh approximations for corresponding frequencies for Earth (0.24 THz) and Mars (1.64 THz). The simulation results are compared, considering parameters such as the number of Monte-Carlo photon (MCP) packets, visibility, dust particle placement density along the beam, frequency, and distance between the transmitter and the receiver. Moreover, a channel capacity model was proposed, considering THz link attenuation due to dust storms, spreading loss and molecular absorption loss for Earth and Mars outdoor environments. Simulation results for Earth show that link attenuation increases with dust particle placement density, distance and frequency, and attenuation decreases with visibility. On Mars, similar results are obtained, except that the attenuation is variate around a constant value with the frequency increase. Channel capacity is estimated for Earth and Mars environments considering time and distance-dependent scenarios. Time windows that show a sudden drop of dust particles along the beam provide opportunities to communicate with high reliability. Moreover, increasing the distance between the transmitter and receiver severely reduces the channel capacity measurement in strong dust storm conditions in both environments. Our study has found that weak dust storms have relatively little effect on Mars, but much larger effects on Earth.
Lasantha Thakshila Wedage, Bernard Butler, Sasitharan Balasubramaniam, Yevgeni Koucheryavy, Mehmet C. Vuran
2022-12-11T00:44:21
http://arxiv.org/abs/2301.00658v1
# Comparative Analysis of Terahertz Propagation Under Dust Storm Conditions on Mars and Earth ###### Abstract Reliable Terahertz (THz) links are necessary for outdoor point-to-point communication with the exponential growth of wireless data traffic. This study presents a modified Monte Carlo simulation procedure for estimating THz link attenuation due to multiple scattering by dust particles on the THz beam propagation path. Scattering models are developed for beams through dust, based on Mie and Rayleigh approximations for corresponding frequencies for Earth (0.24 THz) and Mars (1.64 THz). The simulation results are compared, considering parameters such as the number of Monte-Carlo photon (MCP) packets, visibility, dust particle placement density along the beam, frequency, and distance between the transmitter and the receiver. Moreover, a channel capacity model was proposed, considering THz link attenuation due to dust storms, spreading loss and molecular absorption loss for Earth and Mars outdoor environments. Simulation results for Earth show that link attenuation increases with dust particle placement density, distance and frequency, and attenuation decreases with visibility. On Mars, similar results are obtained, except that the attenuation is variate around a constant value with the frequency increase. Channel capacity is estimated for Earth and Mars environments considering time and distance-dependent scenarios. Time windows that show a sudden drop of dust particles along the beam provide opportunities to communicate with high reliability. Moreover, increasing the distance between the transmitter and receiver severely reduces the channel capacity measurement in strong dust storm conditions in both environments. Our study has found that weak dust storms have relatively little effect on Mars, but much larger effects on Earth. THz Communication, Atmosphere, Attenuation, Scattering, Dust. ## I Introduction Sixth generation (6G) wireless networks aim to push the frequency spectrum into the Terahertz (THz) band to fulfill rising capacity demands and requirements, given the opportunity for higher bandwidths [1, 2, 3]. The 0.1 to 10 THz frequency range has the potential to (1) realize high bandwidth transmissions that can allow hundreds of GB/s data rates for communication [4, 5, 6], and (2) provide new opportunities to create miniature THz-enabled antennas due to the small wavelengths (30 \(\mu\)m - 3 mm), enabling us to design arrays with a large number of antenna units [7, 8, 9]. Numerous studies have shown that specific THz frequencies suffer high molecular absorption due to atmospheric gases (e.g., water vapor and oxygen). However, given the wavelength and high energy photons of THz signals, other particles can also significantly impact the link budget, which can result in scattering and absorption of signal power. Recent studies have shown that solid particles such as dust, sand and ice affect THz signals [10], in addition to molecular absorption from atmospheric gases [11]. However, past studies have paid little attention to signal attenuation caused by solid particles such as dust and sand. Therefore, further investigation is required to determine how dynamic environments composed of solid particles, such as dust storms, affect THz links. This requires further investigation, especially as we expand connectivity in rural areas and other planets (e.g., Mars) to interplanetary scale. 
In the case of Mars, the recent vision of colonizing the planet will require high-bandwidth connectivity to maximize chances for human survival. A dust storm is a physical layer of dust and debris blown into the atmosphere by winds with horizontal and vertical velocity components. On Earth, the wall of dust can be miles wide and several thousand feet high. Dust storms are more frequently found in arid regions such as the Middle East [12], North China [13], and North Africa [14] at specific periods of the year. In more densely populated areas, human activity creates dust when burning fossil fuels for heating, cooking, or transport. Industrial and construction processes also create dust. This study compares the effects of solid dust particles on (sub)THz signals on both Earth and Mars, taking account of varying environmental conditions. Considering the differences in atmospheric conditions on Earth and Mars, with or without dust, suggests the use of different frequencies to enable relatively long-distance wireless communication on both planets. Dust storms are one of the most remarkable features on Mars. Even though wind speed on Mars is not significantly higher than on Earth, the extremely dry, dusty surface yields more dust storms. Figure 1 provides an overview of _selected_ wireless communications applications on Earth, and _proposed_ wireless communication applications on Mars. While the applications differ, they are both affected by wireless channel losses, including those caused by dust particles that can scatter the EM waves used for communication. The rest of this paper considers the similarities and differences in the channel conditions, and includes models and simulations based on simulated dust storms that result in beam scattering, as shown at the bottom of Fig. 1. In a dusty environment, the dust particle density is higher than usual, and the effects of multiple scattering of EM waves due to dust particles are non-negligible. Recent studies have not considered this significant effect on the attenuation of EM waves [15, 16]. The lack of consideration of multiple scattering effects can result in significant gaps between theoretical and experimental results. This paper considers multiple scattering of EM waves due to dust particles along the beam propagation path. To this end, we model the EM wave as a _photon packet_ instead of a shower of photons. It is inaccurate to consider the EM wave as a shower of photons characterized by the position of a photon and its trajectory [17]. A photon packet models a portion of the _energy weight_ of the EM wave rather than single photons (which have quantum behavior). Therefore, we can consider an EM wave as a collection of energy packets and model multiple scattering effects utilizing the Monte Carlo algorithm, to infer the radiative transfer equation. The THz link scattering loss measurement in this study is inspired by [18], where the scattering loss due to charged dust particles is calculated by considering the energy of the transmitting signal as Monte Carlo Photon Packets. Vertical THz attenuation is determined in [18], but this study considers horizontal point-to-point communication for both Earth and Mars in dusty atmospheric scenarios. The contributions of this paper are: 1. A 3-D geometric scattering model for multiple photon-dust particle interactions is presented, using both Mie and Rayleigh approximations, to estimate the probability that a photon packet arrives at the receiver. 2. 
The model is used in simulation to estimate the overall channel capacity considering THz and sub-THz link budget degradation due to the combination of scattering by dust particles, molecular absorption loss due to the atmosphere, and free-space spreading loss. 3. Different communication channel conditions (on Earth and Mars) and their effect on channel capacity, including power loss caused by multiple scattering by dust, are compared and analysed.

The rest of this paper is organized as follows: Section II describes dust conditions and how they affect EM propagation and contrasts the conditions that prevail on Earth and Mars; Section III describes how 3D dust storm simulation is affected by the number of dust particles on the EM wave propagation path. Then Section IV explains the Monte-Carlo simulation process for calculating the transmittance/attenuation when photon packets are scattered by multiple dust particles. Section V presents estimates of transmittance/attenuation obtained by Monte-Carlo simulation, in various parameter settings. Section VI describes the results of the simulation process and presents a channel capacity model that combines the effect of spreading, molecular absorption and multiple scattering by dust, with simulated results. Finally, Section VII presents our conclusions.

| Gas | Composition on Earth | Composition on Mars |
| --- | --- | --- |
| N\({}_{2}\) | 78.084% | 2.7% |
| O\({}_{2}\) | 20.946% | 0.13% |
| Ar | 0.93% | 1.6% |
| H\({}_{2}\)O | 1.3% | 100-400 ppm |
| CO\({}_{2}\) | 0.003% | 95.32% |
| CH\({}_{4}\) | 1.5 ppm | - |
| SO\({}_{2}\) | 1 ppm | - |
| O\({}_{3}\) | 0.05 ppm | 0.1 ppm |
| N\({}_{2}\)O | 0.02 ppm | - |
| CO | 0.01 ppm | 0.08% |
| NH\({}_{3}\) | 0.01 ppm | - |
| NO | - | 100 ppm |

TABLE I: Atmospheric gas composition comparison between Earth and Mars [16]; ppm is a concentration of parts per million.

Fig. 1: THz wireless communication applications and links through Earth and Mars atmospheric and environmental conditions.

## II Background

### _THz link behaviour in Dust Storms_

THz signal attenuation due to the scattering loss caused by high dust particle density on the THz beam propagation path is the main concern of this study. Dust particle density on Mars is expected to be higher than on Earth because of the dusty atmosphere with low water vapour concentration. Mars dust consists of basalt and montmorillonite clay [19]. On the other hand, Earth dust consists of pollen, bacteria, smoke, ash, salt crystals from the ocean, and small amounts of dirt or various rocks, including sand. Moreover, during dust storm conditions on Mars, the effective radius of the dust particles varies from 1 to 4 microns with an effective variance of 0.2-0.4 [20]. However, on Earth, the effective radius varies between 1 and 150 microns [18, 19]. Many researchers have investigated THz [16, 18, 21, 22] and lower frequency band [23, 24] attenuation due to the presence of dust particles on the beam propagation path. In [18], Monte-Carlo simulation was used to calculate the transmittance of EM waves when they propagate through dust, considering multiple scattering effects for charged particles at 20 and 75 GHz. Hongxia et al. [21] also studied the attenuation characteristics of THz waves subject to multiple scattering caused by dust storms in the Tengger desert, using the _Mie_ scattering approximation and Monte Carlo simulation.
In addition, considering the Mie theory, Diao et al. [16] investigated THz wave attenuation due to heavy dust in the Martian atmosphere in the 0.1-1 THz frequency range and compared with Earth measurements. [22] investigated attenuation at 0.625 THz caused by dust utilising an experimental setup and found that degradation of the THz link budget is minor due to dust, compared to that found using IR beams with 1.5 \(\mu m\) wavelength, and average attenuation of the THz link is proportional to the dust particle density. Moreover, Elshaikh et al. [23] developed a mathematical model to characterise the microwave attenuation due to dust, considering parameters such as visibility, frequency, particle size and complex permittivity. Li et al. [24] calculated the light scattering properties of partially charged dust particles utilising Mie scattering theory for various frequencies and found that for higher THz frequency EM waves, the attenuation effect of charge carried by sand particles can be ignored. Furthermore, [25] presents the EM scattering properties of the small partially charged sand/dust particles, using the Rayleigh approximation, for microwave frequencies. ### _Atmospheric Condition Differences between Mars and Earth_ When THz radio waves pass through the atmosphere, the signals experience attenuation due to many factors, which differ in their impact between Earth and Mars. This study focuses on point-to-point signal degradation in the lower part of the atmosphere (the troposphere) on Earth and Mars, when communicating antennas are placed 50 meters above the ground. Apart from improved line-of-sight properties, [26] shows that longer communication distances can be achieved on Mars because dust particle density decreases with height. The propagation medium in the troposphere of both planets includes gases, water vapour, clouds, fog, ice, dust, and assorted aerosols (haze), but the proportions vary. The impairment mechanisms include absorption, scattering, refraction, diffraction, multi-path, scintillation and Doppler shift. Impairment phenomena include fading, attenuation, depolarization, frequency broadening, and ray bending. However, this study considers only Line-of-Sight (LoS) transmission under dust storm scenarios through the troposphere of both Earth and Mars. It considers signal attenuation based on three factors: (1) free space path loss (which is the same for Earth and Mars), (2) molecular absorption due to atmospheric gases (which are different for Earth and Mars), and (3) scattering loss due to dust particles along the propagation path (Mars and Earth typically have different dust distributions). Free space path loss occurs due to misalignment between the transmitter and the receiver antennas. It is the same for both environments because it only depends on carrier frequency and distance. Molecular absorption loss plays a significant role on both planets. It measures the fraction of power loss (of the carrier wave) converted to kinetic energy due to molecular vibration when EM waves propagate through molecules of the atmosphere. Therefore, unlike spreading loss, molecular absorption loss depends on local atmospheric gas composition and density (see Table I), including carrier frequency and distance between the transmitter and the receiver. According to [27], certain frequencies of the THz spectrum, such as 183, 325, 380, 450, 550, and 760 GHz, suffer attenuation that is significantly greater than the free space propagation loss, due to water vapor absorption on Earth. 
However, the Martian atmosphere contains only about 1/1,000 as much water as Earth's. Still, even this tiny amount can condense out, forming clouds that ride high in the atmosphere or swirl around the slopes of towering volcanoes [19]. This issue needs to be considered for vertical communication between Mars surface devices (rovers, habitats, etc.) and satellites. Since our study focuses on horizontal point-to-point communication, we do not need to consider the upper atmospheric layers' impact on THz signal transmission. Therefore, at these frequencies, we expect lower molecular absorption loss and higher channel capacity on Mars compared to Earth. To the best of our knowledge, this is the first study that compares attenuation (at THz frequencies) due to dust storms on Earth and Mars, applying Monte Carlo simulation to the corresponding Mie and Rayleigh approximations. This paper also presents a channel capacity model that includes the effect of spreading, molecular absorption and dust scattering losses for sub-THz and THz links on Earth and Mars, respectively.

## III THz beam propagation through a simulated 3D dust storm

This section discusses THz wave propagation through a randomly simulated 3D dust storm, created by simulating wind with both vertical (up-draught) and horizontal velocity components. This is used to calculate the number of dust particles on the beam path. The simulated dust storm (see Fig. 2) consists of a line source starting at \(X=0\) and a vertically upward movement of dust based on the vortex motion due to wind turbulence at \((6000,0,0)\). The line source dust storm in this study spreads over 8 m along the Y-axis \((-4\leq Y\leq 4)\). Such a line source is more realistic than a point source for dust storm simulation on both Earth and Mars. MATLAB's wind package was used to simulate the storm, assuming an exponential movement of dust along the positive X-axis coupled with strong wind in the same direction. Also, dust particles gradually precipitate from the atmosphere when their weight exceeds the upward forces. Dust particle movement of the point source dust storm downstream of the line source dust storm comprises both upward (point) wind and horizontal (line) wind, resulting in a vortex flow (a simulated whirlwind).

When counting the number of dust particles on the cone-shaped THz beam, we used the following method. First, consider the THz beam starting at the point \((0,0,h)\), where \(h\) (50 m) is the transmitter antenna height, with the beam propagation direction aligned with the positive X-axis. In this cone-shaped beam, the maximum radius of the impact area is calculated to be approximately 15 cm for the corresponding transmitting distance of 10 km. Since it is difficult to calculate the dust particle concentration on such a pencil-thin beam, we sub-divided the cone-shaped beam into \(1\times 10^{6}\) disks with 1 cm distances between the disk centres (see Fig. 2 (a)). Then we identified the position of each dust particle at 1 m below the antenna height and recorded its position considering the nearest two disks. By looping over the position of each dust particle, and comparing the distances between the nearest disk centres and the particle-to-centre distance, we encoded the position as being inside (1) or outside (0) of the THz beam for each particle. From this, we calculated the number of dust particles along the THz propagation path.
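A simplified version of this in/out test is sketched below. It checks each particle directly against the local cone radius instead of via the 1 cm disk discretization described above, and the particle positions here are randomly generated purely for illustration.

```python
import numpy as np

# Simplified in/out test for particles relative to the cone-shaped beam.
h = 50.0          # transmitter antenna height [m]
D = 10_000.0      # transmitter-receiver distance [m]
r_max = 0.15      # beam radius at the receiver [m] (~15 cm at 10 km)

def inside_beam(particles):
    """particles: (N, 3) array of (x, y, z) positions in metres.
    Returns a boolean mask; the number of True values is the particle count on the beam."""
    x, y, z = particles[:, 0], particles[:, 1], particles[:, 2]
    local_radius = r_max * np.clip(x / D, 0.0, 1.0)     # cone widening linearly with x
    dist_from_axis = np.sqrt(y**2 + (z - h)**2)          # beam axis runs along +X at height h
    return (x >= 0) & (x <= D) & (dist_from_axis <= local_radius)

rng = np.random.default_rng(0)
particles = np.column_stack([
    rng.uniform(0, D, 100_000),        # x positions along the link
    rng.uniform(-4, 4, 100_000),       # y positions over the line-source extent
    rng.uniform(48, 52, 100_000),      # z positions around the antenna height
])
print("particles inside beam:", int(inside_beam(particles).sum()))
```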
Considering the dust particle size on Mars to vary between 0.5 and 4 microns [20, 28], we simulate a scenario of sending a THz signal with the transmitter position (5500,0,50) and the receiver position (6500,0,50), which creates a 1000 \(m\) distance between them, while placing the point-source vortex movement at (6000,0,0). As a result, we found that the average fraction of the generated dust particles that fall on the line of propagation of the THz transmission beam is 0.0022872. Moreover, considering just the dust particles on the cone-shaped beam path, their density averaged 10.1832 (say, 10) particles per \(cm^{3}\). Hence, assuming the beam face area is 0.01 \(cm^{2}\), the number of dust particles along the beam for a distance of 10 \(m\) between the transmitter and the receiver is approximately 100 particles. However, in this random walk simulation process, it is difficult to control the effective radius of the dust particles. Therefore, depending on the dust particle size, the scattering effects (which depend on the number and size of the particles) along the beam propagation path can vary. ## IV Modelling Monte Carlo photon packet propagation through dust particles In this section, we calculate the transmittance and the corresponding attenuation of the THz EM wave when it propagates through suspended dust particles. The initial intention was to consider the THz EM wave as a collection of photons. Although photon position and trajectory are not meaningful here [17], collections of photons enable us to discretize the beam in a physically meaningful way. Monte Carlo simulation is used to estimate transmittance, where the incident plane EM wave is discretized as Monte Carlo photon (MCP) packets/units. Such photon packets provide an appropriate physical unit for discrete event simulation [18]. Each MCP packet is considered to be an equally divided portion of the energy weight of the EM wave field. In this simulation model, we assume that the particle number concentration is uniformly distributed throughout the THz beam area, and that dust particles are randomly positioned. The intensity (\(I_{0}\)) of the incident THz EM wave can be expressed as \(I_{0}=M\,W\), where \(M\) is the number of MCP packets per unit area per unit time and \(W\) is the energy weight of each MCP. Here we suppose that MCP packet \(i\)\((i=1,2,...,M)\) is randomly scattered by \(j\) dust particles before it either exits the beam cone or reaches the receiver interface boundary at \(X=D\). We assume that MCP packets enter from the point \((0,0,h)\) (where the height \(h\) corresponds to the transmitter antenna height) (see Fig. 2) and are forward-scattered by scattering particles \(S_{i,l}\)\((l=1,2,...,j)\) whose positions are denoted by \((x_{i,l},y_{i,l},z_{i,l})\), which are assumed random. Moreover, the algorithm will randomly select the number of scattering particles (\(l\)) that collide with an MCP packet from the \(j\) dust particles on the EM wave propagation path, because each MCP packet will randomly change its propagation direction after every collision and dust particles are uniformly distributed along the beam path. We suppose the initial energy weight of each MCP packet is \(W=W_{i,0}=1\) and, after scattering due to scattering particle \(S_{i,l}\), it is \(W_{i,l}\). Fig. 2: Multiple scattering processes of EMWs in a sand/dust storm with (a) the decision-making (in/out) method of dust particles from the beam and (b) the local coordinate system.
We also define a local coordinate system \(xyz\) with its origin located at the scattering particle. The propagation direction of the MCP packet \(i\) with respect to this local coordinate system is then along the \(x\) direction, considering forward scattering. The scattering direction of the MCP packet \(i\) after the impact with scattering particle \(S_{i,l}\) can be expressed using the direction cosines \((\mu^{x}_{i,l},\mu^{y}_{i,l},\mu^{z}_{i,l})\) in the global coordinate system \(XYZ\), which are defined with the help of the scattering (or deflection) angle \(\theta\) and the azimuth angle \(\phi\) (see Fig. 2 (b)). In each simulated collision, the scattering particle position \((x_{i,l},y_{i,l},z_{i,l})\) and the propagation angles \((\theta,\phi)\) used to calculate the direction cosines are drawn randomly in the global coordinate system, as explained in detail later. Direction cosines can be calculated according to Fig. 2 (b) as follows, \[\mu^{x}_{i,l}=-sin(\theta_{i,l})cos(\phi_{i,l})\sqrt{1-(\mu^{x}_{i,l-1})^{2}}+\mu^{x}_{i,l-1}cos(\theta_{i,l}) \tag{1a}\] \[\mu^{y}_{i,l}=\frac{sin(\theta_{i,l})(\mu^{y}_{i,l-1}\mu^{x}_{i,l-1}cos(\phi_{i,l})-\mu^{z}_{i,l-1}sin(\phi_{i,l}))}{\sqrt{1-(\mu^{x}_{i,l-1})^{2}}}+\mu^{y}_{i,l-1}cos(\theta_{i,l})\] (1b) \[\mu^{z}_{i,l}=\frac{sin(\theta_{i,l})(\mu^{z}_{i,l-1}\mu^{x}_{i,l-1}cos(\phi_{i,l})+\mu^{y}_{i,l-1}sin(\phi_{i,l}))}{\sqrt{1-(\mu^{x}_{i,l-1})^{2}}}+\mu^{z}_{i,l-1}cos(\theta_{i,l}). \tag{1c}\] If \(|\mu^{x}_{i,l-1}|>0.99999\), then \[\mu^{x}_{i,l}=\frac{\mu^{x}_{i,l-1}}{|\mu^{x}_{i,l-1}|}cos(\theta_{i,l}) \tag{2a}\] \[\mu^{y}_{i,l}=sin(\theta_{i,l})cos(\phi_{i,l})\] (2b) \[\mu^{z}_{i,l}=sin(\theta_{i,l})sin(\phi_{i,l}) \tag{2c}\] In the initial stage, we suppose that the MCP packet \(i\) enters the dust particle zone from \(X=0\) at the point (0,0,h) and propagates along the direction \((\mu^{x}_{i,0},\mu^{y}_{i,0},\mu^{z}_{i,0})=(1,0,0)\). The simulation process depends on uniformly-distributed, randomly generated numbers \(\epsilon_{i,l},\nu_{i,l}\), and \(\chi_{i,l}\sim Uniform(0,1)\), which are used to calculate the random variables \(\Delta S_{i,l},\theta_{i,l}\) and \(\phi_{i,l}\). Here \(\Delta S_{i,l}\) is the travelling distance between the scattering particles \(S_{i,l}\) and \(S_{i,l-1}\) and is defined as, \[\Delta S_{i,l}=\frac{-ln(\epsilon_{i,l})}{C_{ext}} \tag{3}\] where \(C_{ext}\) is the total extinction cross-section efficiency of spherical dust particles with radius \(r\). The total extinction cross-section efficiency is expressed [18] as, \[C_{ext}=\int_{r_{min}}^{r_{max}}N_{0}P(r)c_{ext}\,dr, \tag{4}\] where \(P(r)\) is the log-normal size distribution of dust particles for both Earth [29] and Mars [16] environments. Here, \(N_{0}\) is the number of dust particles per unit volume, and it can be expressed as a function of visibility (\(V_{b}\)) [16], which is represented as, \[N_{0}=\frac{15}{0.034744V_{b}\int_{0}^{2r_{max}}\pi r^{2}P(r)dr}. \tag{5}\] On Earth, the dust particle radius varies between 1 and 150 \(\mu m\). Therefore, since the effective diameter of the dust particles is comparable to the wavelength of the THz frequency utilised in this study, we can use the Mie approximation [30] to infer the total extinction cross-section (\(c_{ext}\)) [19], which is the sum of the absorption and scattering cross-sections.
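As a numerical illustration of eqs. (4)-(5), the short sketch below evaluates \(N_{0}\) from visibility and the integral for \(C_{ext}\). It is a minimal sketch only: the log-normal parameters are illustrative (not values from [16] or [29]), the integral is taken over the supplied radius grid, and the single-particle cross-section \(c_{ext}(r)\) is treated as a user-supplied function such as the Mie or Rayleigh expressions discussed next.

```python
import numpy as np

def lognormal_pdf(r, r_med=10e-6, sigma_g=2.0):
    """Hypothetical log-normal size distribution P(r); parameters are
    illustrative only, not taken from the cited references."""
    s = np.log(sigma_g)
    return np.exp(-(np.log(r / r_med))**2 / (2 * s**2)) / (r * s * np.sqrt(2 * np.pi))

def particle_density(visibility, r_grid, pdf):
    """N0 from eq. (5): visibility in metres, radii in metres."""
    geom = np.trapz(np.pi * r_grid**2 * pdf(r_grid), r_grid)
    return 15.0 / (0.034744 * visibility * geom)

def total_extinction(visibility, r_grid, pdf, c_ext_single):
    """C_ext from eq. (4), with c_ext_single(r) supplied by the caller
    (e.g. the Mie or Rayleigh expressions of eqs. (6) and (8))."""
    n0 = particle_density(visibility, r_grid, pdf)
    return np.trapz(n0 * pdf(r_grid) * c_ext_single(r_grid), r_grid)

# Example with a placeholder geometric cross-section instead of Mie/Rayleigh:
r = np.linspace(1e-6, 150e-6, 2000)          # Earth-like radius range
C_ext = total_extinction(1000.0, r, lognormal_pdf,
                         c_ext_single=lambda r: 2 * np.pi * r**2)
print(C_ext)
```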
The \(c_{ext}\) is expressed by the Mie solution for spherical particles with dielectric constant \(\epsilon\)[23] as, \[c_{ext}=\frac{k^{3}r\lambda^{2}}{2}(c_{1}+c_{2}(kr)^{2}+c_{3}(kr)^{3}) \tag{6}\] where, \[c_{1}=\frac{6\epsilon^{\prime\prime}}{(\epsilon^{\prime}+2)^{2}+(\epsilon^{\prime\prime})^{2}} \tag{7a}\] \[c_{2}=\epsilon^{\prime}\frac{6\,7(\epsilon^{\prime})^{2}+7(\epsilon^{\prime\prime})^{2}+4\epsilon^{\prime}-20}{[(\epsilon^{\prime}+2)^{2}+(\epsilon^{\prime\prime})^{2}]}+\frac{1}{15}+\frac{5}{3[(2\epsilon^{\prime}+3)^{2}+4(\epsilon^{\prime\prime})^{2}]}\] (7b) \[c_{3}=\frac{4}{3}\Big{\{}\frac{(\epsilon^{\prime}-1)^{2}(\epsilon^{\prime}+2)+[2(\epsilon^{\prime}-1)(\epsilon^{\prime}+2)-9]+(\epsilon^{\prime\prime})^{4}}{[(\epsilon^{\prime}+2)^{2}+(\epsilon^{\prime\prime})^{2}]}\Big{\}}. \tag{7c}\] The complex refractive index of dry dust particles on Earth can be expressed as \(\sqrt{3+\imath\frac{18.256}{f}}\)[21], where \(f\) is the frequency and \(\imath\) is the imaginary unit \(\sqrt{-1}\). On the other hand, the total extinction cross-section of the dust particles on Mars can be evaluated utilising the Rayleigh approximation [25], because the effective diameter of the dust particles on Mars (1-8 \(\mu m\)) is less than one-tenth of the wavelength of the frequency [31] used in this study. Therefore, the total extinction cross-section can be expressed as, \[c_{ext}=\frac{8}{3}\pi k^{4}r^{6}\Big{|}\frac{\epsilon_{r}-1}{\epsilon_{r}+2}\Big{|}^{2}+12\pi kr^{3}\frac{\epsilon_{r}^{\prime\prime}}{|\epsilon_{r}+2|^{2}}+\frac{\pi}{6}\frac{k\epsilon^{\prime\prime}\epsilon^{2}}{E_{0}^{2}\epsilon_{0}^{2}}|\epsilon_{r}-1|^{2} \tag{8}\] where \(\epsilon_{r}\) is the relative permittivity [25]. Moreover, we can take the complex refractive index of dust particles on Mars as \(1.52+0.01i\)[16, 26], corresponding to the radius range (0.5-4 \(\mu m\)) used in this study. The scattering angle, \(\theta_{i,l}\), due to the \(i^{th}\) MCP packet's impact with scatterer \(l\) is represented as, \[\theta_{i,l}=\left\{\begin{array}{l}cos^{-1}\Big{\{}\frac{1}{2g}\big{(}(1+g^{2})-\big{(}\frac{1-g^{2}}{1-g+2g\nu_{i,l}}\big{)}^{2}\big{)}\Big{\}}\text{ for }g\neq 0\\ cos^{-1}(2\nu_{i,l}-1)\text{ for }g=0\end{array}\right. \tag{9}\] where \(\phi_{i,l}\) is the azimuth angle due to the same impact, \[\phi_{i,l}=2\pi\chi_{i,l} \tag{10}\] and \(g=<cos(\theta)>\in[0,1]\) is the asymmetry factor (here, \(g=0\) corresponds to isotropic scattering and \(g=1\) to fully forward scattering). We assume \(g\) varies uniformly between 0.5 and 1 in our simulations, a range representative of predominantly forward scattering. After calculating the random variables \(\epsilon_{i,l},\nu_{i,l}\), and \(\chi_{i,l}\), we can determine the (random) position of the scattering particle \(S_{i,l}\) (\(X_{i,l},Y_{i,l},Z_{i,l}\)) in the global coordinate system utilising equations (1), (2) and (3) as below. The position of the scattering particles can be expressed as, \[X_{i,l}=X_{i,l-1}+\Delta S_{i,l}\:\mu_{i,l-1}^{x} \tag{11}\] \[Y_{i,l}=Y_{i,l-1}+\Delta S_{i,l}\:\mu_{i,l-1}^{y}\] \[Z_{i,l}=Z_{i,l-1}+\Delta S_{i,l}\:\mu_{i,l-1}^{z}\] Successful transmittance occurs only for MCP packets that reach the receiver interface at a distance of \(D\) from the transmitter. If \(X_{i,l}\geq D\), this means that \(S_{i,l-1}\) is the last scattering particle encountered by the MCP packet \(i\) before it leaves the region boundary (\(X=D\)) through the receiving interface.
Therefore, we can stop the simulation process and go to the next MCP packet (\(i+1\)) after calculating its current energy weight (\(W_{i,l}\)). The energy weight follows the Beer-Lambert law, which determines how \(W_{i,l-1}\) is related to the \(W_{i,l}\)[18, 32]. Thus, the energy weight of an MCP packet after collision with a scattering particle can be expressed as, \[W_{i,l}=W_{i,l-1}\:exp\Big{\{}\frac{-C_{ext}(X_{i,l}-X_{i,l-1})}{\mu_{i,l-1}^ {x}}\Big{\}}. \tag{12}\] If \(X_{i,l}<D\), this means MCP packet \(i\) is unable to reach the receiver interface after impacting with \(l\) scatters. From this point, we focus our interest more on the energy weight of the MCP packet \(i\) and calculate the energy weight of the MCP packet \(W_{i,l}\) using eq. (12). In this instance, we assume that the initial energy weight of the MCP packet is a unit (i.e., \(W_{i,0}=1\)), where we define a threshold (\(\epsilon_{t}\)) value of \(1\times 10^{-5}\) to consider as the minimum energy weight that an MCP packet can take after \(l\) impacts with the scatters. If the energy weight of an MCP packet does not exceed this minimum threshold (i.e., \(W_{i,l}<\epsilon_{t}\)), the packet is assumed not to reach the receiver and is recorded as such. Therefore, we can set \(j=l\) and \(W_{i,l}=0\). Based on the calculation of energy weights of the MCP packets that reached the receiver interface, we can calculate the transmittance of the THz EM wave by, \[T_{MS}=\frac{\sum_{i=1}^{M}W_{i,l}\:exp\Big{\{}\frac{-C_{ext}(D-X_{i,l})}{\mu_ {i,l}^{x}}\Big{\}}}{I_{0}}. \tag{13}\] From our calculations, we noticed that eq. (13) does not converge to a finite value for every simulation. Therefore, we assume the transmittance to be zero when it is divergent and set the simulation to the next Monte-Carlo process. Based on the transmittance measurements at the end of this procedure, we can calculate the specific attenuation \(A_{MS}\) in \((dB/m)\) according to [18], as, \[A_{MS}=\frac{-4.343\:ln(T_{MS})}{D}. \tag{14}\] ## V THz Channel Capacity To evaluate the channel capacity in the THz band, we can decompose the received signal as a sum of the sub-bands, where each sub-band channel is narrow and has a flat-band response [33]. The \(i^{th}\) frequency sub-band is defined as \(\Delta f_{i}=f_{i+1}-f_{i}\) with power \(P_{i}\) under the constraint \(\sum_{i=1}^{N_{B}}P_{i}\leq P_{t}\), where \(N_{B}\) refers to the total number of sub-bands, and \(P_{t}\) stands for the total transmit power. In the \(i^{th}\) narrow-band, the sub-band capacity, \(C_{i}\), is expressed in [33] as, \[C_{i}=\Delta f_{i}\log\Big{(}1+\frac{|h_{LoS}|^{2}P_{i}}{\Delta f_{i}S_{D}(f_ {i})}\Big{)}, \tag{15}\] where \(S_{D}\) is the power spectral density of the additive white Gaussian noise (AWGN) and \(h_{LoS}\) is the frequency-dependent channel response for attenuation due to dust particles including spreading loss and molecular absorption loss due to gas molecules on the LoS signal propagation path. According to [33], \(h_{LoS}\) can be expressed as, \[h_{LoS}(\tau)=\alpha_{LoS}\delta(\tau-\tau_{LoS}). \tag{16}\] where \(\alpha_{LoS}\) refers to the attenuation and \(\tau_{LoS}\) refers to the propagation delay due to dust particles and gas molecules on the signal propagation path and \(\tau_{LoS}=\frac{d_{LoS}}{\delta}\). Where \(d_{LoS}\) is the signal travelling distance through dust, which is \(D\) in this study, because we calculate the transmittance using the MCP packets that reached the fixed receiver interface. 
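Returning briefly to the Monte Carlo procedure of Section IV, the weight update of eq. (12) and the transmittance and attenuation estimates of eqs. (13)-(14) map directly onto a few lines of code. The sketch below is a minimal illustration under stated assumptions: the scattering positions and direction cosines are assumed to have been generated already, and all variable names and numeric values in the toy call are hypothetical.

```python
import numpy as np

def packet_weight_update(w_prev, C_ext, x_new, x_prev, mu_x_prev):
    """Beer-Lambert weight update of eq. (12) for one collision."""
    return w_prev * np.exp(-C_ext * (x_new - x_prev) / mu_x_prev)

def transmittance_and_attenuation(last_weights, last_x, last_mu_x, C_ext, D, I0):
    """Eqs. (13)-(14): sum the residual weights of the packets that reach the
    receiver plane X = D and convert the transmittance to dB/m.

    last_weights, last_x, last_mu_x : arrays holding, for every packet that
    reached the receiver, its weight, position and x-direction cosine at the
    last scattering event before crossing X = D.
    """
    T = np.sum(last_weights * np.exp(-C_ext * (D - last_x) / last_mu_x)) / I0
    A = -4.343 * np.log(T) / D          # specific attenuation in dB/m
    return T, A

# Toy usage with made-up values for three surviving packets out of I0 = 10:
T, A = transmittance_and_attenuation(
    last_weights=np.array([0.9, 0.7, 0.8]),
    last_x=np.array([9.2, 8.7, 9.5]),
    last_mu_x=np.array([0.99, 0.95, 0.97]),
    C_ext=0.05, D=10.0, I0=10.0)
print(T, A)
```

With the dust attenuation from eq. (14) available, the remaining ingredients of the capacity model in eq. (15) are the noise spectral density and the loss transfer functions, which are developed next.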
Also, the power spectral density of AWGN can be expressed as \(S_{D}(\tau)=\frac{n_{0}}{2}\delta(\tau)\) and, in the frequency domain, \(PSD(S_{D}(f))\)= \(\frac{n_{0}}{2}\). Utilizing the Wiener-Khinchin theorem, the frequency-dependent channel response for LoS attenuation can be expressed as [33], \[h_{LoS}(\tau)=|H_{LoS}(f)|\delta(\tau-\tau_{LoS}). \tag{17}\] The free space direct ray or LoS channel transfer function, \(H_{LoS}\), consists of the spreading loss function (\(H_{Spr}\)), the molecular absorption loss function (\(H_{Abs}\)), and scattering loss function due to dust particles (\(H_{Dust}\)), which can be expressed as, \[H_{LoS}(f)=H_{Spr}.H_{Abs}.H_{Dust}e^{-j2\pi f\tau_{LoS}}. \tag{18}\] The free space path loss or the spreading loss (\(PL_{Spr}\)) measures the fraction of power lost by a beam with frequency \(f\) over a distance \(D\) in a vacuum, and it can be expressed according to [34] as, \[PL_{spr}=\Big{(}\frac{4\pi Df}{c}\Big{)}^{2}, \tag{19}\] where \(c\) is the speed of light in the medium. Thus, according to [33] the corresponding channel transfer function for the spreading loss can be expressed as, \[H_{spr}=(PL_{spr})^{-1/2}=\Big{(}\frac{c}{4\pi Df}\Big{)}. \tag{20}\] The molecular absorption loss measures the fraction of power converted to kinetic energy due to molecular vibration when EM waves propagate through molecules in the atmosphere. Thus, when transmitting frequency \(f\) through a homogeneous medium between a transmitter and receiver at a distance \(D\), the molecular absorption loss is obtained with the help of the Beer-Lambert law [34], which is represented as, \[PL_{abs}=e^{k(f)D}, \tag{21}\] where \(k(f)=\sum_{i,g}k_{g}^{i}(f)\) and \(k_{g}^{i}(f)\) is the monochromatic absorption coefficient of the \(i^{th}\) isotopologue of \(g^{th}\) gas at frequency \(f\). When calculating the absorption coefficient, we consider water vapour and nine other gases for Earth and six gases for Mars (see Table I), except for Argon. This allows us to consider the vastly different gas concentrations between the two planets. The monochromatic absorption coefficient for each isotopologue of a particular gas in the Martian and Earth atmosphere at frequency \(f\) is provided in [35], \[k_{g}^{i}(f)=S_{g}^{i}(T)F^{i}(f), \tag{22}\] where \(S_{g}^{i}(T)\) is the line intensity at temperature \(T\) (210K for Mars) referenced to the temperature 296K of the \(i^{th}\) isotopologue of \(g^{th}\) gas, which can be easily calculated using the high-resolution transmission (HITRAN) molecular spectroscopic data. Where, \(F^{i}\) is the spectral line shape function at frequency \(f\). In the lower atmosphere on Earth, pressure broadening of spectral lines dominates the line shape and a Lorentz profile can be assumed as the line shape function and it is given by [35], \[F_{L}^{i}(f)=\frac{1}{\pi}\frac{\gamma(p,T)}{\gamma(p,T)^{2}+[f-(f_{g}^{i}+ \delta(P_{ref})P)]^{2}} \tag{23}\] where \(f_{g}^{i}\) is the resonant frequency for the isotopologue \(i\) of gas \(g\), \(\gamma(P,T)\) is the Lorentzian (pressure-broadened) HWHM for a gas at pressure \(P\) (atm), temperature \(T\) (K), and \(\delta(P_{ref})\) is the pressure shift at reference pressure (\(P_{ref}\)= \(1\ atm\)). 
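The spreading-loss transfer function of eq. (20) and the pressure-broadened absorption loss of eqs. (21)-(23) can be evaluated as in the minimal sketch below. It assumes the per-line parameters (line intensity, resonant frequency, Lorentz half-width and pressure shift) have already been obtained, e.g. from HITRAN; the single line used in the example call is a placeholder, not HITRAN data.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def spreading_loss_transfer(f, D):
    """H_spr of eq. (20): inverse square root of the free-space path loss."""
    return C / (4 * np.pi * D * f)

def lorentz_line_shape(f, f0, gamma, delta_ref, p_atm):
    """Pressure-broadened Lorentz profile of eq. (23)."""
    shifted_centre = f0 + delta_ref * p_atm
    return (gamma / np.pi) / (gamma**2 + (f - shifted_centre)**2)

def molecular_absorption_loss(f, D, lines):
    """PL_abs of eq. (21), with k(f) summed over spectral lines (eq. 22).

    `lines` holds (S, f0, gamma, delta_ref, p_atm) tuples; in practice S, f0,
    gamma and delta_ref come from the HITRAN database, here they are simply
    taken as inputs (the example values below are placeholders).
    """
    k = sum(S * lorentz_line_shape(f, f0, gamma, delta_ref, p)
            for S, f0, gamma, delta_ref, p in lines)
    return np.exp(k * D)

# Placeholder single-line example at 0.24 THz over a 10 m path:
print(spreading_loss_transfer(0.24e12, 10.0),
      molecular_absorption_loss(0.24e12, 10.0, [(1e9, 0.2416e12, 3e9, 0.0, 1.0)]))
```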
Since Doppler-broadening dominates the line shape in low-pressure environments such as Martian environment, a Gaussian profile can be assumed as the line shape function, and it is given by, \[F_{G}^{i}(f)=\sqrt{\frac{\ln 2}{\pi{\alpha_{D}^{i}}^{2}}}\exp\Bigg{(}-\frac{ (f-f_{g}^{i})^{2}\ln 2}{{\alpha_{D}^{i}}^{2}}\Bigg{)}\, \tag{24}\] where \(\alpha_{D}^{i}\) is the Doppler broadening half-width, \[\alpha_{D}^{i}=\frac{f_{g}^{i}}{c}\sqrt{\frac{2N_{A}k_{B}T\ln 2}{M^{i}}}, \tag{25}\] where \(M^{i}\) is the molar mass of isotopologues which can be obtained from the HITRAN database [35], and \(N_{A}\) and \(k_{B}\) are the Avogadro and Boltzmann constants. Thus, according to [33] the corresponding channel transfer function for the molecular absorption loss due to the gas molecules on the atmosphere can be express as, \[H_{abs}=(PL_{Abs})^{-1/2}=e^{-\frac{1}{2}k(f)D}, \tag{26}\] Finally, \(H_{Dust}\) is the transfer function for the attenuation due to dust particles that can be expressed as following the relationship between the transfer functions and the attenuation functions for spreading loss and molecular absorption loss, respectively, as well as utilising the dust attenuation function in eq. (14) in dB, which is represented as, \[H_{dust}=\frac{1}{\sqrt{10^{-0.4343\ln{(T_{MS})}}}}. \tag{27}\] Therefore, substituting the functions derived for \(H_{Spr}\), \(H_{Abs}\), and \(H_{Dust}\), to eq. (18), we can calculate the channel transfer function. Moreover, in this study, we consider one narrow frequency band for each environment, which is 0.22 - 0.24 THz for Earth and 1.64 - 1.67 THz for Mars. Therefore, we can suppose that \(P_{i}=P_{t}\) and the eq. (15) can be rewritten as, \[C=\Delta f\log\Big{(}1+\frac{|h_{LoS}|^{2}P_{t}}{\Delta fS_{D}(f)}\Big{)}. \tag{28}\] ## VI Results and Discussion ### _Transmittance and Attenuation measurements through Monte-Carlo Simulations_ This subsection presents the simulation results for the transmittance and attenuation measurements of the THz link and the generated estimation models for the THz attenuation due to dust particles on the beam propagation path by varying parameters such as the MCP packets, visibility, dust particle number, the distance between transmitter and the receiver, and EM frequency. When simulating data for a targeted parameter, we have kept the other parameters constant to make the interpretation easy (see Table II). Thus, we have chosen a frequency of 1.64 THz as the constant frequency for Mars [26], and 0.24 THz frequency for Earth [36, 37] corresponding to low molecular absorption and high transmission distance. However, it is crucial to consider molecular absorption on Mars, even though it has a thin atmosphere with very low water vapour concentration. Moreover, we have considered 100 dust particles on the beam propagation path for a 10 \(m\) fixed distance between the transmitter and receiver, as explained in section III for simulations on Earth. It is unrealistic to consider the same amount of dust particles for the Mars simulations because of the tiny particle sizes on Mars. Therefore, considering Fig. 3: Simulation measurements of a) the transmittance and b) the attenuation for a THz beam of 0.24 THz and 1.64 THz frequency for Earth and Mars by varying MCP packets from 10 to 10000 while fixing dust particle number (100/10000) on the propagation path, visibility and the distance (10 \(m\)) between transmitter and the receiver. 
the blockage that this dust particle creates and the proportional relationship between blockage area and dust particle radius, we consider 10000 dust particles for Mars corresponding to 100 dust particles on Earth. When selecting the number of MCP packets, we considered 10000 packets in this study. Figure 3 illustrate the simulated data for the transmittance and attenuation measurements using the simulation setup explained in section IV for Earth and Mars environments by varying the number of MCP packets from 10 to 10000, while keeping the other parameters constant. As we can see from the figures, the transmittance measurements for both Earth and Mars environments (see Fig. 3 (a) and (c)) are increasing rapidly when the MCP packets increase at the beginning up to 100. After that, it converges to a particular value corresponding to each environment. Furthermore, attenuation measurements decrease following a power function for both Earth and Mars environments (see Fig. 3 (b) and (d)) and converge approximately to a value of 3.6 \(dB/m\) and 2.8 \(dB/m\), respectively. In addition, the fitted power function for attenuation against the MCP packets (\(N_{MCP}\)) can be expressed as Attenuation (\(dB/m)=2145N_{MCP}^{-2.721}+2.839\) for Earth and Attenuation (\(dB/m)=410N_{MCP}^{-0.6885}+3.727\) for Mars. Next, we investigated the effect of visibility on the transmittance and attenuation measurement for Earth and Mars environments by varying the parameter values from 10 to 10000 (see Fig. 4). Generally, when the visibility increases between the transmitter and the receiver, we will see fewer dust particles on the beam propagation path with a high distance variance between the particles. Therefore, we can expect high transmittance and low attenuation measurements when the visibility increases. According to Fig. 4 (a), the transmittance measurements of the Earth's environment are increasing dramatically, with the visibility and attenuation measurements (see Fig. 4 (b)) decreasing following a power function as expected. Moreover, the attenuation measurements approximately converge to 2.1 \(dB/m\) value with an increase of visibility near 10000 \(m\), which we can consider as clear sky condition. The fitted power function for the attenuation against the visibility (\(V\)) can be expressed as Attenuation (\(dB/m)=63.41V^{-0.9694}+2.105\) for the Earth environment. On the other hand, the transmittance measurements for the Mars environment (see Fig. 4 (c)) show an increasing trend with visibility. However, transmittance measurements for some visibility parameter values diverge from the trend because, according to our simulation process, each MCP packet will randomly select the scatters. Therefore, even if we have low dust density on the beam propagation path with high visibility, the transmittance can be low due to high collision with dust particles. Corresponding to the transmittance measurements, we can see a slight linear decrease in attenuation on Mars (see Fig. 4 (d)) with increased visibility. The fitted linear function can be expressed as Attenuation (\(dB/m)=-4.184\times 10^{-5}V+4.256\). The dust particle density can vary unpredictably with the wind in a dust storm on Earth and Mars. There can be time windows with very low and high dust particles on the beam propagation path, which will be perfect for transmission. 
To investigate the effect of dust particle count, we have measured the transmittance and attenuation for Earth and Mars environments by varying dust particle numbers from 10 (very low) to 10000 (very high). As shown in Fig. 5 (a) and (c), the transmittance measurement drops dramatically to near zero with the increase of dust particles for both environments. This rapid transmittance drop happens due to the high amount of scatters on the THz beam propagation path that each MCP packet should randomly collide. Moreover, the attenuation measurements (see Fig. 5 (b) and (d)) increased rapidly following a power function for both environments. The fitted power function for attenuation against the dust particle number (\(D_{PN}\)) can be expressed as Attenuation (\(dB/m)=0.4423D_{PN}^{0.5799}+1.213\) for Earth and Attenuation (\(dB/m)=7.534D_{PN}^{0.6917}-7.262\) for Mars. Furthermore, as we can notice, attenuation measurements do not converge to a particular value as in previous cases when increasing the number of dust particles on the beam propagation path. Therefore, we \begin{table} \begin{tabular}{|l|l|l|} \hline Parameter & Earth & Mars \\ \hline Frequency & 0.24 THz & 1.64 THz \\ MCP packets & \(10^{4}\) & \(10^{4}\) \\ Dust Density & \(10^{2}\) per 10\(m\) & \(10^{4}\) for 10 \(m\) \\ Dust radius & 1–150 microns & 0.5–4 microns \\ Dust size distribution & log-Normal & log-Normal \\ Antenna height & 50 \(m\) & 50 \(m\) \\ Approximation & Mie & Rayleigh \\ Distance & 1 - 200 \(m\) & 1 - 200 \(m\) \\ Temperature & 288 K & 210 K \\ Surface Pressure & 1013 \(mb\) & 6.1 \(mb\) \\ Surface density & 1.29 \(Kg/m^{3}\) & 0.02 \(Kg/m^{3}\) \\ \hline \end{tabular} \end{table} TABLE II: Channel conditions and simulation settings on Earth and Mars. Fig. 4: Simulation measurements of a) the transmittance and b) the attenuation for a THz beam of 0.24 THz and 1.64 THz frequency for Earth and Mars, respectively, by varying the visibility from 10 to 10000 \(m\) while fixing MCP packets (10000) and the distance (10 \(m\)) between transmitter and the receiver. can expect a communication blackout in a regional/global dust storm situation which will boost the dust particle density in the communication area. As mentioned above, dust density on the beam propagation path can vary significantly on Earth and Mars due to the unpredictable wind, temperature and pressure behaviour. Therefore, to conduct more realistic simulations, we have investigated the effect of distance between the transmitter and the receiver when we have various dust densities that vary with the distance. Dust density usually measures the number of dust particles per unit volume. However, in this study, we define it as the number of dust particles per meter for simplicity because the THz beam is assumed to be cone shape, and its face area is very tiny. This means that if we consider 10 dust particles per meter (10/\(m\)) for 100 \(m\), we assume that 1000 dust particles are uniformly distributed on the beam propagation path. As we can see in Fig. 6 (a), the transmittance measurements on Earth drop dramatically with the distance, and when increasing the dust density, the transmittance measurements reaches near zero rapidly beyond 100 \(m\). However, on Mars (see Fig. 6 (c)), the transmittance measurements decreases slightly compared to Earth. Moreover, we can clearly see that the transmittance measurements for 25/\(m\) dust density are significantly lower than the dust density at 10/\(m\), as expected. 
However, we can not see much difference between the transmittance measurements for 50/\(m\) and 100/\(m\) dust densities up to 150 \(m\). On the other hand, attenuation measurements (see Fig. 6 (b) and (d)) advance with the increase of distance and dust density for both environments corresponding to transmittance measurements. Furthermore, we investigated the impact of frequency on the transmittance and attenuation measurements on Earth and Mars. As illustrated in Fig. 7 (a), the transmittance measurements for Earth's environment decrease following an exponential function with the frequency increase from 0.1 to Fig. 5: Simulation measurements of a) the transmittance and b) the attenuation for a THz beam of 0.24 THz and 1.64 THz frequency for Earth and Mars, respectively, by varying the dust particle number on the beam propagation path from 10 to 10000 while fixing the number of MCP packets (10000) and the distance (10 \(m\)) between transmitter and the receiver. Fig. 6: Simulation measurements of a) the transmittance and b) the attenuation for a THz beam of 0.24 THz and 1.64 THz frequency for Earth and Mars, respectively, by varying the distance between transmitter and the receiver from 1 to 200 \(m\) for different particle number densities of 10/\(m\), 25/\(m\), 50/\(m\) and 100/\(m\) on the beam propagation path while fixing MCP packets (10000). Fig. 7: Simulation for a THz beam by varying the frequency from 0.1 to 10 THz while fixing dust particle number (100) on the beam propagation path, MCP packets (10000), and the distance (10 \(m\)) between the transmitter and the receiver. 4 THz. We were unable to calculate transmittance measurements following our simulation process beyond the 4 THz frequency limit since we are considering the dust particles with a radius of 1 to 150 microns for Earth environment, wavelengths can be comparably low or approximately equal to the dust particles' size after some frequencies threshold, creating more difficulties for data transmission. The corresponding attenuation for the transmittance measurements on Earth increases following a power function that can be fitted as Attenuation (\(dB/m\)) \(=2.277D_{PN}^{2.054}\). On the other hand, the transmittance and attenuation measurements for the Mars environment do not show a particular increase or decrease trend. However, we noticed that the attenuation measurements vary around 4.2 \(dB/m\) with the frequency increase from 0.1 to 10 THz. Finally, we investigated the effect of time-dependent turbulence on the transmittance and the attenuation of the THz signal on Earth and Mars, which corresponds to the 0.24 THz and 1.64 THz frequencies, respectively and the distance between the transmitter and the receiver for 10 \(m\). Here, we compare Earth and Mars simulation scenarios in which we assume that the dust particle number on the beam propagation path will suddenly increase due to the unpredictable behaviour of wind after 5 \(s\). In the first 5 \(s\), we assume that the dust particle number on the beam will vary between 10-20 in the Earth environment and 100-200 in the Mars environment in clear sky conditions. After five seconds, the dust particle number on the beam will increase between 100-200 on Earth and 10000-2000 on Mars due to the sudden wind turbulence in the communication area. As demonstrated in Fig. 8 (a), dust particle number on the beam propagation path is low in the first 5 \(s\) compared to the next 5 \(s\) for both environments. 
Also, the dust particle number is higher for the Martian environment than the Earth environment in the considered time interval. When scrutining the transmittance measurements (see Fig. 8 (b)), we can notice that transmittance drops suddenly after 5 \(s\) for both environments. However, average transmittance measurement values are approximately similar within the first 5 \(s\). Also, the transmittance measurements on Mars after the turbulence are significantly low compared to Earth. Corresponding to the transmittance measurements, attenuation measurements (see Fig. 8 (c)) increases dramatically after 5 \(s\) and are high on Mars, concluding that the turbulence effect on Mars should be investigated thoroughly. ### _Channel Capacity simulation for Earth and Mars Under Dust storm_ This subsection investigates the channel capacity measurement of the THz links considering two scenarios. In the first scenario, we assume that there are time windows when the number of dust particles on the THz beam propagation path drops, creating opportunities to communicate with high data rates for both Earth (0.24 THz) and Mars (1.64 THz) environments. Here we analyse the channel capacity for Earth and Mars environments considering spreading loss and Molecular absorption loss with the THz attenuation due to dust on the beam propagation path. Also, we investigate channel capacity variations in this scenario for different transmitter powers. In the second scenario, we investigate the channel Fig. 8: Comparison of simulations for time-dependent turbulence (turmoil after 5 seconds) corresponding to the measurements of a) particle number count, b) the transmittance, and c) the attenuation (\(dB/m\)) considering Earth and Mars environments. Fig. 9: Measurements of Transmittance and Attenuation of 0.24 THz and 1.64 THz links for Earth and Mars due to the sudden movement of dust particles on the beam propagation path for a fixed transmitter and receiver distance of 1 \(m\). capacity variation with the distance in clear sky and dust storm conditions. In our first model, we assume that the dust density varies randomly for the Earth environment from 100 and 200 and Mars environment from 10000 to 20000 particles corresponding to a 1 \(m\) distance between the transmitter and the receiver. Here we know that considering a 1 \(m\) distance for simulation is unrealistic. However, in this scenario, we need to infer the effect of the sudden dust particle drop for the channel capacity measurements. Therefore, it is adequate to consider a 1 \(m\) distance for the experiment. As we mentioned above, the significant variation of dust particles on Earth and Mars is in its effective radius. On Earth, the average effective radius of a dust particle varies between 1 and 150 microns [18], and on Mars, it varies between 0.5 to 4 microns [20]. Therefore, to measure transmittance/attenuation considering approximately similar beam-blocking areas by dust particles on both Earth and Mars, we should consider 100 times more dust on Mars than on Earth, following the relationship between dust effective radius and area. Moreover, we sampled the dust particle number for each environment every second for 20 \(s\). In addition, we assume there are two-time intervals (t=[7 \(s\),9 \(s\)], t=[15 \(s\),17 \(s\)]) when the dust particle number drops to less than 30 and 300 particles (see Fig. 9) due to the unpredictable behaviour of wind. 
Such time intervals might represent occasions when the wind that causes dust particles to be suspended in the atmosphere falls away to near zero. Furthermore, as we can see from Fig. 9, the relative dust particle density decrease at the interval centred on t=16 \(s\) is much higher than at the interval centred on t=8 \(s\) for both environments. We noticed that corresponding to the low dust density, the transmittance measurement is higher at the interval 15-17 \(s\) than at the 7-9 \(s\) interval, and the attenuation shows the opposite variation to transmittance. In principle, those time intervals represent attractive time windows for communication in both environments because of lower attenuation due to the momentary absence of dust particles in the channel. Figure 10 illustrates the channel capacity variation in the clear sky and dust storm conditions on Earth and Mars for the same dusty scenario. When scrutinising Fig. 10, we noticed that in clear sky conditions, channel capacity on Mars is approximately 1.39 \(\times 10^{12}\)\(bits/s\), and on Earth, its nearly 0.85 \(\times 10^{12}\)\(bits/s\). This shows more than 550 \(GB/s\) difference between the channel capacity measurement in clear sky conditions on Mars and Earth due to much higher molecular absorption on Earth. Also, it is noticeable that dust appears to have a more significant relative effect on channel capacity on Mars than on Earth in a dust storm situation due to the high number of tiny dust particles on the beam propagation path. Moreover, channel capacity measurements in dust storm situation on Earth is lower than in clear sky condition, but the difference is minimal. However, the difference on Mars is relatively enormous, and it is approximately two orders of magnitude less than clear sky conditions. On the other hand, channel capacity measurements in dust storm condition on Mars is higher by approximately five orders of magnitude than in dust storm condition on Earth. In addition, the free space path loss is constant in this scenario because it only depends on the carrier frequency and the distance between the transmitter and the receiver, which are both constant in this simulation process. In Fig. 11, we investigated the channel capacity measurement variations for different transmitter power in discrete time windows considering the free space path loss and the molecular absorption loss effect on the channel. This figure shows that the channel capacity increases as antenna transmitter power increases for both Earth and Mars environments by approximately 5 to 20 \(GB/s\). Also, we noticed that we could have reliable communication links with high channel capacities when communicating in the time windows with low dust densities on Mars. Moreover, the channel capacity measurements show a significant variation on Earth compared to Mars for all transmitted powers. Therefore, these measurements imply that we can reach high channel capacities on Mars than on Earth using lower transmitter power antennas. Fig. 11: Channel capacity measurement variation for the scenario of a sudden drop in the number of dust particles on the beam due to the wind behaviour for different Transmitted powers (1, 5, 10 \(dBm\)) of antenna with considering spreading and molecular absorption (a) on Earth and (b) on Mars. Fig. 
10: Channel capacity variation comparison for the scenario of a sudden drop in the number of dust particles on the beam due to the wind behaviour on Earth and Mars environments with considering spreading loss and molecular absorption loss. In our second and final simulation scenario, we investigated the channel capacity for various distances between the transmitter and the receiver for both Earth and Mars environments, comparing clear sky and dust storm conditions with different dust densities per meter (See Fig. 12). The transmitter power was taken as 10 \(dBm\) in this simulation. Here, we allowed the dust particle count density to vary as a factor of distance to simulate more realistic dust storm conditions. For the Earth's environment, we have considered dust storms that result in 10-20 (very low) and 100-200 (very high) dust particles per meter dust densities on the THz beam propagation path. Similar conditions for the mars environment were considered by comparing the dust particle sizes with Earth. Thus, we assumed that dust storms on Mars would carry 100-200 and 1000-2000 dust particles per meter to the beam propagation path. However, we should have considered 10000-20000 dust particles per meter on Mars. Nevertheless, due to computational difficulties with high dust densities with increasing the distance between the transmitter and the receiver, we are considering above mentioned numbers for channel capacity measurements. Moreover, we have taken account of spreading loss and molecular absorption loss when calculating the channel capacities for each distance. As shown in Fig. 12, the channel capacity decreases gradually for clear sky conditions for both environments showing high channel capacities in the considering distance range. However, the decrement is high on Earth due to high molecular absorption. Moreover, the channel capacity measurements decrease dramatically on Earth with the distance for 100-200/\(m\) dust storm conditions, showing communication black-out beyond 70 \(m\) distance. Also, at 10-20/\(m\) dust storm, channel capacity measurements decrease slowly, showing that this dust particle number on the beam propagation path is not a massive issue for achieving high channel capacities. However, Earth's channel capacity is significantly dropping when compared with the channel capacity decrement on Mars for the 100-200/\(m\) dust particle density. Again, 100-200/\(m\) dust particles on the beam propagation path on Mars do not significantly affect the channel capacity measurement. Therefore, we can neglect the THz link budget degradation due to the small amount of dust on Mars. In addition, when investigating the high dust particle number density effect on the channel capacity on Mars (1000-2000/\(m\)), the channel capacity measurements are computationally difficult when the distance is greater than 60 \(m\). However, we can notice that the channel capacity measurements for Mars are decreasing rapidly with the increase in the distance. Also, we can see that the channel capacities are equal at a distance of approximately 60 \(m\) for clear sky conditions on Earth and the high dust density scenario on Mars. ## VII Conclusion High-speed, reliable communication between devices on Earth and Mars is needed to fulfil future communication requirements. In this study, we investigated the impact of atmospheric dust and dust storms for communication using THz links, utilising a modified Monte Carlo simulation algorithm. 
The calculated transmittance and attenuation measurements are based on Mie and Rayleigh approximations depending on the dust particle sizes and carrier frequency utilised for communication on the two planets. Moreover, we presented a channel capacity model and analysed it for two different time-dependent and distance-dependent scenarios. The Monte-Carlo simulation results show that attenuation measurements decrease for both Earth and Mars environments when the MCP packets and visibility increase. In addition, for both environments, the attenuation increases with higher dust particle number on the beam propagation path and distance between the transmitter and the receiver. We noticed the exact attenuation behaviour with the increased frequency for the Earth's environment. However, the attenuation measurements vary around a constant value for the Mars environment. When scrutinising the channel capacity measurements from the time-dependent scenario, we can conclude that the time windows showing sudden dust particle density drops create the best communication opportunities with high data rates. Also, we noticed that the channel capacity measurements dramatically drop with the increase in distance between the transmitter and the receiver in severe dust storm situations on both Earth (100-200/\(m\)) and Mars (1000-2000/\(m\)) environments, even if we use low molecular absorption frequencies and high transmitter power antennas. However, the impact from the local dust storm is negligible on Mars (100-200/\(m\)) but should be further investigated on Earth (10-20/\(m\)). ## Acknowledgment This publication came from research conducted with the financial support of Science Foundation Ireland (SFI) and the Department of Agriculture, Food and Marine on behalf of the Government of Ireland (Grant Number [16/RC/3835] - VistaMilk), the support of XL Verkot, Finland, and US National Science Foundation (NSF) ECCS-2030272 grant. Fig. 12: Channel capacity measurements by varying the distance between the transmitter and the receiver with and without dust storms situations on Mars and Earth.
Reliable terahertz (THz) links are essential for outdoor point-to-point communication, given the exponential growth of wireless data communication. This study proposes a modified Monte Carlo simulation procedure to predict link attenuation caused by multiple scattering from dust particles along the THz wave propagation path. The scattering model for a beam passing through dust is built using the Mie and Rayleigh approximations for the frequencies corresponding to Earth (0.24 THz) and Mars (1.64 THz), respectively. Simulation results are compared while considering parameters such as the number of Monte Carlo photon (MCP) packets, visibility, the density of dust particles on the beam, frequency, and the distance between the transmitter and the receiver. Furthermore, a channel capacity model accounting for THz link attenuation due to dust storms is proposed. Earth and Mars
2310.00119
Fewshot learning on global multimodal embeddings for earth observation tasks
In this work we pretrain a CLIP/ViT based model using three different modalities of satellite imagery across five AOIs covering over ~10\% of Earth's total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar amplitude and interferometric coherence. This model uses $\sim 250$ M parameters. Then, we use the embeddings produced for each modality with a classical machine learning method to attempt different downstream tasks for earth observation related to vegetation, built up surface, croplands and permanent water. We consistently show how we reduce the need for labeled data by 99\%, so that with ~200-500 randomly selected labeled examples (around 4K-10K km$^2$) we reach performance levels analogous to those achieved with the full labeled datasets (about 150K image chips or 3M km$^2$ in each area of interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to think that the model has captured significant earth features useful in a wide variety of scenarios. To enhance our model's usability in practice, its architecture allows inference in contexts with missing modalities and even missing channels within each modality. Additionally, we visually show that this embedding space, obtained with no labels, is sensible to the different earth features represented by the labelled datasets we selected.
Matt Allen, Francisco Dorr, Joseph A. Gallego-Mejia, Laura Martínez-Ferrer, Anna Jungbluth, Freddie Kalaitzis, Raúl Ramos-Pollán
2023-09-29T20:15:52
http://arxiv.org/abs/2310.00119v2
# Fewshot learning on global multimodal embeddings ###### Abstract In this work we pretrain a CLIP/ViT based model using three different modalities of satellite imagery across five AOIs covering over 10% of Earth's total landmass, namely Sentinel 2 RGB optical imagery, Sentinel 1 SAR radar amplitude and interferometric coherence. This model uses \(\sim 250\) M parameters. Then, we use the embeddings produced for each modality with a classical machine learning method to attempt different downstream tasks for earth observation related to vegetation, built up surface, croplands and permanent water. We consistently show how we reduce the need for labeled data by 99%, so that with 200-500 randomly selected labeled examples (around 4K-10K km\({}^{2}\)) we reach performance levels analogous to those achieved with the full labeled datasets (about 150K image chips or 3M km\({}^{2}\) in each area of interest - AOI) on all modalities, AOIs and downstream tasks. This leads us to think that the model has captured significant earth features useful in a wide variety of scenarios. To enhance our model's usability in practice, its architecture allows inference in contexts with missing modalities and even missing channels within each modality. Additionally, we visually show that this embedding space, obtained with no labels, is sensible to the different earth features represented by the labelled datasets we selected. ## 1 Introduction Earth Observation (EO) has made remarkable progress with the rise of deep learning (DL) methods in recent years, and this potential is fueled by the ever increasing availability of satellite imagery [1]. According to the UCS Satellite Database1, as of May 2022 there were 470 optical satellites in orbit, 102 radar satellites and 62 tagged as producing some form of imaging (hyperspectral, multispectral), among others. Within ESA's Sentinel missions alone, 80 PB of user-level data were downloaded during 2021 [2]. However, it is well known that DL methods are overwhelmingly hungry for labeled data, and one of the main hurdles to effectively exploit DL methods in EO is its endemic scarcity [3]. Even if new EO annotated datasets are published regularly, the time and effort involved cannot keep up with the amount of new data produced by present and future orbiting missions. It is in this context that methods that can significantly reduce the need for labeled data become valuable. See Section 2. CLIP [4] is an SSL method that constrastively learns how to align the representations of image/text pairs collected from the Internet, allowing it to deal with different modalities of the same reality within the same embedding space for a variety multi-modal applications including detection, captioning, VQA and conditional image generation among others. CLIP's architecture, heavily based on Vision Transformers (ViT) [5], has been applied to merge multimodal representations beyond text and in different domains [6], [7], [8]. CLIP like models are also starting to appear in satellite imagery under different scenarios for temporal and multi-spectral imagery [9], temporal multimodal pixel wise [10] or a multispectral contrastive model for Landsat imagery [11]. In this work we built a three tower CLIP architecture, feed them with Sentinel 2 RGB optical imagery, Sentinel 1 amplitude and Sentinel 1 interferometric coherence, and use the produced embeddings in several downstream classification tasks, representing different earth features. 
We show how, in this representation space, a small fraction of data is enough to obtain full-dataset level performance, reducing by two orders of magnitude the need for labeled data. This paper is structured as follows. Section 2 discusses related previous works. Section 3 describes the Areas of Interest (AOI) used, the input imagery and downstream labels. Section 4 describes the SSL architecture and training procedure, together with the downstream task. Section 5 shows results and visualization and in Section 6 we draw some conclusions. ## 2 Previous works A number of approaches have been developed to address the labeled data scarcity challenge, including a variety of methods under self supervised learning (SSL) [3] and weakly supervised methods [12], among others, more often that not blended into foundation models [13]. Weakly supervised methods consider a range of scenarios where labels are noisy, incomplete, inexact or inaccurate and have also been applied in Earth Observation [14]. For instance, the teams on the data fusion contest [15] attempt to produce fine grained semantic segmentation maps for land cover when only low resolution reference data is available. Or also, in [16] a transfer learning method is used to pre-train a model over a region with large label density, to finetune it somewhere else with very few labels. The success of foundation models in language tasks is still hard to translate to earth observation scenarios [17], but there are convincing works pre-training models with large amounts of geospatial data with the expectation to be useful in a wide variety of downstream tasks, see [18], [19] or [20]. Our work contributes in such direction by explicitly considering multimodality in the pretraining step while allowing downstream applications to use function even if not all modalities or channels are available. Given the variability of EO data across time and geographical locations, we believe this is a key step to enhance the practical applicability of general pretrained models in EO. ## 3 Data and downstream tasks Input modalitiesWe use three modalities for input data, taking observations during the first three months of 2020, obtained from the Sentinel-1 and Sentinel-2 ESA missions. Sentinel-1 is a Synthetic Aperture Radar (SAR) sensor, for which we use amplitude and coherence, whereas Sentinel-2 is an optical satellite. The intuition here is that both missions complement each other, offering different perspectives on the same earth features which are necessarily correlated. For each AOI (see below) we built a grid containing tiles (image chips) of size 448m \(\times\) 448m. For Sentinel-2 optical data (s2rgbm) we use only the three RGB channels which we average per month (thus, 9 channels). For Sentinel-1 SAR amplitude (s1grdm) we use the vv and vh polarizations, plus their logarithmic difference, taking also their average per month (thus, 9 channels). Both Sentinel-2 and Sentinel-1 amplitude were tiled from Google Earth Engine using geetiles2 which provides a 10m resolution and thus each chip is 448x448 pixels. We obtained Sentinel-1 interferometric coherence (gunv) from ARIA Sentinel-1 Geocoded Unwrapped Interferograms database [21] as available through Alaska's Satellite Facility3, using sartiles4. Since interferometric coherence is built using pairs of Sentinel-1 observations we selected the pairs available whose second observation was within 2020Q1 and the first one at most 48 days before. 
Thus, we have a potentially variable number of channels in each tile, depending on the number of interferometric pairs which could be formed. Its resolution is around 90m per pixel, and we upsample the image chips to match the 448x448 pixel size of s1grdm and s2rgbm. Footnote 2: [https://github.com/rramosp/geetiles](https://github.com/rramosp/geetiles) Footnote 3: [https://asf.alaska.edu/data-sets/derived-data-sets/sentinel-1-interferograms/](https://asf.alaska.edu/data-sets/derived-data-sets/sentinel-1-interferograms/) Footnote 4: [https://github.com/rramosp/sartiles](https://github.com/rramosp/sartiles) Footnote 5: [https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD44B](https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD44B) Footnote 6: [https://ghsl.jrc.ec.europa.eu/download.php?ds=bu](https://ghsl.jrc.ec.europa.eu/download.php?ds=bu) **Areas of Interest** We defined five AOIs covering regions in Conterminous United States (CONUS, 167K image chips), South America (83K chips), Pakistan and India (Pakin, 147K chips), China (285K chips) and the Middle East (163K chips), as illustrated in Fig. 1. The AOIs' extent is determined by the 2020 coverage of the gunw dataset. Observe that we make geographically aware splits into train (60%), validation (20%) and test (20%) to avoid, as much as possible, data leakage from contiguous image chips being in different splits. **Downstream tasks** We selected four use cases with global label coverage so that we could experiment on ablations with an increasing number of available labels. **Vegetation estimation**: we used the MOD44B.006 Terra vegetation continuous fields yearly dataset for 2020, focusing on the tree percentage estimation at 250m per pixel5. **Built Up Surface**, the Global Human Settlement Layer Built-Up surface dataset from the Joint Research Center of the European Commission6 for 2020 at 100m per pixel. **Croplands**, the ESA World Cover 2020 class representing croplands [22] at a 10m/pixel resolution. **Permanent water**, the ESA World Cover 2020 class representing permanent water bodies [22] at a 10m/pixel resolution. The JRC dataset was downloaded from their site and tiled using sartiles, whereas the rest were downloaded and tiled from Google Earth Engine using geetiles. For each dataset we define a binary classification task to predict the mean value per chip, thresholded on each AOI so that we get two balanced classes. Within the same task, this threshold is usually different for each AOI as they have different distributions, as shown in Fig. 2. So, for instance, for **vegetation percentage** we set forth to predict whether an image chip has high or low vegetation, for **builtup** surface we predict whether a chip has high or low built surface, etc. Figure 1: Areas of Interest (AOIs) used in this study. Bands indicate the splits for train (yellow), validation (blue) and test (pink). In total there are 167K image chips for CONUS, 163K chips for Middle East, 147K chips for Pakistan-India, 285K chips for China and 83K chips for South America, which aggregates to 845K chips covering a surface of 16.9M km\({}^{2}\).
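The per-AOI thresholding just described is straightforward to implement; the following minimal sketch (with hypothetical array and column names) illustrates how balanced binary labels can be derived from the chip-level mean values of one task.

```python
import pandas as pd

def balanced_binary_labels(df, value_col="chip_mean", aoi_col="aoi"):
    """Threshold each AOI at its own median so the two classes are balanced.

    df : DataFrame with one row per image chip, holding the per-chip mean
    value of the task (e.g. tree percentage) and the AOI the chip belongs to.
    Column names here are hypothetical.
    """
    thresholds = df.groupby(aoi_col)[value_col].transform("median")
    return (df[value_col] > thresholds).astype(int)

# Toy example with two AOIs:
chips = pd.DataFrame({
    "aoi": ["conus"] * 4 + ["china"] * 4,
    "chip_mean": [0.1, 0.4, 0.6, 0.9, 5.0, 7.0, 9.0, 11.0],
})
chips["label"] = balanced_binary_labels(chips)
print(chips)
```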
## 4 Method ### Model pretraining We use a Self Supervised Learning approach using Visual Transformers (ViT) and a CLIP based loss architecture, where we have one ViT tower per input modality (s1grdm, s2rgbm and gunw). This architecture produces an embedding for each input image and modality and pushes embeddings of different modalities on the same chip to be similar, and others to be different. See Figure 3. In practice we are bound to occasional unavailability of particular channels: sometimes vv or vh are not available on Sentinel 1, clouds occasionally hinder Sentinel 2, and the number of interferometric pairs formed for SAR coherence is not always the same. To cope with this, each of the ViT accepts a single channel structure, and we select randomly one single channel of each input modality to feed each ViT at each inference request or training step. Besides enhancing the usability of the pretrained model for downstream tasks to noisy or missing data scenarios, we observed that this setup produces more robust outputs, probably due to the input noise induced by this procedure being compensated by correlations between modalities. Since we tried different ViT configurations and embedding sizes, we use the train data split for training the model and the validation split to select the best self supervised model according to the CLIP loss. Test data is only using to measure the results presented here. Our final ViT configuration produces an embedding of size 768 for each input image chip in each modality, containing 259M trainable parameters. Train data amounts to about 500K image chips, equivalent to some 10M km\({}^{2}\) and taking about 100 hours of training time on an Nvidia DGX architecture with 8 A100 GPUs, 250 CPU cores and 2TB RAM. ### Downstream tasks For each of the five AOIs and each modality we train a Random Forest to binary classify whether the mean value of each measure (vegetation, built surface, croplands and permanent water) is below or above the median. We use the embeddings representation for each modality as produced by the pretrained model which have a size of 768 components. We create an additional modality by concatenating all three modalities and, thus, produce a vector of size 2304 for each image chip. We do an ablation on the size of the sample taken from the train dataset with 5, 10, 100, 250, 500, 1000, 5000, 20000 and the full dataset. The sample is random uniform across all chips on the training split within each AOI. We therefore train (3+1) modalities \(\times\) 5 AOIs \(\times\) 9 train dataset sizes. We use validation data to select the overall best Random Forest configuration (50 estimators and 7 max depth), and then we measure the accuracy on the test data. This is the only time that test data is used. Observe that train data is both used to train the pretrained model and the downstream Random Forest. We repeat this procedure 10 times and report the mean value of each experiment set. Figure 2: Distribution of labels on each downstream task and AOI shown as a quantile plot. Observe that most tiles do not contain built surface or permanent waters ## 5 Results ### Ablations Fig. 4 shows the overall accuracy results for our experimentation sets. Recall that we are doing a balanced binary classification task, with a different threshold in each AOI to ensure this balance, thus, reporting accuracy is straightforward. Observe that different tasks have different degrees of difficulty for different modalities. 
## 5 Results

### Ablations

Fig. 4 shows the overall accuracy results for our experiment sets. Recall that we are doing a balanced binary classification task, with a different threshold in each AOI to ensure this balance; thus, reporting accuracy is straightforward. Observe that different tasks have different degrees of difficulty for different modalities. It is interesting to see that, in general, s1grdm embeddings perform better than the rest. Also, concatenating the modalities' embeddings (modsconcat in Fig. 4) seems to marginally improve overall results. We take as reference the accuracy obtained when using the full dataset, and measure how far we are from it in each experiment. Black dots in Fig. 4 show when the experiment produces an accuracy of at least 95% of that obtained with the full labeled dataset. This always happens with fewer than 500 image chips, and most of the time with fewer than 250. Considering an average training dataset size of 150K, this means that with only 0.3% of the train data (3 per thousand) we can attain 95% of the top performance. The standard deviation was <0.05 when we used 50 or fewer shots, and <0.01 with larger datasets, so we did not include it in Fig. 4 for clarity.

### Embeddings

Finally, Fig. 5 shows a 2D TSNE reduction of the embeddings obtained for each modality (columns), colored by the log mean value of each downstream task (rows) before thresholding for binary classification. Observe that the labels are used to compute neither the embeddings nor the 2D TSNE positions, and we still get clear positional patterns where similar values of the downstream tasks cluster together. We find this significant, as it illustrates how the embeddings do capture terrain features which are useful on different downstream tasks. Although somewhat subtle, observe as well how, for the same task, some modalities separate the label values a bit better than others. Fig. 6 shows a couple of example images in the three modalities.

Figure 3: Architecture of our CLIP-based model with three input modalities and a separate ViT encoder for each modality. Similarity is measured for each pair of modalities and then averaged. Like the original CLIP, within the same batch, our loss encourages different modalities of the same location to have similar encodings, and those from other locations to be different. Observe as well how our encoders are single channel, operating on whatever channel was randomly selected for each modality.

## 6 Conclusion

This work showed the effectiveness of multimodal models pretrained with large amounts of planetary data to reduce the number of required labeled examples in different downstream earth observation classification tasks. The reduction of the required amount of labeled data reaches the order of 99%. We also ran our experiments with smaller pretrained ViT architectures with 11M to 100M parameters and embeddings of size 192 and 364. Although the combined CLIP loss is usually similar to the one obtained with our 250M parameter / 768 encoding size model, the performance of the downstream tasks is degraded, even if it preserves the 95% relative performance as described earlier. We also believe that multimodality settings such as this one allow models to leverage the complementarity or correlations of the same earth features as observed by different sensors. This leads us to plan future work with planetary-wide datasets and larger models.

## 7 Acknowledgements

This work has been enabled by Frontier Development Lab Europe ([https://fdleurope.org](https://fdleurope.org)), a public/private partnership between the European Space Agency (ESA), Trillium Technologies, the University of Oxford and leaders in commercial AI supported by Google Cloud and Nvidia, developing open science for all Humankind. L.M-F.
was supported by the European Research Council (ERC) Synergy Grant "Understanding and Modelling the Earth System with Machine Learning (USMILE)" under the Horizon 2020 research and innovation programme (Grant agreement No. 855187). M. J. A. was supported by the UKRI Centre for Doctoral Training in Application of Artificial Intelligence to the study of Environmental Risks [EP/S022961/1], and additionally by Trinity Hall, Cambridge. We are also indebted to Nicolas Longepe, Carlos Lopez-Martinez, Fabio A. Gonzalez Osorio, Samuel Brancroft, Emma Hatton, Alison Lowndes, Alistair Francis, Ioanna Bouri and the rest of the reviewers during the 2023 FDL-Europe sprint.

Figure 4: Accuracy on binary classification for each downstream task, AOI and modality. The x-axis is non-linear and represents the number of image chip embeddings used to train a model. Dots signal the minimum number of training image chips with which 95% of the top accuracy for each task is achieved. Observe that in the vast majority of cases, with fewer than 250 labeled image chips we can achieve at least 95% of the accuracy obtained with the full training dataset of labeled images. Training dataset sizes range from 50K in South America to 171K in China (60% of the total image chips in each AOI). Accuracy is measured on the full test split (20% of data).

Figure 5: Embeddings for each AOI and modality projected to a TSNE 2D space for visualization and colored with each downstream task label. Each dot corresponds to one image chip projected to this space. These embeddings are trained and computed unsupervisedly with no label information, and yet they are sensitive to different land features as represented by each downstream task. The scale is logarithmic to better appreciate the value ranges of the labels.

Figure 6: Location in the s1grdm 2D TSNE embedding space for CONUS of two sample image chips with different vegetation percentage values. The different colouring from Fig. 5 simply signals a different experimental run.
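For readers who want to reproduce a visualization in the spirit of Figs. 5-6, the following sketch projects embeddings to 2D with TSNE and colors them by a per-chip label mean. The arrays here are random placeholders standing in for the real 768-dimensional embeddings and label values; nothing in this snippet comes from the released code.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 768))   # placeholder ViT embeddings
values = rng.uniform(0, 100, size=1000)     # placeholder per-chip label means

xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(embeddings)

plt.scatter(xy[:, 0], xy[:, 1], c=np.log1p(values), s=4, cmap="viridis")
plt.colorbar(label="log(1 + mean label value)")   # log scale, as in Fig. 5
plt.title("2D TSNE projection of image-chip embeddings")
plt.show()
```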
In this work we pretrain a CLIP/ViT-based model on three different modalities of satellite imagery across five AOIs covering about 10% of the Earth, comprising Sentinel 2 RGB imagery and Sentinel 1 SAR amplitude and interferometric coherence. The model contains roughly 250 million parameters. We then use the embeddings produced for each modality, together with classical machine learning methods, to attempt different downstream Earth observation tasks concerning vegetation, built-up surface, croplands and permanent water. We are able to reduce the need for labeled data by 99%, so that with only 200-500 randomly selected labeled examples (about 4K-10K km$^2$) we reach performance levels close to those obtained with the full labeled dataset (about 150K image chips per AOI).
2303.00021
Quantum equilibration and measurements -- bounds on speeds, Lyapunov exponents, and transport coefficients obtained from the uncertainty relations and their comparison with experimental data
We discuss our recent study of local quantum mechanical uncertainty relations in quantum many body systems. These lead to fundamental bounds for quantities such as the speed, acceleration, relaxation times, spatial gradients and the Lyapunov exponents. We additionally obtain bounds on various transport coefficients like the viscosity, the diffusion constant, and the thermal conductivity. Some of these bounds are related to earlier conjectures, such as the bound on chaos by Maldacena, Shenker and Stanford while others are new. Our approach is a direct way of obtaining exact bounds in fairly general settings. We employ uncertainty relations for local quantities from which we strip off irrelevant terms as much as possible, thereby removing non-local terms. To gauge the utility of our bounds, we briefly compare their numerical values with typical values available from experimental data. In various cases, approximate simplified variants of the bounds that we obtain can become fairly tight, i.e., comparable to experimental values. These considerations lead to a minimal time for thermal equilibrium to be achieved. Building on a conjectured relation between quantum measurements and equilibration, our bounds, far more speculatively, suggest a minimal time scale for measurements to stabilize to equilibrium values.
Saurish Chakrabarty, Zohar Nussinov
2023-02-28T19:00:27
http://arxiv.org/abs/2303.00021v1
Quantum equilibration and measurements - bounds on speeds, Lyapunov exponents, and transport coefficients obtained from the uncertainty relations and their comparison with experimental data ###### Abstract We discuss our recent study of _local quantum mechanical uncertainty relations_ in quantum many body systems. These lead to fundamental bounds for quantities such as the speed, acceleration, relaxation times, spatial gradients and the Lyapunov exponents. We additionally obtain bounds on various transport coefficients like the viscosity, the diffusion constant, and the thermal conductivity. Some of these bounds are related to earlier conjectures, such as the bound on chaos by Maldacena, Shenker and Stanford while others are new. Our approach is a direct way of obtaining exact bounds in fairly general settings. We employ uncertainty relations for local quantities from which we strip off irrelevant terms as much as possible, thereby removing non-local terms. To gauge the utility of our bounds, we briefly compare their numerical values with typical values available from experimental data. In various cases, approximate simplified variants of the bounds that we obtain can become fairly tight, \(i.e.\), comparable to experimental values. These considerations lead to a minimal time for thermal equilibrium to be achieved. Building on a _conjectured relation between quantum measurements and equilibration_, our bounds, far more speculatively, suggest a minimal time scale for measurements to stabilize to equilibrium values. ## 1 Introduction In this work, we summarize our recent findings discussed in Refs. [12, 10] and briefly compare rigorous bounds on physical quantities that we obtained using our approach with experimental data. A large number of conjectured bounds on physical quantities have been advanced. These include an upper bound on the Lyapunov exponent [8], a lower bound on various lifetimes and relaxation rates [1, 5, 9, 11], a lower bound on the viscosity [4, 11, 17, 19], a lower bound on the ratio of shear viscosity and entropy density [6], and many others. It is notable that early works by Eyring [2, 3] and other pioneers on chemical reaction rates and intuitive proposed extensions implicitly suggest similar inequalities (although these have not been proposed as fundamental bounds). Our primary goal is to rigorously derive such bounds in broad settings using local variants of the quantum mechanical uncertainty relations. ## 2 Bounds from local uncertainty relations in many body systems We consider a macroscopic system \(\Lambda\) of \(N_{\Lambda}\) particles, with a density matrix \(\rho_{\Lambda}\), whose dynamics is governed by the time independent Hamiltonian \(H_{\Lambda}\). The rate of change of an arbitrary local operator \(Q_{i}^{H}\) in the Heisenberg picture is \(\frac{dQ_{i}^{H}}{dt}=\frac{i}{\hbar}\left[H_{\Lambda},Q_{i}^{H}\right]\). The subscript \(i\) can be thought of as a particle index. We note that we can replace \(H_{\Lambda}\) in the above expression by the local Heisenberg picture Hamiltonian \(\tilde{H}_{i}^{H}\) which represents only the portion of \(H_{\Lambda}\) containing terms that do not commute with our chosen local operator \(Q_{i}^{H}\). With this, \(\frac{dQ_{i}^{H}}{dt}=\frac{i}{\hbar}\left[\tilde{H}_{i}^{H},Q_{i}^{H}\right]\). 
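As a sanity check of this reduction from \(H_{\Lambda}\) to the local \(\tilde{H}_{i}^{H}\), the toy two-spin example below (our own illustration, not taken from the paper) verifies numerically that the part of the Hamiltonian commuting with the chosen local operator drops out of the commutator.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy two-site Hamiltonian: a transverse field on site 1 plus a term on site 2
H_noncommuting = np.kron(sx, I2)   # does not commute with sz on site 1
H_commuting = np.kron(I2, sz)      # commutes with sz on site 1
H_full = H_noncommuting + H_commuting

Q = np.kron(sz, I2)                # local operator on site 1

comm_full = H_full @ Q - Q @ H_full
comm_local = H_noncommuting @ Q - Q @ H_noncommuting

# Only the non-commuting piece of H contributes to dQ/dt = (i/hbar)[H, Q]
print("[H_full, Q] equals [H_local, Q]:", np.allclose(comm_full, comm_local))
```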
Next, we use the textbook type quantum uncertainty relation which is trivially provable to be valid \(\left(\text{via, }e.g.,\text{ the use of Cauchy-Schwarz inequalities for Hilbert-Schmidt (trace) type inner products satisfying the inner product positive semi-definite property \(\left(\mathsf{Tr}(\rho_{A}A^{\dagger}A)\geq 0\right)\) associated with the density matrices \(\rho_{A}\) providing expectation values\(\left)\) for general mixed states, \(\sigma_{A}\ \sigma_{B}\geq\frac{1}{2}\left|\left\langle\left[A,B\right]\right\rangle \right|\). Here, \(A\) and \(B\) are any two operators and \(\sigma_{A}^{2}=\left\langle\left(A-\left\langle A\right\rangle\right)^{2}\right\rangle\), \(\left\langle A\right\rangle\equiv\mathrm{Tr}\left(\rho_{A}A\right)\). Using this, \(\left|\left\langle\frac{dQ_{i}^{H}}{dt}\right\rangle\right|\leq\frac{2}{\hbar} \sigma_{\tilde{H}_{i}^{H}}\sigma_{Q_{i}^{H}}\). Now we focus on the value of \(\sigma_{\tilde{H}_{i}^{H}}^{2}\) when averaged over the entire system and consider the particular case of \(\rho_{A}\) defining a macroscopic thermal system at a temperature \(T\) for which the variances may be evaluated. For a translationally invariant system in thermal equilibrium, the variance \(\left(\sigma_{\tilde{H}_{i}^{H}}\right)^{2}\equiv k_{B}T^{2}C_{v,i}\) (defining an effective local heat capacity \(C_{v,i}\)) assumes the same value of each \(i\). (The energy variance of the full many body Hamiltonian \(H_{A}\) is given by \(k_{B}T^{2}C_{v}^{\left(A\right)}\) with \(C_{v}^{\left(A\right)}\) the heat capacity of the global system \(\Lambda\).) Putting everything together, \[\overline{\left(\frac{\left\langle\frac{dQ^{H}}{dt}\right\rangle^{2}}{\sigma_ {Q^{H}}^{2}}\right)}\leq\frac{4k_{B}T^{2}C_{v,i}}{\hbar^{2}}, \tag{1}\] where \(\overline{X}\equiv\frac{1}{N_{A}}\sum\limits_{i=1}^{N_{A}}X_{i}\). Even though the right hand side of Eq. 1 is independent of the spatial index \(i\), we have kept it to underscore that \(C_{v,i}\) is an effective _local_ heat capacity. ### Upper bound on the relaxation rate (Lower bound on the relaxation time) The left hand side of Eq. 1 is, dimensionally, the square of the relaxation rate associated with the operator \(Q_{i}^{H}\). This leads to a bound on the relaxation rate, \[\tau_{Q}^{-1}\leq\frac{2T\sqrt{k_{B}C_{v,i}}}{\hbar}. \tag{2}\] At high temperatures, when the equipartition theorem applies, \(i.e.\), \(C_{v,i}=\mathcal{O}(k_{B})\), this inequality becomes, \(\tau_{Q}^{-1}\leq\mathcal{O}\left(2k_{B}T/\hbar\right)\), implying that, \(\tau_{Q}\geq\mathcal{O}\left(\hbar/2k_{B}T\right)\). ### Upper bound on particle speeds and lower bounds on particle displacements Choosing the operator \(Q_{i}^{H}\) in the above analysis to be the \(\alpha^{\mathrm{th}}\) Euclidean component of the displacement of a particle in the system, we get, \(\overline{\left(\left\langle\frac{dr_{\alpha}^{H}}{dt}\right\rangle^{2}\left/ \sigma_{r_{\alpha}^{H}}^{2}\right\rangle}\leq\frac{4k_{B}T^{2}C_{v,i}}{\hbar^{ 2}}\). Here, \(\tilde{H}_{i}^{H}=\frac{\left(p_{i\alpha}^{H}\right)^{2}}{2m}\), implying that if equipartition holds (at high temperatures), \(C_{v,i}=k_{B}/2\). If in addition, we assume that the fluctuation of the particle positions is slowly varying, \(i.e.\), all the particles have similar values of \(\sigma_{r_{\alpha}^{H}}\), then, \[\sqrt{\overline{\left\langle\frac{dr_{\alpha}^{H}}{dt}\right\rangle^{2}}}\leq \frac{\sqrt{2}k_{B}T\sigma_{r_{B}^{H}}}{\hbar}. 
\tag{3}\] A related bound for the expectation value of the square of the velocity components can also be obtained using a similar analysis [12]. Thus, at high temperatures, \(\overline{\left\langle\left(\frac{dr_{\alpha}^{H}}{dt}\right)^{2}\right\rangle}\leq\frac{2\left(k_{B}T\right)^{2}\sigma_{r_{\alpha}^{H}}^{2}}{\hbar^{2}}\). The advantage of this relation is that in the classical limit, the left hand side takes the value \(\frac{k_{B}T}{m}\), implying that the fluctuation of each component of a particle's position is bounded from below, \[\sigma_{r_{\alpha}^{H}}^{2}\geq\frac{\hbar^{2}}{2mk_{B}T}=\frac{\lambda_{T}^{2}}{4\pi}, \tag{4}\] \(\lambda_{T}\) being the thermal de Broglie wavelength. Other bounds that can be obtained using similar analysis are summarized in Table 1. These have been simplified using semi-classical and other arguments in order to obtain expressions devoid of specific system details.

## 3 Quantum measurements and equilibration

In Refs. [12, 10], the Eigenstate Thermalization Hypothesis, associated entropy maximization, and other considerations were applied to the measurement problem. Here, the interactions \(H_{\sf device-Q}\) between a measuring device and a local microscopic quantity \(Q\) being measured were included in the full system Hamiltonian \(H_{\Lambda}\). It was illustrated that a time average of \(Q\) (over its equilibration time set by \(\tau_{Q}\)) is given by _eigenstate expectation values_ when the interactions in \(H_{\sf device-Q}\) are appreciable. That is, inasmuch as local measurements of \(Q\) are concerned [12, 10], \[\rho_{\sf collapse}``="\rho_{\sf equil.}, \tag{5}\] where \(\rho_{\sf collapse}\) is the density matrix associated with this short time averaged measurement and \(\rho_{\sf equil.}\) emphasizes that the latter short time average may be replaced by an average with the density matrix of the equilibrated system that includes the measurement device and the typical microscopic quantity \(Q\) being measured. Here, "\(=\)" highlights that this equality and density matrix are not associated with a bona fide "collapse" to an eigenstate of \(Q\) but rather with a time average over an interval which can be exceedingly short for a small local observable \(Q\) (see Table 1), for which equilibration may indeed typically be very rapid. Ref. [13] more recently raised a conjecture similar to the one of Eq. (5) that we earlier proposed in Refs. [12, 10].

## 4 Conclusions

Our local quantum uncertainty based bounds on the relaxation times in equilibrated quantum systems [12, 10] are intimately related to conjectured Matsubara like Planckian time scales [18] and do not hinge on the Lieb-Robinson [7] and related bounds [14] on the speed at which information may spread. These bounds may further relate to possible fundamental limits on measurement and equilibration times (a conjectured connection between measurement and equilibration was briefly reviewed). Our lower bound on the shear viscosity is closely connected to proposed bounds on the viscosity to entropy density ratio [6], and other viscosity bounds [19, 17, 16]. Our upper bound on the shear viscosity in equilibrated systems, which follows from the bound on the diffusion constant when the Stokes-Einstein relation applies, is, like others reviewed here (e.g., those on general spatial gradients of general functions), new [12].
When applied to various observables, our bound on the Lyapunov exponent is slightly tighter than the celebrated conjectured chaos bound of Ref. [8]. Furthermore, our derivation uses a definition of the Lyapunov exponent similar to that in the classical arena, which does not rely on the use of regularized Out of Time Ordered Correlators (OTOC). When contrasted with experimental data for commonplace systems such as water and aluminum, our simplified bounds are relatively tight (see Table 1 and [12], and further comparisons for the viscosity bound in [4, 17]). A comprehensive study further contrasting some of our other bounds (both exact and their approximate simplified variants) with experimental data will be illuminating.
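To give a rough feel for the numbers entering such comparisons, the snippet below evaluates two of the simplified high-temperature bounds, \(\tau_{Q}\geq\hbar/(2k_{B}T)\) and \(\sigma_{r_{\alpha}^{H}}^{2}\geq\hbar^{2}/(2mk_{B}T)\), for a water molecule at room temperature. The choice of system and temperature is our own illustrative assumption.

```python
from scipy.constants import hbar, k as k_B, atomic_mass

T = 300.0                      # temperature in kelvin
m = 18.0 * atomic_mass         # approximate mass of a water molecule in kg

# Simplified lower bound on relaxation times ("Planckian" time scale)
tau_min = hbar / (2.0 * k_B * T)

# Lower bound on the fluctuation of one position component
sigma_min = (hbar**2 / (2.0 * m * k_B * T)) ** 0.5

print(f"tau_Q  >= {tau_min:.2e} s")   # about 1.3e-14 s at 300 K
print(f"sigma  >= {sigma_min:.2e} m") # a few picometres
```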
We discuss our recent study of local quantum mechanical uncertainty relations in quantum many-body systems. These lead to fundamental bounds on quantities such as the speed, acceleration, relaxation times, spatial gradients, and the Lyapunov exponents. We additionally obtain bounds on various transport coefficients such as the viscosity, the diffusion constant, and the thermal conductivity. Some of these bounds are related to earlier conjectures, such as the bound on chaos by Maldacena, Shenker and Stanford, while others are new. Our approach is a direct way of obtaining exact bounds in fairly general settings. We employ uncertainty relations for local quantities, stripping off irrelevant terms as much as possible and thereby removing non-local terms. To gauge the utility of these bounds, we briefly compare their numerical values with typical values available from experimental data; in various cases, approximate simplified variants of the bounds turn out to be fairly tight, i.e., comparable to experimental values.
2309.05102
Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? An analysis using stochastic processes
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
Sören Christensen, Jan Kallsen
2023-09-10T18:12:52
http://arxiv.org/abs/2309.05102v3
# Is Learning in Biological Neural Networks based on Stochastic Gradient Descent? ###### Abstract In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs. **Keywords:** Biological neural networks, Schmidt-Hieber model, stochastic gradient descent, supervised learning ## 1 Introduction In order to understand how biological neural networks (BNNs) work, it seems natural to compare them with artificial neural networks (ANNs). Although the definition of the latter is inspired by the former, they also differ in several aspects. One of them is the way the network parameters are updated. In simple terms, an ANN learns from data by adjusting the weights of the connections between nodes in order to minimize a loss function that measures the difference between the desired output and the actual output of the network. More specifically, the optimization step is performed using the Stochastic Gradient Descent (SGD) algorithm, which iteratively updates the weights of the network by moving them in the direction of the steepest descent of the empirical loss function of a single training sample. The gradient itself is computed with the so-called backpropagation algorithm. In particular, the update of any parameter is based on the states of all other parameters. Such a mechanism does not seem to be biologically plausible for BNNs, as many authors have pointed out. Parameter update in BNNs occurs only locally, and distant neurons are only indirectly connected through the endogenous reward system. This observation is closely related to the weight transportation problem [6; 2; 4]. We refer to [12; 11] for a detailed discussion about the role of SGD in BNN, which the author of [10; Section 5] summarizes as follows: "[T]here are various theories that are centered around the idea that the learning in BNNs should be linked to gradient descent. All of these approaches, however, contain still biological implausibilities and lack a theoretical analysis." The starting point for the present paper is the recent article [10] just cited. In this seminal study, the author proposes a very persuasive stochastic model for brain-supervised learning which has a thorough biological foundation in terms of spike-timing-dependent plasticity. We review and discuss this setup in Section 2. In this model the local updating rule of the connection parameters in BNNs turns out to be a zero-order optimization procedure. More precisely, it is shown in [10] that the expected value of the iterates coincides with a modified gradient descent. However, this holds only on average. The noise for such zero-order methods is so high that one can hardly imagine effective learning based on it, see [3, 8, 1]. The author himself writes in [10, Section 4]: "It remains to reconcile the observed efficiency of learning in biological neural networks with the slow convergence of zero-order methods." In this paper we make an attempt to achieve this reconciliation. 
To this end, we consider in Section 3 a slight modification of the model of [10]. More specifically, we relax the assumption that for each learning opportunity and each connection, exactly one spike is released. Instead, we assume that a large number of spikes is released for each training sample in the BNN model and thus many parameter updates are made. It turns out that with this modification, the updates correspond approximately to a continuous descent step along the gradient flow, see Theorem 1. This can be interpreted in the sense that it is not biologically implausible that BNNs use a kind of SGD algorithm after all, but without explicitly computing the gradient. ## 2 The Schmidt-Hieber model for BNNs revisited We begin this section by reviewing the model introduced in [10]. It considers a classical instance of supervised learning: input-output pairs \((\mathbf{X}_{1},Y_{1}),(\mathbf{X}_{2},Y_{2}),\ldots\) are given as observations, all being identically distributed. The goal is to predict the output \(Y\) for each new input \(\mathbf{X}\) based on previous training data. This setting includes, for example, classification (when the set of possible outcomes of \(Y\) is finite) or regression problems. A (feedforward) biological neural network (BNN) is modeled in [10] as a directed acyclic graph with input neurons receiving information from the observations \(\mathbf{X}_{k}\) and generating a predicted response \(\widehat{Y}_{k}\) as output. The nodes represent the neurons in the network and an edge \(\nu=(i,j)\) between two nodes \(i\) and \(j\) indicates that neuron \(i\) is presynaptic for neuron \(j\). Each element \((i,j)\) in the edge set \(\mathcal{T}\) has a weight \(w_{ij}\) which indicates the strength of the connection between \(i\) and \(j\). While the structure of the graph does not change, the weights \(w_{ij}\) are adjusted in each learning step. Spike-timing-dependent plasticity (STDP) is chosen as the biological mechanism to update the parameters. It is considered as a form of Hebbian learning [5], which states that neurons that fire together wire together. More precisely, the synaptic weight \(w_{ij}\) changes depending on the timing of the spikes from neuron \(i\) to neuron \(j\). The weight decreases when neuron \(i\) spikes before neuron \(j\), and increases when neuron \(j\) spikes before neuron \(i\). The closer the spikes are in time, the larger the change in weight. It is important to note that the spike times are modeled as random variables. After some standardization (see [10, Equation (4.3)]) the update of the parameter for edge \((i,j)\) becomes \[w_{ij}\gets w_{ij}+w_{ij}C(e^{-U_{ij}}-e^{U_{ij}}),\] where \(U_{ij}\) are uniformly distributed random variables on some interval \([-A,A]\), i.e. \(U_{ij}\sim\mathcal{U}(-A,A)\), modeling the random spike times. The constant \(C\) represents the effect of a reward system, for example by neurotransmitters such as dopamine. It plays a key role for any meaningful learning process. In the present setup, the reward is tied to the success of predicting \(Y_{k}\), more specifically, whether the task is solved better or worse than in earlier trials. 
Using the standardization \(\theta_{ij}:=\log w_{ij}\) and a Taylor approximation, the following structure is derived in [10, Equation (4.5)] for the update of \(\theta_{ij}\) at step \(\ell\): \[\theta_{ij}^{(\ell)}=\theta_{ij}^{(\ell-1)}+\alpha^{(\ell-1)}\Big{(}L^{(\ell-1 )}(\boldsymbol{\theta}^{(\ell-1)}+\mathbf{U}^{(\ell)})-L^{(\ell-2)}( \boldsymbol{\theta}^{(\ell-2)}+\mathbf{U}^{(\ell-1)})\Big{)}\Big{(}e^{-U_{ij}^ {(\ell)}}-e^{U_{ij}^{(\ell)}}\Big{)}, \tag{2.1}\] where \(\alpha^{(\ell)}>0\) is a learning rate, \(\mathbf{\theta}^{(\ell)}=\left(\theta^{(\ell)}_{ij}\right)_{(i,j)\in T}\) denotes the vector of parameters for all edges, \(\mathbf{U}^{(\ell)}\) the vector of the independent uniformly distributed random variables \(U^{(\ell)}_{ij}\) for all edges \((i,j)\), and \(L^{(\ell)}\) the loss function associated with the respective step \(\ell\). Thus, the update of the weights of the individual edges is in fact affected by the state of the entire network, but only through the value of the common loss function, which provides an assessment of the learning success. In particular, no gradient appears in the mechanism. In [10] the author considers the case where for each input-output pair \((\mathbf{X}_{k},Y_{k})\) the parameters are updated only once. To this end, the loss of the current input-output pair is compared with the previous one. More specifically, we have \(\ell=k\) as well as \(L^{(k)}(\mathbf{\theta})=L(\mathbf{\theta},\mathbf{X}_{k},Y_{k})\), \(k=1,2,\dots\), leading to the update rule \[\theta^{(k)}_{ij}=\theta^{(k-1)}_{ij}+\alpha^{(k-1)}\Big{(}L( \mathbf{\theta}^{(k-1)}+\mathbf{U}^{(k)},\mathbf{X}_{k-1},Y_{k-1})-L(\mathbf{\theta}^ {(k-2)}+\mathbf{U}^{(k-1)},\mathbf{X}_{k-2},Y_{k-2})\Big{)}\Big{(}e^{-U^{(k)}_ {ij}}-e^{U^{(k)}_{ij}}\Big{)}. \tag{2.2}\] As a main result, the author shows in [10, Theorem 1] that this procedure corresponds on average to a gradient descent method, with a gradient evaluated not exactly at \(\mathbf{\theta}^{(k-1)}\) but slightly perturbed randomly. However, as noted in [10], sufficiently fast convergence cannot be expected for such a zero-order method. ## 3 Multiple updates per learning opportunity The key ingredient leading to (2.2) is that, for each learning opportunity and each connection, exactly one spike is triggered and thus only one update of the parameters is made. The author himself calls this assumption "strong", see [10, Section 4]. Given the average spike frequency in real biological systems and the strong brain activity even at immobile rest [7], it seems more reasonable to assume instead a large number of spikes per learning opportunity. This corresponds to a series \(\mathbf{\theta}^{(k,0)},\dots,\mathbf{\theta}^{(k,n)}\) of updates to the parameters after observing any input-output pair \((\mathbf{X}_{k-1},Y_{k-1})\). The assessment of the update steps is based on the loss function associated with the most recent observation \((\mathbf{X}_{k-1},Y_{k-1})\). More specifically, equation (2.1) turns into \[\theta^{(k,\ell)}_{ij} =\theta^{(k,\ell-1)}_{ij}+\alpha^{(k-1,\ell-1)}\Big{(}L(\mathbf{ \theta}^{(k,\ell-1)}+\mathbf{U}^{(k,\ell)},\mathbf{X}_{k-1},Y_{k-1})-L(\mathbf{ \theta}^{(k,\ell-2)}+\mathbf{U}^{(k,\ell-1)},\mathbf{X}_{k-1},Y_{k-1})\Big{)}\] \[\quad\times\left(e^{-U^{(k,\ell)}_{ij}}-e^{U^{(k,\ell)}_{ij}}\right) \tag{3.1}\] for \(\ell\geq 1\) with initial values given by \(\mathbf{\theta}^{(k,0)}:=\mathbf{\theta}^{(k,-1)}:=\mathbf{\theta}^{(k-1,n)}\). 
Since \(n\) is considered to be large, the individual update steps should be small in order to avoid overfitting. We start by analyzing this update rule for a fixed observation \((\mathbf{X}_{k-1},Y_{k-1})\), i.e. for a fixed \(k\). For ease of notation we suppress the dependence on \(k\) in the following considerations. Since our goal is to study the limiting behavior for a large number of update steps \(n\), we instead make the dependence on \(n\) explicit. So we rewrite (3.1) as \[{}^{n}\theta^{(\ell)}_{ij}={}^{n}\theta^{(\ell-1)}_{ij}+{}^{n}\alpha^{(\ell-1) }\Big{(}L^{(n}\mathbf{\theta}^{(\ell-1)}+{}^{n}\mathbf{U}^{(\ell)})-L^{(n}\mathbf{ \theta}^{(\ell-2)}+{}^{n}\mathbf{U}^{(\ell-1)})\Big{)}\Big{(}e^{-{}^{n}U^{( \ell)}_{ij}}-e^{{}^{n}U^{(\ell)}_{ij}}\Big{)}. \tag{3.2}\] The parameter update depends on increments of a loss function which resembles a gradient on first glance. Note, however, that this increment term is the same for all edges and, moreover, randomness occurs in both the loss function and in the external factor. We now consider the following dependencies on \(n\): \[{}^{n}\alpha^{(\ell-1)}:=\alpha,\quad A_{n}:=n^{-1/3}A,\quad{}^{n}U^{(\ell)}_ {ij}\sim\mathcal{U}(-A_{n},A_{n}), \tag{3.3}\] with constants \(\alpha,A>0\). Moreover, we rescale and extend the discrete-time process \((^{n}\theta^{(\ell-1)}_{ij})_{\ell=-1,\ldots,n}\) in time, defining a continuous-time process \(\mathbf{Z}^{n}=(\mathbf{Z}^{n}_{t})_{t\in[-1/n,1]}\) by \[\mathbf{Z}^{n}_{t_{\ell}^{n}}=\mathbf{Z}^{n}_{t_{\ell-1}}+\alpha_{n}\Big{(}L( \mathbf{Z}^{n}_{t_{\ell}^{n}}+^{n}\mathbf{U}^{(\ell)})-L(\mathbf{Z}^{n}_{t_{ \ell-2}^{n}}+^{n}\mathbf{U}^{(\ell-1)})\Big{)}\Big{(}e^{-u^{(\ell)}_{ij}}-e^{u ^{(\ell)}_{ij}}\Big{)} \tag{3.4}\] for \(t_{\ell}^{n}:=\ell/n\) and by \(\mathbf{Z}^{n}_{t}:=\mathbf{Z}^{n}_{\lfloor tn/n\rfloor}\) for arbitrary \(t\in[0,1]\). As a candidate limit for large \(n\) we consider a standard rescaled gradient process \(\mathbf{Z}=(\mathbf{Z}_{t})_{t\in[0,1]}\), which is defined as the solution to the deterministic ordinary differential equation (ODE) \[\frac{d\mathbf{Z}_{t}}{dt}=-4\alpha\nabla L(\mathbf{Z}_{t}),\quad\mathbf{Z}_ {0}=\mathbf{\theta}^{(k,0)}, \tag{3.5}\] see e.g. [9]. In order for our main theorem to hold, we assume that \[\nabla L\] is bounded and Lipschitz continuous with Lipschitz constant \[\lambda\] . (3.6) \(\mathbf{Z}\) naturally emerges as a limit if you run the ordinary gradient descent algorithm for minimizing the function \(L\) with many small steps. The main result of this paper states that the rescaled STDP process \(\mathbf{Z}^{n}\) converges to \(\mathbf{Z}\) as well. More precisely, we have **Theorem 1**.: _Assume (3.3) and (3.6). Then, for each fixed training sample \(k\), the rescaled process \(\mathbf{Z}^{n}\) of the BNN weights converges to the rescaled gradient process \(\mathbf{Z}\) uniformly in \(L^{2}\), i.e._ \[\lim_{n\to\infty}\mathbb{E}\left(\sup_{t\in[0,1]}\|Z^{n}_{t}-Z_{t}\|^{2} \right)=0.\] The previous theorem shows that learning in BNNs based on the local principle of STDP may indeed lead to optimization of parameters according to SGD if the number of spikes per learning opportunity is high. To wit, we start with an initial parameter vector \(\mathbf{\theta}^{(0)}\) and loss function \(L=L(\cdot,\mathbf{X}_{1},Y_{1})\). According to Theorem 1 the STDP nearly performs a continuous gradient step as in (3.5), leading to an updated parameter vector \(\mathbf{\theta}^{(1)}\). 
Switching now to the loss function \(L=L(\cdot,\mathbf{X}_{2},Y_{2})\), the next approximate gradient step leads to an updated vector \(\mathbf{\theta}^{(2)}\) etc. This procedure only differs from the classical SGD in that we make many small instead of one large gradient step per learning opportunity. Interestingly, neither the gradient nor even the functional dependence of the loss function on the parameters need to be known explicitly for this purpose. By contrast, it relies crucially on the randomness in the update, which may seem counterintuitive because the desired gradient ODE (3.5) is deterministic. Proof of Theorem 1.: Following the argument in [10, Proof of Theorem 1], we may decompose the dynamics of \(\mathbf{Z}^{n}\) in coordinate \(\nu=(i,j)\) as \[Z^{n}_{t_{\ell}^{n},\nu}=Z^{n}_{t_{\ell-1},\nu}+c^{n}_{\nu}\Big{(}\mathbf{Z}^{ n}_{t_{\ell-1}}\Big{)}+D^{n}_{\ell,\nu}, \tag{3.7}\] where \[b^{n}_{\nu}(z) :=-n\alpha e^{-A_{\nu}}C(A_{n})\partial_{\nu}L(z),\] \[c^{n}_{\nu}(z) :=\frac{1}{n}\mathbb{E}b^{n}_{\nu}(z+^{n}\mathbf{V}^{\nu}),\] and \[D^{n}_{\ell,\nu} :=\alpha\Big{(}L\big{(}\mathbf{Z}^{n}_{t_{\ell}^{n}}+^{n}\mathbf{ U}^{(\ell)}\big{)}-L\big{(}\mathbf{Z}^{n}_{t_{\ell-1}^{n}}+^{n}\mathbf{U}^{( \ell-1)}\big{)}\Big{)}\Big{(}e^{-u^{(\ell)}_{\nu}}-e^{u^{(\ell)}_{\nu}}\Big{)} -c^{n}_{\nu}\Big{(}\mathbf{Z}^{n}_{t_{\ell-1}^{n}}\Big{)}\] is a martingale difference process with respect to the filtration \((\mathcal{F}_{\ell})_{\ell=0,\ldots,n}\) that is generated by all randomness up to step \(\ell\). Moreover, the random vector \({}^{n}\mathbf{V}^{n}\) has independent components where all but the \(\nu\)th are uniformly distributed on \([-A_{n},A_{n}]\) and the \(\nu\)th has density \[f_{A_{n}}(x):=C(A_{n})^{-1}(e^{A_{n}}-e^{x})(e^{A_{n}}-e^{-x})\] on \([-A_{n},A_{n}]\) with normalizing constant \[C(A_{n}):=\int_{-A_{n}}^{A_{n}}(e^{A_{n}}-e^{x})(e^{A_{n}}-e^{-x})dx=2A_{n}(e^{ 2A_{n}}+1)+2-2e^{2A_{n}}.\] We obtain \[\mathbf{Z}_{t}^{n}-\mathbf{Z}_{t}=\int_{0}^{\lfloor tn\rfloor/n}\Big{(}b_{ \nu}(\mathbf{Z}_{s}^{n})-b_{\nu}(\mathbf{Z}_{s})\Big{)}ds+\mathbf{C}_{\lfloor tn \rfloor/n}^{n}+\mathbf{M}_{\lfloor tn\rfloor/n}^{n}+\delta_{t}^{n},\] for all \(t\in[0,1]\) where \[b_{\nu}(z) :=-4\alpha\partial_{t}L(z),\] \[\mathbf{C}_{\lfloor tn\rfloor/n}^{n} :=\sum_{\ell:t_{\ell}^{n}\leq t}\Big{(}\mathbf{c}^{n}\Big{(} \mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-\frac{1}{n}b_{\nu}\Big{(}\mathbf{Z}_{t _{\ell-1}^{n}}^{n}\Big{)}\Big{)}\] \[=\sum_{\ell:t_{\ell}^{n}\leq t}\Big{(}\mathbf{c}^{n}\Big{(} \mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-\frac{1}{n}b_{\nu}^{n}\Big{(}\mathbf{Z} _{t_{\ell-1}^{n}}^{n}\Big{)}\Big{)}+\frac{1}{n}\sum_{\ell:t_{\ell}^{n}\leq t} \Big{(}b_{\nu}^{n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-b_{\nu}\Big{(} \mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}\Big{)},\] \[\mathbf{M}_{\lfloor tn\rfloor/n}^{n} :=\sum_{\ell:t_{\ell}^{n}\leq t}\mathbf{D}_{t_{\ell}^{n}}^{n},\] \[\delta_{t}^{n} :=\mathbf{Z}_{\lfloor tn\rfloor/n}-\mathbf{Z}_{t}.\] Set \[\varepsilon_{n}(t):=\sqrt{\mathbb{E}\sup_{s\leq t}\|\mathbf{Z}_{s}^{n}- \mathbf{Z}_{s}\|^{2}}.\] Using (3.6) and the triangle inequality, we conclude that \[\varepsilon_{n}(t)\leq\gamma_{n}+4\alpha\lambda\int_{0}^{t}\varepsilon_{n}(s)ds\] with \[\gamma_{n}=\sqrt{\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{C}_{t_{n}^{n} }^{n}\right\|^{2}}+\sqrt{\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{M}_{t_ {n}^{(\ell)}}^{n}\right\|^{2}}+\sup_{t\in[0,1]}\|\delta_{t}^{n}\|.\] By Gronwall's inequality we obtain \[\varepsilon_{n}(t)\leq\gamma_{n}\exp(\lambda).\] It is 
therefore sufficient to prove that \(\gamma_{n}\to 0\). Note that \[\left\|b_{\nu}^{n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-b_{\nu}\Big{(} \mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}\right\|\leq\alpha\Big{|}e^{-A_{n}}nC(A _{n})-4\Big{|}\sup_{z}\|\partial_{\nu}L(z)\|\] and \[\left\|c^{n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}-\frac{1}{n}b_{\nu}^ {n}\Big{(}\mathbf{Z}_{t_{\ell-1}^{n}}^{n}\Big{)}\right\|\leq\frac{1}{n}\alpha e ^{-A_{n}}nC(A_{n})\lambda A_{n}\] because the random vectors \({}^{n}\mathbf{V}^{n}\) are all concentrated on \([-A_{n},A_{n}]\). Since \(e^{-A_{n}}nC(A_{n})\to 4\) and \(A_{n}\to 0\) as \(n\to\infty\), we obtain that \(\mathbb{E}\sup_{\ell=0,\ldots,n}\left\|\mathbf{C}_{t_{n}^{(\ell)}}^{n}\right\| ^{2}\to 0\) as desired. Moreover, we have that \(\sup_{t\in[0,1]}\|\delta_{t}^{n}\|\to 0\) because \(\mathbf{Z}\) is uniformly continuous as a continuous function on a compact interval. So it only remains to be verified that \(\mathbb{E}\sup_{\ell=0,\dots,n}\|\mathbf{M}_{t_{\ell}^{n}}^{n}\|^{2}\to 0\). Doob's inequality yields \[\mathbb{E}\sup_{\ell=0,\dots,n}\left\|\mathbf{M}_{t_{\ell}^{n}}^{n}\right\|^{2} \leq 4\mathbb{E}\left(\sum_{\nu}\sum_{\ell=1}^{n}\left(D_{\ell,\nu}^{n} \right)^{2}\right).\] Since the gradient of \(L\) is bounded and the components of the \({}^{n}\mathbf{U}^{(\ell)}\) are concentrated on the interval \([-A_{n},A_{n}]=[-n^{1/3},n^{1/3}]\), we have that \[\left\|\alpha\left(L\left(\mathbf{Z}_{t_{\ell}^{n}}^{n}+{}^{n}\mathbf{U}^{( \ell)}\right)-L\left(\mathbf{Z}_{t_{\ell-1}^{n}}^{n}+{}^{n}\mathbf{U}^{(\ell- 1)}\right)\right)\left(e^{-\nu\mathbf{U}^{(\ell)}}-e^{\nu\mathbf{U}^{(\ell)}} \right)\right\|\leq an^{-2/3}+b\left\|\mathbf{Z}_{t_{\ell}^{n}}^{n}-\mathbf{Z}_ {t_{\ell-1}^{n}}^{n}\right\| \tag{3.8}\] for some constants \(a,b\in\mathbb{R}_{+}\). By Lemma 2 below and \(\mathbf{Z}_{t_{0}^{n}}^{n}-\mathbf{Z}_{t_{\ell-1}^{n}}^{n}=0\) this implies that (3.8) is bounded by a multiple of \(n^{-2/3}\) and hence its square by \(n^{-4/3}\). Moreover, \(\|c_{\nu}^{n}(\mathbf{Z}_{t_{\ell-1}^{n}}^{n})\|\) is bounded by a multiple of \(1/n\). Together, this yields that \(\sum_{\ell=1}^{n}(D_{\ell,\nu}^{n})^{2}\) is bounded by a multiple of \(n^{-1/3}\), which yields the desired convergence. **Lemma 2**.: \(x_{n}\leq a+bx_{n-1}\)_, \(n=1,2,\dots\) for \(a,b,x_{n}\in\mathbb{R}_{+}\) implies_ \[x_{n}\leq x_{0}b^{n}+a\frac{1-b^{n}}{1-b}.\] Proof.: This follows by induction on \(n\).
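As a purely illustrative companion to Theorem 1, the toy experiment below (our own construction; the quadratic loss, step size and spike-width values are arbitrary assumptions and do not follow the scaling (3.3)) runs many STDP-style updates of the form (3.1) on a single fixed learning opportunity. The loss decreases even though no gradient is ever computed: each parameter only sees a global loss increment times its own local random factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy smooth loss standing in for L(theta, X, Y)
    return 0.5 * float(np.sum(theta ** 2))

def stdp_like_updates(theta0, n_updates, alpha=0.1, spike_width=0.3):
    """Gradient-free updates in the spirit of (3.1): each parameter changes by
    a global loss increment (evaluated at randomly perturbed parameters)
    times its own local random factor exp(-U_ij) - exp(U_ij)."""
    theta_prev2 = theta0.copy()
    theta_prev1 = theta0.copy()
    U_prev = rng.uniform(-spike_width, spike_width, size=theta0.shape)
    for _ in range(n_updates):
        U = rng.uniform(-spike_width, spike_width, size=theta0.shape)
        reward = loss(theta_prev1 + U) - loss(theta_prev2 + U_prev)
        theta_new = theta_prev1 + alpha * reward * (np.exp(-U) - np.exp(U))
        theta_prev2, theta_prev1, U_prev = theta_prev1, theta_new, U
    return theta_prev1

theta0 = np.array([1.0, -2.0, 0.5])
print("initial loss:", loss(theta0))
for n in [10, 100, 1000]:
    print(f"loss after {n:4d} local updates:",
          round(loss(stdp_like_updates(theta0, n)), 4))
```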
In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and that a stochastic gradient-descent type optimization method therefore cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs.
2304.00086
Machine Learning for Economics Research: When What and How?
This article provides a curated review of selected papers published in prominent economics journals that use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly preferred, and (3) how they are used for economic applications. The review highlights that ML is particularly used to process nontraditional and unstructured data, capture strong nonlinearity, and improve prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggests that ML is becoming an essential addition to the econometrician's toolbox.
Ajit Desai
2023-03-31T19:21:56
http://arxiv.org/abs/2304.00086v2
# Machine Learning for Economics Research: ###### Abstract This article provides a curated review of selected papers published in prominent economics journals that use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly preferred, and (3) how they are used for economic applications. The review highlights that ML is particularly used to process nontraditional and unstructured data, capture strong nonlinearity, and improve prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggests that ML is becoming an essential addition to the econometrician's toolbox. Economics Econometrics Machine learning _JEL Codes:_ A10, B23, C45, C55 ## 1 Introduction The economy is becoming increasingly digital, and as a result, the size and complexity of economic data are growing rapidly. This presents both opportunities and challenges for analysts who want to process and interpret this data to gain insights into economic phenomena. Machine learning (ML) which has emerged as a powerful tool for analyzing large and complex datasets across disciplines, has the potential to mitigate some of the challenges posed by the digitization of the economy for economics research and analysis. The ability of ML models to effectively process large volumes of diverse data could allow us to build more complex models. As a result, the use of ML has expanded in economics research. The number of academic publications that use ML tools has increased significantly in the last few years, as depicted in Figure 1, which shows the number of articles published in ten leading economics journals that use ML. This trend is expected to continue as researchers explore new ways to apply ML techniques to a wide range of economic problems. Nevertheless, the suitability and applicability of these tools is not widely understood among economists and data scientists. To bridge this gap, in this article, we provide a curated review of selected papers published in prominent economics journals that employ ML tools. The objective of this review is to assist economists interested in leveraging ML tools for their research and analysis, as well as data scientists seeking to apply their skills to economic applications.2 Footnote 2: The article is motivated from the American Economic Association’s (AEA) continuing education session on Machine Learning and Big Data at the 2023 ASSA annual meeting [1]. The article aims to showcase the potential benefits of utilizing ML in economics research and policy analysis, while also offering suggestions on where, what and how to effectively apply ML models. It should be noted that the article takes a suggestive approach, rather than an explanatory one, with a particular focus on supervised learning methods commonly used for prediction problems such as regression and classification. The article is organized into three main sections, each focusing on one key question: 1) When ML is used in economics? 2) What ML models are commonly preferred?, and 3) How ML is used for economic applications? Finally, we briefly discuss the limitations of machine learning in its current state. 
The key lessons of the review are summarized as follows: First, ML models are used to process nontraditional and unstructured data such as text and images, to capture strong nonlinearity that is difficult to capture using traditional econometric models, and to improve prediction accuracy, extract new information, or automate feature extraction when dealing with large but traditional datasets. ML tools are probably not useful for cases with small data complexity, and the traditional econometric models will likely to suffice. Second, the choice of ML model depends on the type of application and underlying data characteristics. For example, when conducting textual analysis, the latent Dirichlet allocation (LDA), which is an ML algorithm for probabilistic topic modelling, is commonly preferred. However, deep learning models such as Transformers are also employed for handling large text or audio data. Convolutional neural network models such as ConvNext are preferred for image or video datasets. Ensemble learning models are commonly employed for traditional datasets, and causal ML models are utilized when analysis focuses on causal inference. Third, the usefulness of ML models can be greatly improved by tailoring them to the specific application, especially when dealing with large and complex but traditional data. This approach is more effective than using off-the-shelf tools. Moreover, pre-trained models with transfer learning can be advantageous when dealing with nontraditional but limited data and deep learning.3 Footnote 3: See Appendix A for more details on LDA model, Appendix B for Transformer model, Appendix C for ConvNext model, Appendix D for ensemble learning models, and Appendix E for transfer learning. This review highlights the potential benefits of using new tools provided by ML in various areas of economics research, but also acknowledges the challenges that needed to overcome to effectively use it for economic analysis and research. For instance, ML models require large amounts of data and ample computational resources, which can be a limitation for some researchers because it can be difficult to obtain high-quality data, and data may be incomplete or biased. Additionally, ML models are prone to overfitting and can be challenging to interpret, which limits their utility. Moreover, most ML models do not have standard errors, and other statistical properties have not yet been well-defined, which can make it difficult to draw conclusions from the results. Therefore, caution is recommended when using ML models. Despite these limitations, our review suggests, ML is successfully employed alongside traditional econometrics tools to advance our understanding of economic systems. By combining the strengths of both fields, researchers can improve the accuracy and reliability of economic analyses to better inform policy decisions. Figure 1: The number of publications over five years (between 2018-2022) in the leading economics journals that use ML. The data includes articles from the following ten journals: American Economic Review (AER), Econometrica, Journal of Economic Perspectives (JEP), Journal of Monetary Economics (JME), Journal of Political Economy (JPE), Journal of Econometrics (JoE), Quarterly Journal of Economics (QJE), Review of Economic Studies (RES), American Economic Journal (AJE): Macroeconomics and Microeconomics. 
The relevant papers are identified using the following search terms: Machine learning, Ensemble learning, Deep learning, Statistical learning, Reinforcement learning, and Natural language processing.

## 2 When ML is used in economics?

The literature suggests the following three cases where ML models could add value to economic research and analysis:

* To process non-traditional data, such as images, texts, audio, and videos.
* To capture nonlinearity which is difficult to capture using traditional models.
* To process traditional data at scale to improve prediction accuracy, extract new information, or automate feature extraction.

For instance, article [2] suggests that big data, due to its sheer size and complexity, may require more powerful manipulation tools, which ML can offer. Also, we may have more potential predictors (or features) than appropriate for estimation in some cases, so we need to make some variable selections, where ML can help. Lastly, large datasets allow for more flexible relationships than simple linear models can capture. ML techniques are handy in those cases due to their ability to model intricate and nonlinear relationships, potentially offering new insights. Similarly, article [3] argues that ML not only provides new tools but also solves a different problem. The authors assert that ML's success is largely due to its ability to discover complex structure that was not specified in advance. They suggest that applying ML to economics requires finding relevant tasks, for instance, where the focus is on increasing prediction accuracy or uncovering generalizable patterns from complex datasets. Also, article [4] points out that the methods developed in ML have been particularly successful in big data settings, where we observe information on a large number of units, many pieces of information on each unit, or both. The authors suggest that to use ML tools for economics research and analysis, researchers should clearly articulate their goals and why certain properties of ML algorithms may or may not be important.

### ML is used for processing non-traditional data

Non-traditional datasets, such as images, text, and audio, can be difficult to process using traditional econometric models. In such cases, ML models can be used to extract valuable information that can be incorporated into traditional models to address economic questions. For instance, in article [5], published in QJE, the authors assess how transparency, a key feature of central bank design, affects monetary policy makers' deliberations, using an ML algorithm for probabilistic topic modelling. Similarly, in article [6], published in JME, the authors use a large news corpus and ML algorithms to investigate the role played by the media in the expectations formation process of households, and article [7], published in JoE, uses Twitter data and an ML model to measure inflation expectations. Likewise, in article [8], published in AER, the authors use satellite data to measure GDP growth at the sub- and supranational regions; in [9], also published in AER, the authors employ a computer vision algorithm that measures the perceived safety of streetscapes and how strongly it is correlated with population density and household income; and in [10], published in AER, the authors use deep learning to detect emotions embedded in press conferences after the Federal Open Market Committee meeting and examine the influence of the detected emotions on financial markets.
### ML is used for capturing strong nonlinearity

ML could be useful if the data and application contain strong nonlinearity, which is hard to capture using traditional approaches. For instance, in article [11], published in QJE, the authors evaluate whether ML models can help improve judges' decisions on bail or no bail. Although the outcome is binary, this is a highly complex problem that demands processing complex data to make prudent decisions. Similarly, in article [12], published in JME, the authors use ML to solve dynamic economic models by casting them into nonlinear regression equations. Here ML is used to deal with multicollinearity and to perform the model reduction. Likewise, article [13], published in QJE, uses ML to test how effective physicians are at diagnosing heart attacks. Using a large and complex dataset available at the time of the physician's decision, they estimate a model to predict the outcome of testing to uncover potential sources of error in decisions.

### ML is used for processing traditional data at scale to improve prediction accuracy or extract new information

ML could be useful for processing large and complex but traditional data sets with many variables. In such cases, ML models can help to 1. improve prediction accuracy, 2. extract new information or 3. automate feature extraction. For instance, in article [14], published in AER, the authors combine a data-rich environment with an ML model to provide new estimates of time-varying expectational errors embedded in survey responses and show that ML can be productively deployed to correct errors in human judgment and improve predictive accuracy. Similarly, in article [15], published in JPE, the authors measure CEO behaviour types using high-frequency, high-dimensional diary data and an ML algorithm. In [16], published in JoE, the author uses ML for fraud detection in insurance claims using unstructured data comprising inputs of varying lengths and variables with many categories. They argue that ML alleviates these challenges, which are otherwise hard for traditional methods. Likewise, in [17], published in RES, the authors suggest that an ML-based, data-driven decision rule for consumer lending could be more accurate than examiner-based decisions.

ML is probably not useful for cases where data complexity (which could be related to shape, size, collinearity, nonlinearity, etc.) is small, and traditional econometric models would likely suffice. However, as data complexity increases, i.e., when dealing with big data, the value added by ML models could be higher after a certain threshold, as shown by the dotted line in Figure 2.

## 3 What ML models are commonly preferred?

With a variety of ML tools available, different models are better suited for different types of applications and data characteristics. This section will discuss which models are most effective for a given type of economic application.

### Deep learning models are used when dealing with nontraditional data

Natural language processing (or NLP) primarily relies upon processing textual data and has many applications in economics. For instance, it could be used for topic modelling or sentiment analysis. The model of choice for topic modelling to quantify text is LDA, proposed in [19]. It is an ML algorithm for probabilistic topic modelling that decomposes documents in terms of the fraction of time spent covering a variety of topics [5, 6].
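As a minimal illustration of LDA-style topic modelling (using scikit-learn rather than the specific implementations in the cited papers; the four-document corpus is invented for the example):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for central-bank transcripts or news articles
docs = [
    "inflation expectations and interest rate policy",
    "interest rate hike to curb inflation",
    "household spending and consumer credit growth",
    "credit conditions for household borrowing",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each document is decomposed into a mixture over the two topics
print(lda.transform(X).round(2))

# Top words characterizing each topic
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```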
An review of the use of text as data and ML methods for various economics applications is presented in [20]. The use of deep learning models for NLP is evolving rapidly, and various large language models (LLMs) could be used to process text data; However, _transformer_ models [21] are proven to be more useful to efficiently extract useful information from the textual data [1]. For instance, in [10], they use transformer to detect emotions embedded in press conferences after the Federal Open Market Committee meeting for sentiment analysis. Moreover, almost all large general-purpose LLMs, including GPT-3 and chatGPT, are trained using Generative Pre-trained Transformer [22]. Figure 2: Schematic diagram representing the relative merits of ML and traditional econometric methods. The plot is adapted from [1, 18]. An interesting application of computer vision models in economics is using a broad set of satellite images or remote sensing data for analysis. For instance, in the article [8], the authors use satellite data to measure GDP growth at the sub and supranational regions. Similarly, in [23], using satellite data and machine learning, they develop high-resolution predictors of household income, wealth, and poverty rates in five African countries. A review of the literature, presenting opportunities and challenges in using satellite data and ML in economics, is documented in [24]. It concludes that such models have the potential to perform economic analysis at spatial and temporal frequencies that are an order of magnitude higher than those that are commonly available. Various deep learning models can be employed to extract useful information from images, but the _ConvNext_ model [25] is evolving to be more successful in efficiently processing image data sets [1]. However, the transformers can also be effectively employed for image processing [26]. ### Ensemble learning models are used when dealing with traditional data Ensemble learning models could be useful if the data size is small but includes many features and if there is collinearity or nonlinearity, which is hard to model. For instance, the article [3] compares the performance of different ML models in predicting house prices and demonstrates that the nonparametric ML algorithms, such as random forests, can do significantly better than ordinary least squares, even at moderate sample sizes and with a limited number of covariates. Similarly, article [13] use ensemble learning models which combine gradient-boosted trees and LASSO to study physicians' decision-making to uncover the potential sources of errors. Also, in [27], they propose generalized random forests, a method for nonparametric statistical estimation based on random forests. This can be used for three statistical tasks: nonparametric quantile regression, conditional average partial effect estimation and heterogeneous treatment effect estimation via instrumental variables. Moreover, the ensemble learning models are popular in macroeconomic prediction, as demonstrated by many articles [28, 29, 30, 31, 32]). ### Causal ML models are used when the focus is on causal inference The causal ML could be helpful when the primary objective is to make causal inferences, but the dataset is big and complex. For instance, in [33], the authors develop a nonparametric causal forest for estimating heterogeneous treatment effects that extends the random forest, an ML algorithm. 
They demonstrate that any type of random forest, including classification and regression forests, can be used to provide valid statistical inferences. In experimenting with these models, they find causal forests to be substantially more powerful than classical methods--especially in the presence of irrelevant covariates. Similarly, the article [34] uses ML for estimating heterogeneity in causal effects in experimental and observational studies. Their approach is tailored for applications where there may be many attributes of a unit relative to the number of units observed and where the functional form of the relationship between treatment effects and the attributes is unknown. It enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size, and without sparsity assumptions. The applicability of these methods is demonstrated in [35] for predicting treatment heterogeneity in a summer jobs application.

## 4 How ML is used for economic applications

Pre-trained models and transfer learning are recommended, especially while using deep-learning models [1]. Deep learning-based approaches are state-of-the-art for processing non-traditional data; however, there are many difficulties in using them for economic applications. For instance, large data and ample computational resources are often necessary to train these models, both of which are scarce in many economic applications. Moreover, these models are notoriously convoluted, and many economics researchers would benefit from using these methods but lack the technical background to implement them from scratch ([36]). Therefore, transfer learning, i.e., large models pre-trained on a similar application (for instance, large language or computer vision models), could be adapted for similar economic applications. Off-the-shelf ensemble learning models are useful when dealing with panel data with strong collinearity or nonlinearity, but it is recommended to adapt these models to suit the task. For instance, in [27], they adapt the popular random forests algorithm, which then can be used for nonparametric quantile regression or estimating average treatment effects. Similarly, the authors of [16] propose an explainable attention network for fraud detection in claims management by adapting a standard neural network and demonstrate that the adapted model performs better than off-the-shelf alternatives. Likewise, in [37], the author proposes the macroeconomic random forest, an algorithm adapting the canonical ML tool. They show that the adapted model forecasts better than off-the-shelf ML algorithms and traditional econometric approaches, and it can be interpreted. There are many other examples where the model or the procedure to train the model is adapted to improve the performance. For instance, [34] adapt the standard cross-validation approach, which then enables the construction of valid confidence intervals for treatment effects, even with many covariates relative to the sample size. Similarly, a variation of cross-validation approaches is proposed in [30] to improve the prediction performance of the macroeconomic nowcasting model during economic crisis periods. A few other practical recommendations discussed in the AEA's course are: 1. Bigger, both in terms of data and model size, is better when using deep learning models for processing non-traditional data. 2. Due to its speed and community support, the _Python_ programming language is preferred over other languages for applied ML. 3.
When training large ML models, it is suggested to use _Unix_-based operating systems over Windows systems.

## 5 Other emerging applications

Other types of ML approaches, such as unsupervised learning and reinforcement learning, are yet to make a notable impact in economics research. However, there are some initial applications of these approaches in the literature. For instance, [38] uses an unsupervised dimension reduction model, the autoencoder neural network, for asset pricing models. Similarly, in [39], an autoencoder-based unsupervised model is used for anomaly detection in high-value payment systems. Also, [40] uses data on nearly 40 million Google keyword auctions and unsupervised machine learning algorithms to cluster keywords into thematic groups serving as relevant markets. Reinforcement learning (RL) models could be employed to model complex strategic decisions arising in many economic applications. For instance, in [41], the authors use RL to estimate optimal decision rules of banks interacting in high-value payment systems. Similarly, in [42], deep RL is used to solve dynamic stochastic general equilibrium models for adaptive learning at the interaction of monetary and fiscal policy, and in [43], the authors use RL to optimize monetary policy decisions. Likewise, in [44], the authors use an RL approach to learn dynamic tax policies.

## 6 Limitations

The key limitations of ML for economics research and analysis are outlined below:

* Large data sets and ample computational resources are often necessary to train ML models--especially the deep learning models.
* The ML models, owing to their flexibility, are easy to overfit, and their complexity makes them hard to interpret--which is crucial for many economic applications.
* Most ML models have no standard errors and asymptotic properties--which could be essential for many economic applications.
* ML models can be biased if the data used to train these models is of low quality and biased.

The literature is evolving to address these challenges; however, some are hard and could take longer to mitigate. For instance, we have limited data in many economic applications, which restricts the applicability of large ML models. This could be potentially mitigated in certain applications as the economy becomes more digital, allowing us to gather more data at a much higher frequency than traditional economics data sets. The interpretability or explainability of models is another major challenge in using ML in economics. Researchers are making progress toward overcoming these challenges. For instance, one approach recently developed to mitigate interpretability issues is using Shapley-value-based methodologies such as those developed in [45] and [46]. These methods have been applied to macroeconomic prediction models in [46, 30, 47, 32]. However, note that, although such methods are based on game theory, they do not provide any optimal statistical criterion, and asymptotics for such approaches are not available yet. To overcome that, for instance, in the recent papers [48, 49], the authors propose ML-based mixed data sampling and develop the asymptotics in the context of linear regularized regressions. However, much progress needs to be made to use such asymptotic analysis for popular nonlinear ML approaches.

## 7 Conclusions

This concise review highlights that ML is increasingly used for economics research and policy analysis, particularly for analyzing non-traditional data, capturing nonlinearity, and improving prediction accuracy.
Importantly, ML can complement traditional econometric tools by identifying complex relationships and patterns in data that can be incorporated into econometric models. As the digital economy and economic data continue to grow in complexity, ML remains a valuable tool for economic analysis. However, a few limitations need to be addressed to improve the utility of ML models, and the literature is progressing toward mitigating those challenges. Lastly, in Figure 3, we present the word clouds generated from the titles and abstracts of the articles in our dataset. These word clouds illustrate the frequency of certain terms, with larger font sizes indicating more frequent usage. For example, as shown in the word clouds, the terms "machine" and "learning" are prominently featured in both titles and abstracts, highlighting their relevance in those articles. This is followed by words such as "data," "effect," and "decision".
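For readers who want to reproduce a figure of this kind, the following is a small, illustrative sketch of generating a word cloud from article titles with the `wordcloud` package; the sample titles and all parameters are assumptions, and the original figure may have been produced differently.

```python
# Illustrative word-cloud sketch from a handful of made-up article titles.
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

titles = [
    "Machine learning for macroeconomic forecasting",
    "Predicting consumer default with gradient boosting",
    "Text as data: measuring policy uncertainty",
]
text = " ".join(titles).lower()

wc = WordCloud(width=800, height=400, background_color="white",
               stopwords=set(STOPWORDS)).generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.savefig("title_wordcloud.png", dpi=200, bbox_inches="tight")
```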
This article is a curated review of selected papers published in prominent economics journals which use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly used, and (3) how they are used for economic applications. The review highlights that ML is particularly used to process nontraditional and unstructured data, capture strong nonlinearity, and improve prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggests that ML is becoming an essential addition to the econometrician's toolbox.
2309.11177
Long-tail Augmented Graph Contrastive Learning for Recommendation
Graph Convolutional Networks (GCNs) have demonstrated promising results for recommender systems, as they can effectively leverage high-order relationships. However, these methods usually encounter the data sparsity issue in real-world scenarios. To address this issue, GCN-based recommendation methods employ contrastive learning to introduce self-supervised signals. Despite their effectiveness, these methods lack consideration of the significant degree disparity between head and tail nodes. This can lead to non-uniform representation distribution, which is a crucial factor for the performance of contrastive learning methods. To tackle the above issue, we propose a novel Long-tail Augmented Graph Contrastive Learning (LAGCL) method for recommendation. Specifically, we introduce a learnable long-tail augmentation approach to enhance tail nodes by supplementing predicted neighbor information, and generate contrastive views based on the resulting augmented graph. To make the data augmentation schema learnable, we design an auto drop module to generate pseudo-tail nodes from head nodes and a knowledge transfer module to reconstruct the head nodes from pseudo-tail nodes. Additionally, we employ generative adversarial networks to ensure that the distribution of the generated tail/head nodes matches that of the original tail/head nodes. Extensive experiments conducted on three benchmark datasets demonstrate the significant improvement in performance of our model over the state-of-the-art. Further analyses demonstrate the uniformity of learned representations and the superiority of LAGCL on long-tail performance. Code is publicly available at https://github.com/im0qianqian/LAGCL
Qian Zhao, Zhengwei Wu, Zhiqiang Zhang, Jun Zhou
2023-09-20T09:57:20
http://arxiv.org/abs/2309.11177v1
# Long-tail Augmented Graph Contrastive Learning for Recommendation ###### Abstract Graph Convolutional Networks (GCNs) has demonstrated promising results for recommender systems, as they can effectively leverage high-order relationship. However, these methods usually encounter data sparsity issue in real-world scenarios. To address this issue, GCN-based recommendation methods employ contrastive learning to introduce self-supervised signals. Despite their effectiveness, these methods lack consideration of the significant degree disparity between head and tail nodes. This can lead to non-uniform representation distribution, which is a crucial factor for the performance of contrastive learning methods. To tackle the above issue, we propose a novel **L**ong-tail **A**ugmented **G**raph **C**ontrastive **L**earning (LAGCL) method for recommendation. Specifically, we introduce a learnable long-tail augmentation approach to enhance tail nodes by supplementing predicted neighbor information, and generate contrastive views based on the resulting augmented graph. To make the data augmentation schema learnable, we design an auto drop module to generate pseudo-tail nodes from head nodes and a knowledge transfer module to reconstruct the head nodes from pseudo-tail nodes. Additionally, we employ generative adversarial networks to ensure that the distribution of the generated tail/head nodes matches that of the original tail/head nodes. Extensive experiments conducted on three benchmark datasets demonstrate the significant improvement in performance of our model over the state-of-the-arts. Further analyses demonstrate the uniformity of learned representations and the superiority of LAGCL on long-tail performance. Keywords:Recommender system Graph neural networks Contrastive learning Self-supervised learning. ## 1 Introduction Recommender systems are a critical component of numerous online services, ranging from e-commerce to online advertising. As a classic approach, collaborative filtering (CF) [11, 19] plays a vital role in personalized preference prediction by representing user and item embeddings from observed user-item interactions such as clicks and conversions. Recently, enhanced by the powerful Graph Convolutional Networks (GCNs) [13], GCN-based recommendation methods [10, 22] have demonstrated significant potential in improving recommendation accuracy. GCN-based recommendation methods represent interaction data as graphs, such as the user-item interaction graph, and iteratively propagate neighborhood information to learn effective node representations. Compared to traditional CF methods, GCN-based recommendation methods are better equipped to capture higher-order collaborative signals, leading to improved user and item embedding learning. Despite the effectiveness, GCN-based recommendation methods still face data sparsity issue in real-word scenarios. Most existing models follow the supervised learning paradigm [1, 10, 22, 27], where the supervision signal is derived from the observed user-item interactions. However, the observed interactions are considerably sparse in contrast to the entire interaction space [24]. As a result, they may not be sufficient to acquire effective representations. Although several recent studies have attempted to alleviate the data sparsity of interaction data through contrastive learning [24, 28], they generally rely on pre-defined data augmentation strategies, such as uniformly dropping edges or shuffling features. 
These methods lack consideration of the significant disparity in the graph structure between head and tail nodes and lack the ability to construct adaptive data augmentation tailored for various recommendation datasets. This can lead to non-uniform representation distribution, which is a crucial factor for the performance of contrastive learning methods [21, 28]. In the light of the above limitations and challenges, we propose a novel **L**ong-tail **A**ugmented **G**raph **C**ontrastive **L**earning (LAGCL) method for recommendation. To illustrate, consider the head user in Fig. 1(a) and the tail user in Fig. 1(b) who share similar preference. Our approach aims to extract informative transition patterns from head users and adapt to tail users effectively, as shown in Fig. 1(c). Specifically, we first design an auto drop module to convert head nodes into pseudo-tail nodes that mimic the patterns of real-tail nodes. Next, we leverage the dropped neighbor information of pseudo-tail nodes to learn the knowledge transfer module, which then augments the tail nodes as pseudo-head nodes by adding predicted neighbor information. These modules are updated using the adversarial training mechanism to ensure the distribution match between Figure 1: An illustrated example depicting (a) a head node, (b) a tail node, and (c) a tail node augmented with predicted neighbor information. pseudo-head/tail nodes and real-head/tail nodes. Finally, we use an effective and efficient approach to generate contrastive views by injecting random noise into the graph embedding of both head nodes and augmented tail nodes, capitalizing on their uniformity to yield better contrastive performance. The main contributions of this paper are summarized as follows: * We propose a novel graph contrastive learning method, which encourages the model to learn the knowledge between head and tail nodes and generate uniform representation distribution for improving the GCN-based recommendation. * We designed a learnable data augmentation scheme that can adaptively enhance tail nodes representation and easily generalize to different GCN-based recommendation scenarios. * Extensive experiments are conducted on three public datasets, demonstrating our approach is consistently better than a number of competitive baselines, including GCN-based and graph contrastive learning-based recommendation methods. ## 2 Preliminaries ### GCN-based Recommendation Given the user set \(\mathcal{U}=\{u\}\) and the item set \(\mathcal{I}=\{i\}\), the observed user-item interaction data is denoted as \(\mathbf{R}\in\mathbb{R}^{|\mathcal{U}|\times|\mathcal{I}|}\), where each entry \(r_{u,i}=1\) if there exists an interaction between user \(u\) and item \(i\), otherwise \(r_{u,i}=0\). The number of nodes is \(n=|\mathcal{U}|+|\mathcal{I}|\). GCN-based recommendation methods formulate the available data as a user-item bipartite graph \(\mathcal{G}=(\mathcal{V},\mathbf{A})\), where \(\mathcal{V}=\mathcal{U}\cup\mathcal{I}\) and \(\mathbf{A}\in\mathbb{R}^{n\times n}\) is the adjacent matrix defined as \[\mathbf{A}=\begin{bmatrix}\mathbf{0}^{|\mathcal{U}|\times|\mathcal{U}|}& \mathbf{R}\\ \mathbf{R}^{T}&\mathbf{0}^{|\mathcal{I}|\times|\mathcal{I}|}\end{bmatrix}. \tag{1}\] With a slight abuse of notation, we use \(|\mathbf{A}_{i}|\) to refer to \(\sum_{j\in\mathcal{N}_{i}}\mathbf{A}_{ij}\), where \(\mathcal{N}_{i}\) denotes the neighbor set of node \(i\). 
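For concreteness, here is a minimal sketch (not from the paper) of how the bipartite adjacency matrix of Eq. (1) can be assembled from a toy interaction matrix \(\mathbf{R}\) with SciPy; the sizes and interactions are illustrative assumptions.

```python
# Minimal sketch of the bipartite adjacency construction in Eq. (1).
import numpy as np
import scipy.sparse as sp

n_users, n_items = 4, 5
rows = np.array([0, 0, 1, 2, 3, 3])   # user indices with observed interactions
cols = np.array([1, 4, 0, 2, 2, 3])   # item indices they interacted with
R = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n_users, n_items))

# A = [[0, R], [R^T, 0]] as in Eq. (1); A has shape (n_users + n_items)^2.
A = sp.bmat([[None, R], [R.T, None]], format="csr")

degrees = np.asarray(A.sum(axis=1)).ravel()  # D_ii: node degrees used later for the head/tail split
print(A.shape, degrees)
```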
GCN-based recommendation methods utilize graph structure information to aggregate and produce the embedding of users and items on bipartite graph \(\mathcal{G}\) through Eq. (2). \[\mathbf{H}^{(l)}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}} \mathbf{H}^{(l-1)}, \tag{2}\] where \(\mathbf{D}\in\mathbb{R}^{n\times n}\) is the diagonal degree matrix of \(\mathcal{G}\), in which each entry \(\mathbf{D}_{ii}\) denotes the number of non-zeros in the \(i\)-th row of the matrix \(\mathbf{A}\). \(\mathbf{H}^{(l)}\in\mathbb{R}^{n\times d}\) denotes the \(d\)-dimensional node embedding matrix after \(l\)-th graph convolution layer, and \(\mathbf{H}^{(0)}\) is the initial node embedding matrix that need to be learned. Finally, we combine all the \(L\) layers output node embeddings, _i.e.,_\(\mathbf{H}=f_{\text{readout}}([\mathbf{H}^{(0)};\mathbf{H}^{(1)};\cdots; \mathbf{H}^{(L)}])\), to generate preference scores between users and items for recommendation, while \(f_{\text{readout}}(\cdot)\) is the mean pooling operation here. ### Long-Tail Distribution in the Graph Graph convolutional networks heavily rely on rich structural information to achieve high performance. However, for nodes with low degrees, their number of neighbors is typically very small, leading to unsatisfactory performance for these nodes. In this paper, in order to investigate the long-tail distribution in the graph, we partition nodes into head nodes \(\mathcal{V}_{head}\) and tail nodes \(\mathcal{V}_{tail}\) based on their degree with a predetermined threshold value \(k\), i.e., \(\mathcal{V}_{head}=\{i:\mathbf{D}_{ii}>k\}\) and \(\mathcal{V}_{tail}=\{i:\mathbf{D}_{ii}\leq k\}\). Specifically, we have \(\mathcal{V}_{head}\cap\mathcal{V}_{tail}=\emptyset\) and \(\mathcal{V}_{head}\cup\mathcal{V}_{tail}=\mathcal{V}\). ## 3 Methodology In this section, we introduce the proposed long-tail augmented graph contrastive learning (LAGCL) model for recommendation. First, we briefly exhibit the overall architecture of LAGCL. Then, we elaborate the detail implementation of LAGCL. ### Overview of LAGCL Our framework follows the general contrastive learning paradigm, which aims to achieve maximum agreement between representations of different views. Fig. 2 illustrates the overall framework. Firstly, we enhance tail node by a knowledge transfer module and then generate two graph views on the augmented graph. Figure 2: Overall framework of our proposed long-tail augmented graph contrastive learning method for recommendation. To enhance readability, only user-side augmentation is shown. The core of the model is long-tail augmentation, which extracts informative transition from head nodes and augments tail nodes using the knowledge transfer module. The augmented graph is then used to generate the view pairs for contrastive learning. Finally, we train recommendation task, contrastive learning task and knowledge transfer constraints jointly using the multi-task training strategy. Next, we employ a contrastive loss function to conduct the contrastive learning task, which encourages representations of the same nodes in the two different views to be similar, while representations of different nodes in those views to be distinct. Finally, we adopt a multi-task training strategy to improve recommendation performance. Specifically, the main task is the recommendation task, while contrastive learning task and knowledge transfer constraints serve as auxiliary tasks. 
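Before detailing the long-tail augmentation, the following sketch illustrates the preliminaries above: the symmetric-normalized propagation of Eq. (2), the mean-pooling readout over layers, and the degree-threshold split into head and tail nodes. It uses dense PyTorch tensors, a toy graph, and an arbitrary threshold purely for illustration; it is not the paper's implementation.

```python
# Sketch of Eq. (2), the mean-pooling readout, and the head/tail split.
import torch

def propagate(A: torch.Tensor, H0: torch.Tensor, num_layers: int) -> torch.Tensor:
    """Mean over layer outputs H^(0), ..., H^(L) (LightGCN-style readout)."""
    deg = A.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    A_norm = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2
    outputs, H = [H0], H0
    for _ in range(num_layers):
        H = A_norm @ H                        # Eq. (2): H^(l) = D^-1/2 A D^-1/2 H^(l-1)
        outputs.append(H)
    return torch.stack(outputs).mean(dim=0)   # f_readout: mean pooling over layers

n, d, k = 9, 8, 3                             # toy graph size, embedding dim, degree threshold
A = (torch.rand(n, n) < 0.3).float()
A = torch.triu(A, diagonal=1)
A = A + A.T                                   # symmetric adjacency without self-loops
H0 = torch.randn(n, d, requires_grad=True)    # learnable initial embeddings H^(0)
H = propagate(A, H0, num_layers=3)

deg = A.sum(dim=1)
head_nodes = (deg > k).nonzero(as_tuple=True)[0]    # V_head
tail_nodes = (deg <= k).nonzero(as_tuple=True)[0]   # V_tail
print(H.shape, head_nodes.tolist(), tail_nodes.tolist())
```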
### Long-tail Augmentation Assuming that each node has a ground truth neighbor set, the difference between head and tail nodes lies in the former having a fully observed neighbor set while the latter has only partially observed neighbor information. This incomplete information adversely affects the performance of the model. Therefore, the technique of long-tail augmentation aims to supplement the tail nodes with missing neighbor information, with the aim of enforcing representation uniformity between head and tail nodes. However, conducting long-tail augmentation is a challenging task due to unavailability of ground truth labels for the missing neighbors of tail nodes. The intuition behind this approach is that head nodes have fully observed neighbor information. If we could generate pseudo-tail nodes by dropping the neighbor information of head nodes to mimic tail nodes, we could leverage the dropped neighbor information of pseudo-tail nodes to learn the long-tail augmentation strategy. Our long-tail augmentation boils down to three key modules: auto drop module, knowledge transfer module and generative adversarial learning module. #### 3.2.1 Auto Drop Module. An intuitive approach is to randomly sample neighbors from head nodes to generate pseudo-tail nodes. However, this method lacks an effective supervisory signal to guide the sampling process, leading to a distribution deviation between the enhanced data samples and the actual tail node samples. To improve the data augmentation process, we propose an auto drop module equipped with a trainable dropout strategy to minimize the distribution deviation between the enhanced data samples and the actual tail node samples. Specifically, the dropout strategy consists of two main steps: sorting by edge weight and selecting top-\(K\) neighbors based on their importance weight, which is calculated between node pairs as: \[\mathbf{S}=\left(\mathbf{H}^{(0)}\mathbf{W}_{s}{\mathbf{H}^{(0)}}^{T}\right) \odot\mathbf{A}, \tag{3}\] where \(\mathbf{S}\in\mathbb{R}^{n\times n}\), \(\mathbf{W}_{s}\in\mathbb{R}^{d\times d}\) is trainable parameters and \(\odot\) denotes the element-wise product. Then, to mimic real-tail nodes, we randomly choose the neighbor size \(k_{i}\) of head nodes uniformly from the range of tail node neighbor sizes \([1,k]\): \[k_{i}=\begin{cases}\text{Uniform}[1,k],&\mathbf{D}_{ii}>k,\\ \mathbf{D}_{ii},&\mathbf{D}_{ii}\leq k.\end{cases} \tag{4}\] Finally, inspired by the top-rank selection method in [5, 15], the new adjacency matrix \(\hat{\mathbf{A}}\) is constructed by selecting the top-\(K\) neighbors based on their importance weight, which will be used in the graph aggregation of tail nodes. The new adjacency matrix \(\hat{\mathbf{A}}\) is defined as: \[\hat{\mathbf{A}}_{ij}=\begin{cases}\frac{\exp(\delta\mathbf{S}_{ij})}{1+\exp( \delta\mathbf{S}_{ij})},&\mathbf{S}_{ij}\in\text{top-}k_{i}(\mathbf{S}_{i}), \\ 0,&\mathbf{S}_{ij}\notin\text{top-}k_{i}(\mathbf{S}_{i}),\end{cases} \tag{5}\] where \(\delta\) is a hyperparameter that controls the smoothness of the adjacency matrix \(\hat{\mathbf{A}}\), and the corresponding neighborhood of node \(i\) is denoted as \(\hat{\mathcal{N}}_{i}\). #### 3.2.2 Knowledge Transfer Module. After constructing the auto drop module, we can design a knowledge transfer module that leverages the dropped neighbor information of pseudo-tail nodes for predicting the missing neighbor information of real-tail nodes. 
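Before turning to the knowledge transfer details, here is a sketch of the auto drop module described above (Eqs. (3)-(5)): importance scores on existing edges, a uniformly sampled neighbor budget for head nodes, and smoothed top-\(k\) selection. Dense tensors, the threshold, and the smoothing factor are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of the auto drop module: Eq. (3) edge scores, Eq. (4) neighbor budget,
# Eq. (5) smoothed top-k selection.
import torch

def auto_drop(H0, A, k, delta):
    n, d = H0.shape
    W_s = torch.randn(d, d) * 0.01            # trainable in the full model; fixed here
    S = (H0 @ W_s @ H0.T) * A                 # Eq. (3): scores only on existing edges
    deg = A.sum(dim=1).long()
    A_hat = torch.zeros_like(A)
    for i in range(n):
        d_i = int(deg[i])
        if d_i == 0:
            continue
        # Eq. (4): head nodes keep a random budget in [1, k]; tail nodes keep all neighbors.
        k_i = int(torch.randint(1, k + 1, (1,))) if d_i > k else d_i
        neighbors = A[i].nonzero(as_tuple=True)[0]
        scores = S[i, neighbors]
        top = neighbors[scores.topk(k_i).indices]   # top-k_i neighbors by importance
        # Eq. (5): smoothed edge weights for the kept neighbors.
        A_hat[i, top] = torch.exp(delta * S[i, top]) / (1 + torch.exp(delta * S[i, top]))
    return A_hat

n, d, k = 9, 8, 3
A = (torch.rand(n, n) < 0.4).float()
A.fill_diagonal_(0)
H0 = torch.randn(n, d)
A_hat = auto_drop(H0, A, k=k, delta=1.0)
print(int(A_hat.count_nonzero()), "edges kept out of", int(A.count_nonzero()))
```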
Specifically, we use a multi-layer perceptron (MLP) function, denoted by \(f_{t}(\mathbf{h}_{i}^{(l)},\mathbf{h}_{\hat{\mathcal{N}}_{i}}^{(l)};\theta_{t }^{(l)})=\mathbf{m}_{i}^{(l)}\), to predict the missing neighbor information based on the center node features and observed neighbor information. Here, \(\mathbf{h}_{i}^{(l)}\) represents the \(l\)-layer graph embedding of node \(i\), \(\mathbf{h}_{\mathcal{N}_{i}}^{(l)}\) is the mean pooling representation of observable neighbors. Then, the predicted information is added to the neighbor aggregation process of real-tail node. We can calculate the node embedding of head node \(i\) in the sparse graph \(\hat{\mathbf{A}}\) with predicted neighbor information by the knowledge transfer function: \[\hat{\mathbf{h}}_{i}^{(l)}=\sum_{j\in\hat{\mathcal{N}}_{i}}\frac{1}{\sqrt{| \hat{\mathbf{A}}_{i}|}\sqrt{|\hat{\mathbf{A}}_{j}|}}\hat{\mathbf{h}}_{j}^{(l-1 )}+f_{t}(\hat{\mathbf{h}}_{i}^{(l-1)},\hat{\mathbf{h}}_{\hat{\mathcal{N}}_{i} }^{(l-1)},\theta_{t}^{(l-1)}), \tag{6}\] where \(\hat{\mathbf{h}}^{(0)}=\mathbf{h}^{(0)}\). The embedding of head node \(i\) in the original graph \(\mathbf{A}\) is \[\mathbf{h}_{i}^{(l)}=\sum_{j\in\mathcal{N}_{i}}\frac{1}{\sqrt{|\mathbf{A}_{i} |}\sqrt{|\mathbf{A}_{j}|}}\mathbf{h}_{j}^{(l-1)}. \tag{7}\] In order to train the knowledge transfer function, we define the translation loss is defined as follows: \[\mathcal{L}_{trans}=\sum_{i\in\mathcal{V}_{head}}\sum_{l=1}^{L}\|\mathbf{h}_{ i}^{(l)}-\hat{\mathbf{h}}_{i}^{(l)}\|_{2}^{2}. \tag{8}\] #### 3.2.3 Generative Adversarial Learning Module. To learn effective long-tail augmentation, the representation distribution of real-tail nodes should match the representation distribution of pseudo-tail nodes generated by the auto drop module, which is calculated as follows: \[\tilde{\mathbf{h}}_{i}^{(l)}=\sum_{j\in\hat{\mathcal{N}}_{i}}\frac{1}{\sqrt{| \hat{\mathbf{A}}_{i}|}\sqrt{|\hat{\mathbf{A}}_{j}|}}\tilde{\mathbf{h}}_{j}^{(l -1)}, \tag{9}\] where \(\tilde{\mathbf{h}}^{(0)}=\mathbf{h}^{(0)}\). Additionally, the distribution of pseudo-head nodes augmented by the knowledge transfer module should match that of real-head nodes. To achieve this, we use Generative Adversarial Networks [7]. The discriminator distinguishes pseudo-head/tail nodes from real-head/tail nodes based on the node representations, while the generator aims to provide information that is consistently classified as real nodes by the discriminator. Here we regard the output layer of LAGCL as the generator, which contests with the discriminator in the learning process. 
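Before the adversarial constraints given next, a sketch of the knowledge transfer step (Eqs. (6)-(8)): an MLP \(f_t\) predicts missing neighbor information from the center node and the mean of its observed neighbors, and a translation loss matches head-node embeddings computed on the sparsified graph to those on the full graph. All shapes, the toy graphs, and the loss normalization are assumptions, not the paper's code.

```python
# Sketch of the knowledge transfer module (Eq. 6) and translation loss (Eq. 8).
import torch
import torch.nn as nn

class KnowledgeTransfer(nn.Module):
    """MLP f_t predicting missing neighbor information from the center node
    embedding and the mean embedding of its observed neighbors."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h_center, h_neigh_mean):
        return self.mlp(torch.cat([h_center, h_neigh_mean], dim=-1))

def norm_aggregate(A, H):
    deg = A.sum(dim=1).clamp(min=1.0)
    return (A / (deg.sqrt().unsqueeze(1) * deg.sqrt().unsqueeze(0))) @ H

def neighbor_mean(A, H):
    deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
    return (A @ H) / deg

n, d = 9, 8
A = (torch.rand(n, n) < 0.4).float()
A.fill_diagonal_(0)
A_hat = A * (torch.rand(n, n) < 0.7).float()   # stand-in for the auto drop output
H = torch.randn(n, d)                          # layer (l-1) embeddings
f_t = KnowledgeTransfer(d)

h_full = norm_aggregate(A, H)                                           # Eq. (7)
h_sparse = norm_aggregate(A_hat, H) + f_t(H, neighbor_mean(A_hat, H))   # Eq. (6)

head = A.sum(dim=1) > 3                        # head nodes under a toy threshold k = 3
trans_loss = ((h_full[head] - h_sparse[head]) ** 2).sum(dim=1).mean()   # Eq. (8)-style
trans_loss.backward()
```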
In particular, we use the following loss for the tail nodes adversarial constraint: \[\mathcal{L}_{tail-disc}=\sum_{i\in\mathcal{V}} \mathbb{1}(i\notin\mathcal{V}_{tail})\textsc{CrossEnt}(\mathbf{0},f_{d}(\tilde{\mathbf{h}}_{i};\theta_{d})) \tag{10}\] \[+\mathbb{1}(i\in\mathcal{V}_{tail})\textsc{CrossEnt}(\mathbf{1},f_{d}(\mathbf{h}_{i};\theta_{d})),\] and use the following loss for the head nodes adversarial constraint: \[\mathcal{L}_{head-disc}=\sum_{i\in\mathcal{V}} \textsc{CrossEnt}(\mathbf{0},f_{d}(\hat{\mathbf{h}}_{i};\theta_{d })) \tag{11}\] \[+\mathbb{1}(i\in\mathcal{V}_{head})\textsc{CrossEnt}(\mathbf{1},f_{d}(\mathbf{h}_{i};\theta_{d})),\] where \(\textsc{CrossEnt}(\cdot)\) is the cross entropy function, \(\mathbb{1}(\cdot)\) is the indicator function, \(f_{d}(\cdot;\theta_{d})\) is the discriminator function parameterized by \(\theta_{d}\), which calculates the probability of a node being a head node, as \[f_{d}(\mathbf{h}_{i};\theta_{d})=\sigma\Big{(}\mathbf{w}_{d}^{ \top}\textsc{LeakyReLU}(\mathbf{W}_{d}\mathbf{h}_{i}+\mathbf{b}_{d})\Big{)}, \tag{12}\] where \(\textsc{LeakyReLU}(\cdot)\) is used as the activation function, \(\sigma(\cdot)\) is the sigmoid function, \(\theta_{d}=\{\mathbf{W}_{d}\in\mathbb{R}^{d\times d},\mathbf{b}_{d}\in\mathbb{ R}^{d\times 1},\mathbf{w}_{d}\in\mathbb{R}^{d\times 1}\}\) contains the learnable parameters of the discriminator \(f_{d}\). ### Contrastive Learning **View Generation.** With the knowledge transfer function mentioned above, we can obtain the augmented tail node embedding, as follows: \[\mathbf{h}_{i}^{(l)}=\sum_{j\in\mathcal{N}_{i}}\frac{1}{\sqrt{| \mathbf{A}_{i}|}\sqrt{|\mathbf{A}_{j}|}}\mathbf{h}_{j}^{(l-1)}+f_{t}(\mathbf{ h}_{i}^{(l-1)},\mathbf{h}_{\mathcal{N}_{i}}^{(l-1)};\theta_{t}^{(l-1)}). \tag{13}\] Then, we follow the approach of SimGCL [28] and generate different views by slightly rotating refined node embeddings in space. This method retains original information while introducing the InfoNCE loss as an additional self-supervised task to improve robustness, as follows: \[\mathbf{h}_{i}^{(l)^{\prime}}=\mathbf{h}_{i}^{(l)}+\Delta_{i}^{(l)^{\prime}}, \mathbf{h}_{i}^{(l)^{\prime\prime}}=\mathbf{h}_{i}^{(l)}+\Delta_{i}^{(l)^{ \prime\prime}}, \tag{14}\] where the noise vectors \(\Delta_{i}^{\prime}\) and \(\Delta_{i}^{\prime\prime}\) are subject to \(||\Delta||_{2}=\epsilon\) and \(\Delta=\bar{\Delta}\odot\text{sign}(\mathbf{h}_{i}^{(l)})\), \(\bar{\Delta}\in\mathbb{R}^{d}\sim U(0,1)\). We can use \(\epsilon\) to control the rotation angle of \(\mathbf{h}_{i}^{(l)^{\prime}},\mathbf{h}_{i}^{(l)^{\prime\prime}}\) compared to \(\mathbf{h}_{i}^{(l)}\). Since \(\mathbf{h}_{i}^{(l)},\mathbf{h}_{i}^{(l)^{\prime}},\mathbf{h}_{i}^{(l)^{\prime \prime}}\) always belong to the same hyperoctant, so adding noise will not cause significant deviation. The noise is injected into each convolution layer and we average each layer output as the final node embedding. To simply the notation, we denote \(\mathbf{h}_{i}\) as the final node embedding after \(L\) layers, \(\mathbf{h}_{i}^{\prime}\) and \(\mathbf{h}_{i}^{\prime\prime}\) as two generated views. #### 3.3.2 Contrastive Loss. 
After obtaining the generated views of each node, we utilize the contrastive loss, InfoNCE [8], to maximize the agreement of positive pairs and minimize that of negative pairs: \[\mathcal{L}_{cl}^{U}=\sum_{u\in\mathcal{U}}-\log\frac{\exp(s(\mathbf{h}_{i}^{ \prime},\mathbf{h}_{i}^{\prime\prime})/\tau)}{\sum_{v\in\mathcal{U}}\exp(s( \mathbf{h}_{i}^{\prime},\mathbf{h}_{j}^{\prime\prime})/\tau)}, \tag{15}\] where \(s(\cdot)\) measures the similarity between two vectors, which is set as cosine similarity function; \(\tau\) is the hyper-parameter, known as the temperature in softmax. Analogously, we obtain the contrastive loss of the item side \(\mathcal{L}_{cl}^{I}\). Combining these two losses, we get the objective function of self-supervised task as \(\mathcal{L}_{cl}=\mathcal{L}_{cl}^{U}+\mathcal{L}_{cl}^{I}\). ### Multi-task Training We leverage a multi-task training strategy to optimize the main recommendation task and the auxiliary tasks including translation task, discrimination task and contrastive learning task jointly: \[\mathcal{L}=\mathcal{L}_{rec}+\lambda_{1}\mathcal{L}_{trans}+\lambda_{2} \mathcal{L}_{disc}+\lambda_{3}\mathcal{L}_{cl}+\lambda_{4}||\Theta||^{2}, \tag{16}\] where \(\Theta\) is the set of model parameters, \(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}\) are hyperparameters to control the strengths of the diversity preserving loss. \(\mathcal{L}_{rec}\) is the loss function of the main recommendation task. In this work, we adopt Bayesian Personalized Ranking (BPR) loss [18]: \[\mathcal{L}_{rec}=\sum_{u,i,j\in\mathcal{O}}-\log\sigma(\hat{y}_{u,i}-\hat{y} _{u,j}), \tag{17}\] where \(\hat{y}_{u,i}=\mathbf{h}_{u}^{\top}\mathbf{h}_{i}\) is the preference score. \(\sigma(\cdot)\) denotes the sigmoid function. \(\mathcal{O}=\{(u,i,j)|(u,i)\in\mathcal{O}^{+},(u,j)\in\mathcal{O}^{-}\}\) denotes the training data, and \(\mathcal{O}^{-}\) is the unobserved interactions. ## 4 Experiments To verify the effectiveness of the proposed LAGCL, we conduct extensive experiments and report detailed analysis results. ### Experimental Setups #### 4.1.1 Datasets. We evaluate the LAGCL using three widely-used public benchmark datasets: Yelp20181, Amazon-Book2 and Movielens-25M3. The statistics of these datasets are presented in Table 1. Footnote 1: [https://www.yelp.com/dataset](https://www.yelp.com/dataset) Footnote 2: [https://cseweb.ucsd.edu/~jmcauley/datasets.html#amazon_reviews](https://cseweb.ucsd.edu/~jmcauley/datasets.html#amazon_reviews) Footnote 3: [https://grouplens.org/datasets/movielens/25m/](https://grouplens.org/datasets/movielens/25m/) #### 4.1.2 Evaluation Metrics. Due to our focus on Top-N recommendation, following the convention in the previous research[29], we discard ratings less than 4 in Movielens-25M, and reset the rest to 1. We split the interactions into training, validation, and testing set with a ratio of 7:1:2. In the test set, we evaluate the performance of each model using the relevancy-based metric Recall@20 and the ranking-aware metric NDCG@20. #### 4.1.3 Baselines. We compare LAGCL with other GCN-based recommendation methods, including: * **LightGCN[10]** designs a light graph convolution to improve training efficiency and representation ability. * **SGL[24]** designs an auxiliary tasks via perturbation to the graph structure (such as edge dropout), which achieves greater benifits in long-tail recommendation. * **NCL[16]** improves the recommendation performance by clustering similar nodes to provide semantic neighbors and structural neighbors. 
* **RGCF[20]** integrates a graph denoising module and a diversity preserving module to enhance the robustness of GCN-based recommendation. * **SimGCL[28]** proves the positive correlation between the uniformity of representations and the ability to debias through feature-level perturbation and contrastive learning, and achieved greater long-tail performance than SGL. #### 4.1.4 Settings and Hyperparameters. We develop the model using the open-source SELFRec4[29]. For a fair comparison, we prioritize the hyperparameters reported in the original papers for each baseline when feasible. In cases \begin{table} \begin{tabular}{l c c c c} \hline \hline Dataset & \#Users & \#Items & \#Interactions & Density \\ \hline Yelp2018 & 31,668 & 38,048 & 1,561,406 & 0.1296\% \\ Amazon-Book & 52,643 & 91,599 & 2,984,108 & 0.0619\% \\ Movielens-25M & 155,002 & 27,133 & 3,612,474 & 0.0859\% \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of three datasets. where this information is not provided, we conduct a grid search to adjust the hyperparameters. As for the general settings of all the baselines, the Xavier initialization[6] is used on all embeddings. The embedding size is 64, the parameter for \(L_{2}\) regularization is \(10^{-4}\) and the batch size is 2048. We use Adam[12] with the learning rate 0.001 to optimize all the models. More settings can be found in [https://github.com/im0qianqian/LAGCL](https://github.com/im0qianqian/LAGCL). ### Performance Comparison Table 2 shows our comparison results with other baselines in three datasets. We have the following observations: (1) Our proposed LAGCL consistently outperforms all baselines in different datasets and metrics. Specifically, LAGCL achieves a relative improvement of 25.6%, 61.8%, and 9.6% on Recall@20 compared to LightGCN on the Yelp2018, Amazon Book, and Movielens-25M datasets, respectively. Compared to the strongest baseline (SimGCL), LAGCL also achieves better performance, e.g, about 1.81%, 2.35%, 0.73% performance improvement of Recall@20 on the same datasets. (2) All graph contrastive learning based methods _e.g.,_ SGL, NCL, RGCF, SimGCL, show significant improvement compared to LightGCN on three datasets, which verifies the effectiveness of contrastive learning for collaborative filtering. SimGCL achieves better performance than other baselines, demonstrating that feature-level augmentation is more suitable for collaborative filtering tasks than structure-level augmentation. It is worth noting that our method predicts neighborhood information for tail nodes while preserving the original graph structure. LAGCL incorporates the advantages of feature-level augmentation and avoids the possibility of drastic changes in tail node information due to graph structural changes caused by structure-level augmentation. As a result, our method LAGCL achieves the state-of-the-art performance. ### Ablation study We conduct an ablation study to compare several ablated variants, including the "w/o AD" variant that uses random dropout instead of the auto drop module. 
We also consider the "w/o KT" variant that do not use knowledge transfer \begin{table} \begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Yelp2018} & \multicolumn{2}{c|}{Amazon Book} & \multicolumn{2}{c}{Movielens-25M} \\ & Recall@20 & NDCG@20 & Recall@20 & NDCG@20 & Recall@20 & NDCG@20 \\ \hline LightGCN & 0.0583 & 0.0486 & 0.0323 & 0.0254 & 0.3267 & 0.2276 \\ SGL & 0.0659(+13.0\%) & 0.054(+11.4\%) & 0.0443(+37.0\%) & 0.0352(+38.5\%) & 0.3471(+6.2\%) & 0.2404(+7.2\%) \\ NCL & 0.0663(+13.7\%) & 0.0547(+12.5\%) & 0.0426(+32.0\%) & 0.0331(+30.2\%) & 0.3292(+0.8\%) & 0.2306(+1.3\%) \\ RGCF & 0.0591(+1.5\%) & 0.0487(+0.1\%) & 0.0345(+6.9\%) & 0.0274(+7.9\%) & 0.3137(-4.0\%) & 0.2060(-9.5\%) \\ SimGCL & 0.0719(+23.4\%) & 0.0600(+23.4\%) & 0.0510(+57.9\%) & 0.0406(+59.8\%) & 0.3553(+8.8\%) & 0.2468(+8.4\%) \\ **LAGCL** & **0.0732**(**+**25.6\%)** & **0.0604**(**+**24.3\%)** & **0.0522**(**+**61.8\%)** & **0.0415**(**+**63.4\%)** & **0.3579**(**+**9.6\%)** & **0.2509**(**+**10.2\%)** \\ \hline \hline \end{tabular} \end{table} Table 2: Overall performance comparison. The percentage in brackets denote the relative performance improvement over LightGCN. The best results are bolded and the best results of baseline are underlined. module to augment the tail nodes, and the "w/o GAN" variants, which are generative adversarial networks proposed in Section 3.2. We have the following observations from Fig. 3. (1) Removing any of the components leads to a performance decrease, with the knowledge transfer module contributing the most. This demonstrates the importance of using knowledge transfer module to augment tail nodes is crucial in GCN-based recommendation scenarios. (2) Using random dropout instead of the auto drop module results in a decrease in performance. It indicates that auto drop module can help to extract informative transition patterns from head nodes that the random dropout strategy cannot learn. (3) Removing the generative adversarial networks significantly decreases the performance. This demonstrates that we cannot achieve meaningful data augmentation without the generative adversarial networks to ensure the distribution match between pseudo-head/tail nodes and real-head/tail nodes. ### Further Analysis of LAGCL #### 4.4.1 Distribution uniformity analysis. A key contribution of the proposed LAGCL is the utilization of long-tail augmentation to supplement the neighbor information of tail nodes for GCN-based recommendation. To better understand the benefits brought by LAGCL, we visualize the learned embeddings in Fig 4 to illustrate how the proposed approach affects representation learning. We use Gaussian kernel density estimation (KDE) [2] to plot user and item embedding distributions in two-dimensional space. Additionally, we visualize each node's density estimations on angles on the unit hypersphere \(\mathcal{S}^{1}\) (i.e., circle with radius 1). We can see that, the embeddings learned by LightGCN fall into serveral clusters located on narrow arcs. Graph contrastive learning-based methods exhibit a more uniform distribution than LightGCN, where SimGCL has a more uniform distribution than other structure-based graph contrastive learning-based methods (SGL, NCL and RGCF). When compared to the best baseline SimGCL, the Figure 3: Ablation study in the Yelp2018. LAGCL distribution has fewer dark areas on the circle, indicating that LAGCL has a more uniform distribution that benefits the model performance. 
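As a minimal sketch of the angle-based uniformity check used in this analysis, embeddings are normalized onto the unit circle and a Gaussian KDE over their atan2 angles is estimated; the random embeddings below merely stand in for learned user/item embeddings, and the 2D projection step of the actual figure is assumed away.

```python
# Sketch of the uniformity analysis: KDE over atan2(y, x) angles on S^1.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
emb = rng.normal(size=(2000, 2))                         # stand-in 2D embeddings
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)   # project onto the unit circle

angles = np.arctan2(emb[:, 1], emb[:, 0])                # atan2(y, x) for each point
kde = gaussian_kde(angles)
grid = np.linspace(-np.pi, np.pi, 200)
density = kde(grid)

# A flatter density over the grid indicates a more uniform angular distribution.
print("max/min density ratio:", float(density.max() / density.min()))
```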
Previous studies [21, 28] have shown a strong correlation between contrastive learning and the uniformity of learned representations. Thus, we speculate that a more uniform distribution of embeddings could endow the model with better capacity to capture diverse user preferences and item characteristics.

Figure 4: The distribution of user and item representations learned from the Yelp2018 dataset. The top of each figure plots the Gaussian Kernel Density Estimation (KDE) in \(\mathbb{R}^{2}\) (the darker the color is, the more points fall in that area). The bottom of each figure plots the KDE on angles (i.e., \(\text{atan2}(y,x)\) for each point \((x,y)\in\mathcal{S}^{1}\)).

#### 4.4.2 Long Tail Degree Analysis.

To verify whether LAGCL can provide additional performance gains for tail nodes, we divide users into 10 groups of equal size based on their node degree in the user-item bipartite graph, as shown in Fig. 5. The smaller the Group Id, the lower the node degree and the lower the user activity. We compare LAGCL with other baselines that alleviate the long-tail distribution in the graph, such as SGL, which employs structure-level augmentation, and SimGCL, which utilizes feature-level augmentation. The results show that the performance of these methods is very similar for high-activity users, while for low-activity users, LAGCL exhibits better performance. This proves that our method has significant gains for modeling tail users.

Figure 5: Performance comparison of different user groups. In addition to the classic method LightGCN, we also select two different approaches to alleviate the long-tail distribution in the graph, namely SGL and SimGCL.

**Degree threshold \(k\).** As shown in Fig. 6, we conduct a parameter sensitivity experiment on the threshold for dividing the head and tail nodes using the Yelp2018 dataset. The results show that our model achieves the best performance at \(k=20\). We conclude that: (1) LAGCL can learn a transfer strategy through the head users to help bring benefits to tail users. (2) A larger \(k\) will result in a smaller number of head users, which may cause the knowledge transfer module to be insufficiently learned. Conversely, a smaller \(k\) will result in significant quality differences within the head users, causing inaccurate learning in the knowledge transfer module. Therefore, choosing a suitable threshold for division can maximize the overall benefit.

Figure 6: Performance of LAGCL in the Yelp2018 when adjusting the degree threshold \(k\).

## 5 Related Work

### GCN-based Recommendation

GCN-based recommendation methods have treated interaction data as a bipartite graph and applied graph neural networks to obtain embeddings while capturing high-order information. For example, GC-MC [1] applies the Graph Convolution Network (GCN) on the user-item graph and employs one convolutional layer to exploit the direct connections between users and items. PinSage [27] is an industrial solution that leverages efficient random walks and graph convolutions to generate item embeddings in Pinterest. NGCF [22] models high-order connectivities by effectively injecting the collaborative signal into the embedding process explicitly. LightGCN [10] simplifies the design of GCN by removing feature transformation and nonlinear activation, making it more effective for recommendation purposes. However, all of these works perform training in the supervised learning paradigm, which heavily relies on labeled data.
Since the observed interactions are considerably sparse compared to the entire interaction space, they may not be sufficient to acquire effective representations. ### Self-supervised Learning in Recommender Systems Self-supervised learning is an emerging technique that leverages unlabeled data to achieve impressive success in various fields, including computer vision [3, 9], natural language processing [4, 14] and graph learning [25, 17, 26]. Inspired by the success of these works, self-supervised learning has also been applied to recommender systems, where it has shown great improvement [29]. For instance, \(S^{3}\)-Rec utilizes four self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence by maximizing the mutual information [30]. Similarly, CLCRec further utilizes contrastive learning to mitigate the cold-start issue by maximizing the mutual dependencies between item content and collaborative signals [23]. Moreover, a promising line of research has incorporated contrastive learning into graph-based recommenders to tackle the label sparsity issue with self-supervision signals. For example, SGL [24] uses node dropout, edge dropout and random walk on the user-item interaction graph to generate different views and maximize the agreement between them. NCL [16] incorporates structural and semantic neighbors to enhance graph-based collaborative filtering. SimGCL [28] generates contrastive views by adding uniform noise in the embedding space instead of graph augmentations. ## 6 Conclusion In this work, we propose a novel long-tail augmented graph contrastive learning (LAGCL) method for recommendation. Specifically, we introduce a learnable long-tail augmentation schema that enhances tail nodes by supplementing them with predicted neighbor information. To make the data augmentation schema learnable, we design an auto-drop strategy to generate pseudo-tail nodes from head nodes and a knowledge translation module to reconstruct the head nodes from pseudo-tail nodes. To ensure the effectiveness of the data augmentation, we leverage adversarial constraints to distinguish between pseudo-tail and real-tail nodes, as well as between augmented tail nodes and real-head nodes. Comprehensive experiments demonstrate the effectiveness of our proposed LAGCL. ## Ethics Statement Our work aims to improve the accuracy of recommendations using only exposure and click data, without collecting or processing any personal information. We recognize the potential ethical implications of recommendation systems and their impact on user privacy, and we have taken steps to ensure that our work adheres to ethical standards. We understand that recommendation systems have the potential to influence user behavior and may raise concerns related to user privacy. To address these concerns, we have designed our approach to rely solely on non-personal data, ensuring that our methods do not infringe on user privacy or rights. In summary, our work focuses on improving recommendation accuracy while upholding ethical standards related to user privacy. We believe that our approach can contribute to the development of recommendation systems that are both effective and ethical.
Graph Convolutional Networks (GCNs) have demonstrated promising results for recommender systems, as they can effectively leverage high-order relationships. However, these methods often encounter the data sparsity issue in real-world scenarios. To address this issue, GCN-based recommendation methods employ contrastive learning to introduce self-supervised signals. Despite their effectiveness, these methods lack consideration of the significant degree disparity between head and tail nodes, which leads to a non-uniform representation distribution, a crucial factor for the performance of contrastive learning methods. To tackle this issue, we propose a novel Long-tail Augmented Graph Contrastive Learning (LAGCL) method for recommendation. Specifically, we introduce a learnable long-tail augmentation approach to enhance tail nodes by supplementing predicted neighbor information.
2309.16618
Revisiting Neural Program Smoothing for Fuzzing
Testing with randomly generated inputs (fuzzing) has gained significant traction due to its capacity to expose program vulnerabilities automatically. Fuzz testing campaigns generate large amounts of data, making them ideal for the application of machine learning (ML). Neural program smoothing (NPS), a specific family of ML-guided fuzzers, aims to use a neural network as a smooth approximation of the program target for new test case generation. In this paper, we conduct the most extensive evaluation of NPS fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make the following contributions: (1) We find that the original performance claims for NPS fuzzers do not hold; a gap we relate to fundamental, implementation, and experimental limitations of prior works. (2) We contribute the first in-depth analysis of the contribution of machine learning and gradient-based mutations in NPS. (3) We implement Neuzz++, which shows that addressing the practical limitations of NPS fuzzers improves performance, but that standard gray-box fuzzers almost always surpass NPS-based fuzzers. (4) As a consequence, we propose new guidelines targeted at benchmarking fuzzing based on machine learning, and present MLFuzz, a platform with GPU access for easy and reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data are public.
Maria-Irina Nicolae, Max Eisele, Andreas Zeller
2023-09-28T17:17:11
http://arxiv.org/abs/2309.16618v1
# Revisiting Neural Program Smoothing for Fuzzing ###### Abstract. Testing with randomly generated inputs (fuzzing) has gained significant traction due to its capacity to expose program vulnerabilities automatically. Fuzz testing campaigns generate large amounts of data, making them ideal for the application of machine learning (ML). _Neural program smoothing_, a specific family of ML-guided fuzzers, aims to use a neural network as a smooth approximation of the program target for new test case generation. In this paper, we conduct the most extensive _evaluation_ of neural program smoothing (NPS) fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years), and make the following contributions: (1) We find that the original performance claims for NPS fuzzers _do not hold_; a gap we relate to fundamental, implementation, and experimental limitations of prior works. (2) We contribute the first _in-depth analysis_ of the contribution of machine learning and gradient-based mutations in NPS. (3) We implement Neuzz++, which shows that addressing the practical limitations of NPS fuzzers improves performance, but that _standard gray-box fuzzers almost always surpass NPS-based fuzzers_. (4) As a consequence, we propose _new guidelines_ targeted at benchmarking fuzzing based on machine learning, and present MLFuzz, a platform with GPU access for easy and reproducible evaluation of ML-based fuzzers. Neuzz++, MLFuzz, and all our data are public. fuzzing, machine learning, neural networks, neural program smoothing + Footnote †: ccs: Information systems for neural program smoothing + Footnote †: ccs: Information systems for neural program smoothing the results from the original papers_. We explain this performance gap by outdated or incorrect experimental practices in prior work. 3. We reimplement Neuzz as a custom mutator for AFL++ and show that fixing practical limitations of NPS significantly improves fuzzing performance. Nevertheless, we find that neural program smoothing methods _are outperformed by state-of-the-art gray-box fuzzers_, despite their use of additional computation resources. 4. Based on our findings, we propose _better-suited guidelines_ for evaluating ML-enhanced fuzzing, and present _MLFuzz_, the first fuzzing benchmarking framework with GPU support dedicated to ML-based fuzzing. MLFuzz allows for easy, reproducible evaluation of fuzzers with or without machine learning, similar to standard practices used by FuzzBench (FuzzBench, 2018). The remainder of the paper is structured as follows. Section2 introduces prior work on coverage guided fuzzing and neural program smoothing, before tackling our main analysis on limitations of neural program smoothing in Section3. Section4 presents our implementation of NPS fuzzing and the benchmarking platform. Section5 covers experiments, followed by new experimental guidelines in Section6. We conclude this work in Section7. All our results and code are publicly available (Section8). ## 2. Background Coverage-guided fuzzing.Coverage-guided fuzzers explore the input space of a program starting from a few sample inputs called seeds. They mutate the seeds into new test cases based on a _fitness criterion_, which rewards reaching new code coverage obtained by gray-box access through binary instrumentation. Test cases that increase coverage are kept in the corpus to be evolved further. Over time, the input corpus and the total code coverage grow. 
During execution, the fuzzer checks the target program for unwanted behavior, notably crashes and hangs. Popular coverage-guided fuzzers are American Fuzzy Lop (AFL) (Liang et al., 2017), its successor AFL++ (Liang et al., 2018), and libFuzzer (Liang et al., 2019). Alongside basic mutations, most gray-box fuzzers use the _havoc_ mutation strategy, where a fixed number of randomly chosen atomic mutations are chained to a more complex mutation (Liang et al., 2018). Motivated by the success of havoc in modern fuzzers, \(\text{Havoc}_{\text{MAB}}\) (Liang et al., 2019) was designed to implement the havoc strategy as a two-layer multi-armed bandit (Beng et al., 2019). Despite the trivial reward function used by the bandit, \(\text{Havoc}_{\text{MAB}}\) claims to significantly improve code coverage over random havoc in extensive benchmarks. Fuzzing with machine learning. ML has been applied to various tasks in the fuzzing loop. Neural byte sieve (Srivastava et al., 2017) experiments with multiple types of recurrent neural networks that learn to predict optimal locations in the input files to perform mutations. Angora (Krizhevsky et al., 2014) uses byte-level taint tracking and gradient descent to mutate test cases towards new coverage. FuzzerGym (Fuzzer et al., 2019) and Bortinger et al. (Bortinger et al., 2020) formulate fuzzing as a reinforcement learning problem that optimizes coverage. In parallel to mutation generation, machine learning is a natural fit for generating test cases directly. Skyfire (Srivastava et al., 2017) learns probabilistic grammars for seed generation. Learn&Fuzz (Liang et al., 2018) uses a sequence-to-sequence model (Liang et al., 2019) to implicitly learn a grammar to produce new test cases. GANFuzz (Srivastava et al., 2017) uses generative adversarial networks (GANs) (Krizhevsky et al., 2014) to do the same for protocols. DeepFuzz (Krizhevsky et al., 2014) learns to generate valid C programs based on a sequence-to-sequence model for compiler fuzz testing. The application of ML to fuzzing is covered more extensively in (Srivastava et al., 2017; Bortinger et al., 2020). Neural program smoothing. Program smoothing (Han et al., 2017; Han et al., 2017) was initially introduced as a way to facilitate program analysis and overcome the challenges introduced by program discontinuities. Among the uses of machine learning in fuzzing, neural program smoothing is one of the most recent and popular methods, due to its great performance in the original studies. Neuzz (Neuzz, 2018) trains a neural network to serve as a smooth approximation of the original program in terms of code coverage (Figure 1). First, all test cases (2) from the corpus (1) are executed on the instrumented program (3) to obtain their individual code coverage (4), i.e. edge coverage from afl-showmap. The respective pairs of test case and coverage are then used to train a neural network (5), which learns to predict the coverage for each test case. Being smooth and differentiable, the neural network can be used for computing _gradients_, the values of derivatives of the program w.r.t. its inputs. These indicate the direction and rate of fastest increase in the function value and can be used to flip specific edges in the bitmap from zero to one (6). Each gradient corresponds to one byte in the input. The locations with the highest gradient values are mutated (7) to propose new test cases (8) that should reach the targeted regions of the code.
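To make the pipeline above concrete, the following is a small sketch of the core NPS idea: a neural network is fit to map test-case bytes to an edge-coverage bitmap, gradients of a chosen target edge's predicted coverage are taken with respect to the input bytes, and the bytes with the largest gradient magnitudes are mutated. Shapes, the toy data, and the mutation rule are illustrative assumptions and not the exact Neuzz implementation.

```python
# Sketch of neural program smoothing: approximate "bytes -> coverage bitmap"
# with a small network, then mutate the bytes with the largest gradients for a
# chosen target edge. Toy data; not the exact Neuzz/PreFuzz implementation.
import torch
import torch.nn as nn

input_len, n_edges = 64, 32
seeds = torch.randint(0, 256, (200, input_len)).float() / 255.0    # toy corpus, bytes scaled to [0, 1]
coverage = (torch.rand(200, n_edges) < 0.2).float()                # toy edge-coverage bitmaps

model = nn.Sequential(nn.Linear(input_len, 128), nn.ReLU(), nn.Linear(128, n_edges))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(200):                      # train the smooth surrogate of the program
    opt.zero_grad()
    loss = loss_fn(model(seeds), coverage)
    loss.backward()
    opt.step()

# Gradient-guided mutation for one seed and one target edge.
x = seeds[0].clone().requires_grad_(True)
target_edge = 7
model(x)[target_edge].backward()          # d(predicted edge coverage) / d(input bytes)
top_bytes = x.grad.abs().topk(5).indices  # "hot bytes" with the largest gradient magnitude
mutant = (x.detach() * 255).round()
mutant[top_bytes] = torch.where(x.grad[top_bytes] > 0,   # move bytes along the gradient sign
                                torch.tensor(255.0), torch.tensor(0.0))
print("mutated byte positions:", top_bytes.tolist())
```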
This idea is inspired by adversarial examples, more precisely FGSM (Krizhevsky et al., 2014), where a change in the input in the direction of the sign of the gradient is sufficient to change the model outcome. MTFuzz (Nicolae et al., 2019) extends Neuzz with multitask learning (Nicolae et al., 2019): the neural network is trained against three types of code coverage instead of only edge coverage. _Context-sensitive coverage_(Krizhevsky et al., 2014; Srivastava et al., 2017) distinguishes between distinct caller locations for the same covered edge, while _approach-sensitive coverage_(Beng et al., 2019) introduces a third possible value in the coverage bitmap reflecting when an edge was nearly covered because the execution has reached a neighboring edge. The three types of coverage help learn a joint embedding that is used to determine interesting bytes for mutation in the test case. The bytes are ranked using a saliency score, which is computed as the sum of gradients for that byte in the learned embedding space. Each "hot byte" is mutated by trying out all possible values, without further relying on the gradients. Figure 1. Neural program smoothing for fuzzing. PreFuzz (Zhang et al., 2017) attempts to solve some limitations of Neuzz and MTFuzz by extending Neuzz in two ways. The program instrumentation is changed to include all neighboring edges of covered ones in the bitmap. This information is used to probabilistically choose which edge to target next for coverage, with the end goal of encouraging diversity in edge exploration. Additionally, the success of havoc mutations (Neuzz, 2017) is leveraged: after the standard Neuzz mutation, havoc is applied probabilistically to pre-defined segments of bytes in the test case, according to their gradient value. ## 3. Analyzing Neural Program Smoothing In this section, we provide our main analysis of neural program smoothing, covering both the concepts behind NPS, as well as existing fuzzer implementations. We tackle three orthogonal perspectives: (i) conceptual or fundamental, (ii) implementation and usability, and (iii) experimental considerations. ### Conceptual Limitations **(C1) Approximation errors of the neural network.** Being an empirical process, neural network training can suffer from errors introduced in the training process by, e.g., limited training data and training time, or sensitivity to hyperparameters. Even in the ideal case, _being a smooth approximation, the NPS model will always differ from the actual program exactly at the most interesting points_, i.e., discontinuities, branches, and jumps. This approximation error is intrinsic to a smoothing approach and, at the same time, what allows NPS methods to use gradients and numeric optimization towards producing new inputs. **(C2) Capacity to reach targeted edges.** Arguably, the most salient research question to elucidate about neural program smoothing is whether the gradient-guided mutation can indeed reach the targeted edges. As NPS is based on multiple components (Figure 1), the overall performance of the fuzzer critically depends on the effectiveness of its individual components: 1. The prediction accuracy of the neural network (5); 2. The capacity of the gradient-based mutations (7) to achieve the expected new coverage on the target program. The experiments we perform later in the paper show that the machine learning component as used by neural program smoothing has impaired performance. 
To the best of our knowledge, prior NPS studies have not assessed what the model was learning and whether it was reaching its objective.

**(C3) Incomplete coverage bitmaps.** Another central limitation of neural program smoothing that we uncover relates to the incompleteness of the coverage bitmaps that the neural network receives. All NPS fuzzers retrieve covered edges through afl-showmap, which only reports the edge IDs that are reached. When the coverage information from all seeds is put together for the overall bitmap used for training the neural network, it thus only contains edges that were reached at least once by any of the seeds. As such, unseen edges are not part of the bitmap and cannot be explicitly targeted and discovered by the model. In practice, if the neural network does discover new edges, it is rather inadvertently due to randomness. While having access to only an incomplete coverage bitmap is a conceptual limitation, it can be addressed on an implementation level: it is sufficient to change the instrumentation of the program to include uncovered edges. Among existing NPS fuzzers, PreFuzz is the only one that considers information about neighbors of reached edges in the coverage bitmap, albeit not motivated by the limitation we uncover. Their goal is rather to be able to choose the next edge to target in a probabilistic fashion, depending on the degree of coverage of each edge and its neighbors.

The fundamental limitations uncovered in this section, while some are easier to solve than others, are what we see as the main obstacles to the adoption of NPS-based fuzzing in practice. As will be confirmed in Section 5, the experiments are consistent with these limitations.

### Implementation and Usability Limitations

We now turn to practical aspects that make existing approaches to neural program smoothing inconvenient to use, such that an independent evaluation requires major effort and code rewriting.

**(I1) Use of outdated components.** Existing implementations of neural program smoothing (Zhu et al., 2017; Wang et al., 2017; Zhang et al., 2017), along with HavocMAB (Zhang et al., 2017), are implemented as extensions of AFL instead of using the more recent, more performant AFL++ as base. Moreover, their dependency on outdated Python, TensorFlow, and PyTorch versions impacts usability. For the purpose of experiments, we have patched the code and updated the dependencies of all these fuzzers, as even for the most recent ones, some of their used libraries were already not available at the time of their publication.

**(I2) Difficulty in building targets.** Prior NPS studies provided the binaries used in their own research, ensuring reproducibility. However, for a fuzzer to be practical, it is advisable to rather provide instructions on how to build new programs for its use. This is especially important when the fuzzer uses custom target instrumentation. MTFuzz (Zhu et al., 2017), for instance, compiles a target program in five different ways due to the introduction of three additional types of instrumentation. For this reason, we exclude MTFuzz from our empirical study as not being practical for real-world fuzzing. Moreover, we argue that the three types of coverage used by MTFuzz are to a large extent redundant (a conceptual limitation) and could be grouped into a unified coverage, thus reducing the build effort for this fuzzer.
**(I3) Use of magic numbers.** The magic numbers programming antipattern (Mikolov et al., 2015) is frequently encountered in the implementations of neural program smoothing-based fuzzers. These values and other algorithmic changes are not mentioned in the original papers where each NPS fuzzer is introduced. It is thus difficult to establish whether the performance of each method is strictly linked to its proposed algorithm or rather to the implementation tweaks. E.g., the maximum number of mutation-guiding gradients per seed is set to 500; this value is not a parameter of the algorithm presented in the paper.

Our findings above show that the effort to set up existing NPS fuzzers and build targets for them is significantly higher than for standard gray-box fuzzers, such as AFL and its variants, or libFuzzer.

### Evaluation Limitations

In this section, we highlight flaws and limitations of previous experimental evaluations of NPS fuzzers and HavocMAB, which have led to unrealistic performance claims.

**(E1) Experimental protocol.** The more recent NPS publications (Wang et al., 2017; Zhang et al., 2018) _lack comparisons with recent gray-box fuzzers_, such as AFL++ and libFuzzer--fuzzers that were available and confirmed as state-of-the-art long before their publication. HavocMAB (Zhang et al., 2018) has included Neuzz and MTFuzz in their evaluation alongside AFL++. However, we find that they use the same binary target for both AFL and AFL++, instead of building the program separately for AFL++. AFL++ runs on AFL-instrumented binaries, but not efficiently. Moreover, the size of the coverage bitmap is usually larger with AFL++ than with AFL instrumentation; hence, code coverage as measured by the fuzzers is not directly comparable. This makes the conclusions in the HavocMAB evaluation (Zhang et al., 2018) questionable.

**(E2) Fuzzer configuration for speed.** We note that prior studies benchmarking NPS methods compile their targets using afl-gcc, which results in slower targets and thus impacts fuzzing speed. The AFL++ documentation recommends using preferably afl-clang-fast or afl-clang-lto (Fang et al., 2018). Additionally, AFL-based fuzzers have multiple options for transferring fuzz data to the program. The most basic is to have AFL write test cases to file, and the target program executed with command line options to process the file as input. The more sophisticated and recommended _persistent_ mode uses a fuzzing harness that repeatedly fetches fuzz data from AFL via shared memory and executes the function with the test data as input without restarting the whole program. "_All professional fuzzing uses this mode_", according to the AFL++ manual (Beng et al., 2018). Depending on the target, the persistent mode can increase the throughput by 2-20\(\times\) (Fang et al., 2018). Previous neural smoothing papers seem to run all experiments by feeding inputs via files, which should considerably slow down all fuzzers. This is consistent with their results, where the more modern AFL++ consistently performs worse than AFL in the HavocMAB study (Zhang et al., 2018), and the targets are invoked with command line arguments in the original Neuzz paper (Wang et al., 2018). We conjecture that this tips the scale in favor of ML-based fuzzers, which are themselves orders of magnitude slower than modern fuzzers (Zhang et al., 2018). This statement is validated experimentally in Section 5.7.
## 4. Implementing Neuzz++ and MLFuzz

In this section, we introduce _Neuzz++_, our implementation of neural program smoothing that aims to solve some limitations identified in Section 3, as well as the new experimental platform for evaluating ML-based fuzzers.

_Neuzz++._ We implement a variation of Neuzz as a custom mutator for AFL++, which we name Neuzz++ (see Figure 2). This allows our method to leverage most AFL++ features, like its standard mutations and power schedule. More importantly, it allows machine learning-produced test cases and randomly mutated ones to evolve from each other. We choose AFL++ as the base for our implementation for its state-of-the-art performance, thus addressing Issue I1. Being a custom mutator, Neuzz++ is modular, easy to build, and integrated with a default AFL++ installation. In practice, Neuzz++ consists of two parts: the main AFL++ process with the custom mutator implemented in C, and a Python extension that is called for machine learning operations. The two processes communicate using named pipes. We set a minimum requirement of \(T\) test cases in the corpus for the custom mutator to run. These are used to train the neural network for the first time; the model is retrained at most every hour if at least ten new test cases have been added to the corpus1. This allows the model to be refined over time with new coverage information from recent test cases. In practice, we use \(T=200\); this value is tuned experimentally and aims to strike a balance across all targets between fuzzing with machine learning as early as possible and waiting for enough data to be available for model training. Intuitively, a larger dataset produces a better performing model. afl-showmap is used to extract the coverage bitmap. We introduce a coverage caching mechanism for model retraining which ensures that coverage is computed only for new test cases that were produced since the last model training. Each time the C custom mutator is called by AFL++, it waits for the Python component to compute and send the gradients of the test case. Based on these, the mutations are computed by the C mutator and returned to AFL++. In contrast to Neuzz, the gradients are not precomputed per test case, they are not saved to disk, the neural network is kept in memory, and the gradients are computed only on demand. These optimizations minimize the time spent on ML-related computations, keeping more time for fuzzing.

Footnote 1: Neuzz and PreFuzz solve this issue by running AFL for the first hour of fuzzing, then use the collected data for model training (Figure 2).

The neural network is a multi-layer perceptron (MLP) with the same structure as Neuzz (one hidden layer, 4096 neurons). As shown in the PreFuzz paper (Zhang et al., 2018), we also found that different neural network architectures do not improve fuzzing performance. In contrast to NPS fuzzers, we keep 10% of the test cases as a validation set for evaluating the performance of the model. We use the Adam optimizer (Kingma et al., 2014), a learning rate of \(10^{-4}\), and cosine decay with restarts. It is easy to parallelize model training and the main AFL++ routine for improved fuzzing effectiveness when testing real targets. However, for experimental evaluation, we choose to have AFL++ wait for the neural network to train, similarly to previous implementations of neural program smoothing fuzzers. This allows for fair experimental comparison and computation resource allocation.
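The model and training configuration described above can be summarized in the following minimal PyTorch sketch; the hidden-layer width, 10% validation split, optimizer, learning rate, and scheduler follow the text, while the batch size, number of epochs, and restart period are our own assumptions.

```python
# Sketch of the Neuzz++ coverage model and training loop described above (several details assumed).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

def build_model(input_len, num_edges, hidden=4096):
    # One hidden layer with 4096 neurons, as in Neuzz / Neuzz++.
    return nn.Sequential(nn.Linear(input_len, hidden), nn.ReLU(),
                         nn.Linear(hidden, num_edges))

def train_coverage_model(inputs, bitmaps, epochs=50, batch_size=32):
    """inputs: (N, input_len) float tensor; bitmaps: (N, num_edges) tensor in {0, 1}."""
    dataset = TensorDataset(inputs, bitmaps)
    n_val = max(1, int(0.1 * len(dataset)))                # 10% holdout validation set
    train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])
    model = build_model(inputs.shape[1], bitmaps.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Cosine decay with (warm) restarts; the restart period of 10 epochs is an assumption.
    sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=10)
    loss_fn = nn.BCEWithLogitsLoss()                       # multi-label binary classification per edge
    for _ in range(epochs):
        model.train()
        for x, y in DataLoader(train_set, batch_size=batch_size, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        sched.step()
        model.eval()
        with torch.no_grad():
            val_loss = sum(loss_fn(model(x), y).item()
                           for x, y in DataLoader(val_set, batch_size=batch_size))
    return model, val_loss
```

In Neuzz++, such a training run is repeated at most once per hour, and only when at least ten new test cases have been added to the corpus since the last model training.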
The original Neuzz implementation applies four different mutation patterns on each byte selected according to the highest ranking gradients: incrementing the byte value until 255, decrementing the byte value down to 0, inserting a randomly sized chunk at the byte location, and deleting a randomly sized chunk starting at the given byte location. We apply the same mutation patterns for Neuzz++.

Figure 2. Operation mode of previous NPS-guided fuzzers and our Neuzz++.

_MLFuzz._ MLFuzz serves as a benchmarking framework for building test targets, running fuzzing trials in an isolated environment, and analyzing the findings. Its main features are:

* Test targets from Google Fuzzer Test Suite (Fuzzer, 2017) are compiled with the recommended and most recent compiler of the appropriate fuzzer; the build scripts are made available (addressing Issues I2 and E2).
* Targets are compiled with AddressSanitizer (Kumar et al., 2019) to detect memory errors.
* Six fuzzers are currently included in MLFuzz: AFL v2.57b, AFL++ v3.15a, HavocMAB, Neuzz, PreFuzz and our Neuzz++.
* The implementation is containerized via Docker (Docker, 2017). Python dependency specification is handled via virtual environments and Poetry (Docker, 2018).
* Each fuzzing trial runs on one dedicated CPU and optionally one GPU for fuzzers that support it.
* All supported fuzzers have been modified to accept seeding of their random number generator for reproducible results.
* For all fuzzers, coverage is measured by replaying the corpus at the end of a run. We use binaries instrumented with AFL to ensure we do not disadvantage the AFL-based fuzzers, and afl-showmap from AFL++, since it has a larger bitmap with fewer hash collisions.
* Test cases are transmitted to fuzzers via shared memory, with the option to switch to slow transmission of test cases via the file system (addresses Issue E2).

## 5. Experiments

This section introduces our experiments and practical analysis, complementing the main findings from previous sections. After presenting our setup (Section 5.1), we assess the performance of the components of NPS-based fuzzers in Section 5.2. We compare our Neuzz++ to prior neural program smoothing fuzzers and standard gray-box fuzzers in an extensive benchmark in Section 5.3. Sections 5.4 to 5.6 explore the added benefit of machine learning to NPS fuzzers, while Section 5.7 sheds light on experimental protocol differences with previous NPS publications and their impact on fuzzing results. Finally, we report bugs found in Section 5.8.

### Experimental Setup

All experiments are performed on a server running Ubuntu 20.04 with four Nvidia Titan Xp GPUs. Our study includes the six fuzzers from MLFuzz: AFL and AFL++ as standard gray-box fuzzers, HavocMAB as a recent fuzzer claiming state-of-the-art performance, and the NPS fuzzers Neuzz, PreFuzz, and our own Neuzz++. We use the original implementation and parameters provided by the authors for all baselines, except when stated otherwise. We patch the code of Neuzz and PreFuzz to port them to Python 3.8.1, CUDA 11.5, TensorFlow 2.9.1 (Abadi et al., 2017) and PyTorch 1.4 (Krizhevsky et al., 2017), as the original implementations are based on outdated libraries that are not available anymore or incompatible with our hardware. We choose Google Fuzzer Test Suite (Fuzzer, 2017) and FuzzBench (Krizhevsky et al., 2017) as standard, extensive benchmarks for our experimental evaluation. We make use of 23 targets, summarized in Table 1.
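As a side note to the method description, the following minimal sketch illustrates the four gradient-guided mutation patterns of Neuzz and Neuzz++ described in Section 4 above; the chunk-size bound and the enumeration order are assumptions of ours, not the original implementation.

```python
# Illustrative sketch of the four Neuzz/Neuzz++ mutation patterns (parameters assumed).
import random

def mutations_for_byte(data: bytes, pos: int, max_chunk: int = 64):
    """Yield mutated copies of `data` derived from the hot byte at `pos`."""
    # Pattern 1: increment the byte value up to 255.
    for v in range(data[pos] + 1, 256):
        yield data[:pos] + bytes([v]) + data[pos + 1:]
    # Pattern 2: decrement the byte value down to 0.
    for v in range(data[pos] - 1, -1, -1):
        yield data[:pos] + bytes([v]) + data[pos + 1:]
    # Pattern 3: insert a randomly sized chunk at the byte location.
    chunk = bytes(random.randrange(256) for _ in range(random.randint(1, max_chunk)))
    yield data[:pos] + chunk + data[pos:]
    # Pattern 4: delete a randomly sized chunk starting at the byte location.
    cut = random.randint(1, max_chunk)
    yield data[:pos] + data[pos + cut:]

# Example: enumerate candidate test cases for one hot byte of a seed.
seed = bytes(range(200))
candidates = list(mutations_for_byte(seed, pos=10))
```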
These are selected for being accessible, having dependencies available on Ubuntu 20.04, and being non-trivial to cover through fuzz testing. Note that we only include targets from FuzzBench if they are not already included in Fuzzer Test Suite. All results are reported for 24 hours of fuzzing. We repeat each experiment 30 times to account for randomness, unless stated otherwise. Each standard gray-box fuzzer is bound to one CPU core, while NPS fuzzers are allotted one CPU and one GPU per trial. The main metrics used for evaluation are code coverage and number of bugs found. For code coverage, we use edge coverage as defined by the AFL family of fuzzers. However, we emphasize that AFL and AFL++ compute edge coverage differently. In order to avoid the measuring errors introduced when ignoring this aspect, we count coverage by replaying the corpus using afl-showmap from AFL++ on the same binary, independently of which fuzzer was used in the experiment. The setup we use fixes all experimental limitations we highlighted in Section 3.3 (Issues E1 and E2).

\begin{table} \begin{tabular}{l l l l} \hline \hline Target & Format & Seeds\({}^{a}\) & LOC\({}^{b}\) \\ \hline \multicolumn{4}{l}{**Source: Fuzzer Test Suite**} \\ \hline boringssl-2016-02-12 & SSL private key & 107 & 102793 \\ freetype2-2017 & TTF, OTF, WOFF & 2 & 95576\({}^{c}\) \\ guetzli-2017-3-30 & JPEG & 2 & 6045 \\ harfbuzz-1.3 & TTF, OTF, TTC & 58 & 21413 \\ json-2017-02-12 & JSON & 1 & 23328 \\ lcms-2017-03-21 & ICC profile & 1 & 33920 \\ libarchive-2017-01-04 & archive formats & 1 & 141563 \\ libjpeg-turbo-07-2017 & JPEG & 1 & 35922 \\ libpng-1.2.56 & PNG & 1 & 24621 \\ libxml2-v2.9.2 & XML & 0 & 203166 \\ openssl-1.0.2d & DER certificate & 0 & 262547 \\ pcre2-10.00 & Perl regex & 0 & 67333 \\ proj4-2017-08-14 & custom & 44 & 6156 \\ re2-2014-12-09 & custom & 0 & 21398 \\ sqlite-2016-11-14 & custom & 0 & 122271 \\ vorbis-2017-12-11 & OGG & 1 & 17584 \\ woff2-2016-05-06 & WOFF & 62 & 2948 \\ \hline \multicolumn{4}{l}{**Source: FuzzBench**} \\ \hline bloaty & ELF, Mach-O, etc. & 94 & 690642 \\ curl & comms. formats & 41 & 153882 \\ libpcap & PCAP & 1287 & 56663 \\ openh264 & H.264 & 174 & 97352 \\ stb & image formats & 467 & 71707 \\ zlib & zlib compressed & 1 & 30860 \\ \hline \hline \end{tabular}

* \({}^{a}\) Targets that do not have seeds use the default from FuzzBench.
* \({}^{b}\) Retrieved with cloc (Kumar et al., 2019).

\end{table} Table 1. Target programs from Google Fuzzer Test Suite (Fuzzer, 2017) and FuzzBench (Krizhevsky et al., 2017).

### Performance of Machine Learning Models

We now investigate the quality of coverage predictions by the neural network and gradient-based mutations, in relation to concerns about the fundamental principle of neural program smoothing (Section 3.1). We tackle the following questions:

* Can the neural network learn to predict edge coverage?
* Can gradient-based mutations reach targeted edges?

To this end, we propose quantitative and qualitative analyses of the performance of the neural network in neural program smoothing fuzzers. Without loss of generality, we investigate these based on Neuzz++ as a proxy for all neural program smoothing fuzzers included in our study. As all these methods use the same neural network architecture, loss function, method of training, etc., it is to be expected that their models will achieve the same performance when trained on the same dataset.
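For reference, the coverage bitmaps analyzed in this section (and used for model training) are obtained by replaying each test case through afl-showmap; the sketch below shows one way such a bitmap can be assembled. The exact command line, the output parsing, and the assumption that the target takes the input file as its last argument are ours, not MLFuzz code.

```python
# Assumed sketch: build a corpus-wide edge-coverage bitmap by replaying test cases with afl-showmap.
import os
import subprocess
import tempfile
from pathlib import Path

def edges_for_input(target_cmd, input_path):
    """Run one test case through afl-showmap and return the set of covered edge IDs."""
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        map_path = tmp.name
    try:
        subprocess.run(["afl-showmap", "-q", "-o", map_path, "--", *target_cmd, str(input_path)],
                       check=False)
        # afl-showmap writes one "edge_id:hit_count" pair per line.
        with open(map_path) as f:
            return {int(line.split(":")[0]) for line in f if ":" in line}
    finally:
        os.unlink(map_path)

def corpus_bitmap(target_cmd, corpus_dir):
    """Return ({test case: covered edges}, sorted list of all edges seen in the corpus)."""
    per_case = {p: edges_for_input(target_cmd, p) for p in Path(corpus_dir).iterdir()}
    all_edges = sorted(set().union(*per_case.values())) if per_case else []
    # Edges never reached by any test case cannot appear here (Issue C3).
    return per_case, all_edges

# Example: per_case, edges = corpus_bitmap(["./target_binary"], "corpus/")
```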
The results of the analyses can be summarized as follows and are detailed subsequently:

* Table 2 quantifies the model performance for all targets in terms of standard machine learning metrics;
* Figure 3 provides a qualitative analysis of model predictions for a given target, opposing them to correct labels;
* lastly, Figure 3 also assesses the capacity of the neural network to reach edges through gradient-based mutations.

_ML performance metrics._ To assess one factor of difficulty of the machine learning task, we evaluate dataset imbalance for the training corpus. This measures the percentage of the positive class (covered edges, in our case the minority) in the coverage bitmap of the training set. Recall that the bitmap is produced by afl-showmap and accounts for the coverage obtained by the corpus before training; the coverage was not necessarily achieved based on a neural network, but rather by AFL++ mutations. Note that this value is averaged across test cases and edges; rare edges might have much smaller coverage ratios, resulting in more difficulty in training an accurate model for those edges. When facing class imbalance, the model tends to prefer the majority class, thus making wrong predictions. For this reason, the performance of the neural network is assessed using precision, recall, F1-score, and the precision-recall (PR) trade-off as performance metrics for the neural network. Accuracy is also computed for completeness, but keep in mind that this metric is misleading for imbalanced datasets. We measure the area-under-the-curve (AUC) of the PR metric to evaluate all the operational points of the neural network. Similar to accuracy, PR-AUC saturates at one, but is more sensitive to wrong predictions in the positive class. The learning setup of neural program smoothing is a multi-label binary classification task, i.e., for each test case, multiple binary predictions are made, one per edge; in consequence, the metrics are computed for each edge in the bitmap independently, then averaged over all edges, and finally averaged over trial repetitions.

\begin{table} \begin{tabular}{l r r r r r r} \hline \hline Target & \%covered edges & Acc & Prec & Recall & F1 & PR-AUC \\ \hline bloaty & 17.1\% & 0.53 & 0.17 & 0.18 & 0.17 & 0.15 \\ boringssl & 19.3\% & 0.90 & 0.18 & 0.17 & 0.17 & 0.20 \\ curl & 15.2\% & 0.89 & 0.15 & 0.15 & 0.15 & 0.23 \\ freetype2 & 8.6\% & 0.89 & 0.09 & 0.09 & 0.09 & 0.10 \\ guetzli & 18.5\% & 0.84 & 0.18 & 0.18 & 0.18 & 0.19 \\ harfbuzz & 6.9\% & 0.93 & 0.07 & 0.07 & 0.07 & 0.07 \\ json & 12.7\% & 0.88 & 0.11 & 0.08 & 0.09 & 0.10 \\ lcms & 20.9\% & 0.84 & 0.19 & 0.19 & 0.19 & 0.21 \\ libarchive & 6.9\% & 0.94 & 0.07 & 0.06 & 0.06 & 0.07 \\ libjpeg & 17.8\% & 0.84 & 0.17 & 0.09 & 0.17 & 0.18 \\ libpcap & 6.4\% & 0.92 & 0.06 & 0.06 & 0.06 & 0.07 \\ libpng & 28.8\% & 0.86 & 0.28 & 0.27 & 0.27 & 0.29 \\ libxml2 & 10.5\% & 0.92 & 0.10 & 0.09 & 0.09 & 0.11 \\ openh264 & 21.4\% & 0.81 & 0.22 & 0.30 & 0.21 & 0.22 \\ openssl & 31.2\% & 0.79 & 0.30 & 0.30 & 0.29 & 0.31 \\ pcre2 & 4.3\% & 0.96 & 0.04 & 0.03 & 0.03 & 0.04 \\ proj4 & 8.2\% & 0.95 & 0.08 & 0.07 & 0.07 & 0.08 \\ re2 & 16.2\% & 0.87 & 0.15 & 0.13 & 0.13 & 0.16 \\ sqlite & 16.3\% & 0.91 & 0.12 & 0.12 & 0.12 & 0.17 \\ stb & 6.0\% & 0.92 & 0.06 & 0.05 & 0.05 & 0.06 \\ vorbis & 29.6\% & 0.81 & 0.30 & 0.30 & 0.30 & 0.30 \\ woff2 & 22.8\% & 0.85 & 0.22 & 0.22 & 0.21 & 0.13 \\ zlib & 16.1\% & 0.85 & 0.14 & 0.10 & 0.11 & 0.16 \\ \hline \hline \end{tabular} \end{table} Table 2. Dataset properties and neural network evaluation.

Figure 3. Predicted and actual edge coverage on _libpng_ for the entire corpus. Top: ML-predicted coverage (pink) is trivial and almost constant over test cases. When each edge is targeted by mutations, predicted coverage (orange) increases for certain edges, but many code edges remain unattainable. Bottom: Coverage extracted with afl-showmap shows that all edges present have been covered at least once by the corpus.

Table 2 reports the model performance metrics, along with the percentage of the positive class in the dataset as imbalance metric. All model metrics are computed on a 10% holdout set of test cases that were not used for training. As Neuzz++ retrains the model multiple times, all measurements are performed on the last trained neural network using the state of the corpus at that time. The precision, recall, F1-score, and PR-AUC values in Table 2 indicate that the neural network has low performance. These metrics are particularly low when the class imbalance is stronger, i.e., for small values of "%covered edges". The dataset imbalance is quite extreme for seven targets, where the positive class represents less than 10% of the dataset, making predictions particularly difficult.

To provide an intuition into what the neural network learns, we design a qualitative evaluation of its predicted coverage. This experiment uses the target _libpng_ and the test cases generated in a 24-hour run of Neuzz++. Figure 3 shows two coverage plots for this target for the entire corpus, where each "column" in the plot represents one test case, while each "row" is a program edge. We compare the coverage predicted by a trained ML model for the same test cases and edges (Figure 3 top) to the true coverage extracted with afl-showmap (bottom). The bottom plot is the coverage bitmap extracted with afl-showmap for the corpus and used for model training by Neuzz, PreFuzz, and Neuzz++. A reduction (deduplication) operation is applied to it, which for _libpng_ reduces the number of edges from 900 to the 293 present in the plot; this operation also explains any visual artifacts present in the image, as the edges are reordered. The pink areas of the two plots differ significantly, with the model predictions being almost constant over all test cases: the model only predicts trivial coverage and fails to capture rare edges. While this is a consequence of the difficulty of the machine learning task (small dataset, class imbalance, too few samples w.r.t. the size of the test cases and bitmaps, see Table 2), it results in large approximation errors in the neural network, as outlined in Issue C1. Moreover, recall that Neuzz, PreFuzz and Neuzz++ use the same ML model type and structure, with minor differences in the training procedure and similar model performance. Our findings thus extend to all NPS methods.

Finally, we investigate the effectiveness of gradient-based mutations as an essential component of NPS fuzzers. In the same setup on _libpng_ from the previous section, we apply Neuzz++ mutations to the corpus generated by a 24-hour fuzzing run as follows. For each edge in the bitmap, we consider the case when it is explicitly targeted and generate all mutations with a maximum number of iterations in the mutation strategy. Figure 3 (top) plots the predicted coverage for each test case and edge before the mutations, as well as the increment of coverage after mutation.
Each edge (row) is considered covered by one test case (column) if at least one of the few thousand mutations generated to target it reaches the code location. The results represent coverage estimated by the ML model, not run on the program. However, the coverage the model predicts is an optimistic estimate of the one actually achieved on the target, as the model dictated the mutations. Note that the mutations are generated in the same way for Neuzz, PreFuzz and Neuzz++; our analysis thus applies to all methods and targets. Figure 3 (top) indicates that some locations are more readily reachable through mutations. The harder to reach edges overall match the rarer edges of the corpus, as measured by afl-showmap in the bottom plot. Most importantly, _none of the edges targeted or covered by the mutations in the top plot represent new coverage_. Recall that, by NPS methods' design, a code edge is present in the bitmap only if it has already been covered by the initial corpus used for training (Issue C3). This becomes evident in the bottom plot of Figure 3: all edges have been covered by at least one test case. As will be shown later, this fundamental flaw of NPS methods translates to a limited practical capacity of reaching new coverage.

_The model predicts trivial edge coverage (Issue C1), and gradient mutations cannot target new edges (Issue C3)._

### Comparing Code Coverage

We present the main experiment comparing the achieved code coverage of available neural program smoothing approaches to AFL, AFL++ and the recent HavocMAB in Table 3 (average coverage) and Figure 4 (coverage over time). This experiment alone requires a total computation time of over 11 CPU years and 5.5 GPU years. Overall, AFL++ obtains the best performance on ten targets, followed by HavocMAB with eight targets, and Neuzz++ on par with AFL, winning two targets each. In view of AFL++ performance w.r.t. AFL, it is clear that not including AFL++ as a baseline in all prior neural program smoothing works leads to overly optimistic conclusions about their capacities. After AFL++, HavocMAB is the second most performant fuzzer in terms of code coverage. However, we find that it does not reach the expected ranking advertised in the HavocMAB paper (HavocMAB, 2018). We observe that Neuzz and PreFuzz are never in the top two fuzzers. Moreover, although they were designed to improve AFL performance, their coverage is in most cases lower than that of AFL. AFL wins on 20 out of 23 targets over Neuzz, and 18 out of 23 over PreFuzz. PreFuzz outperforms Neuzz on most targets; however, this difference is significant only on six targets (see confidence intervals in Figure 4). This finding is also at odds with the original PreFuzz results (HavocMAB, 2018), where the performance gap is significantly wider. Section 5.7 is dedicated to further explaining the difference in performance with the initial papers. Neuzz++ obtains higher coverage than Neuzz and PreFuzz on 21 programs, proving that our improvements over these methods are effective. Targets _libarchive_, _libxml2_, _proj4_, and _woff2_ exhibit the most variability among fuzzers. Neuzz and PreFuzz exhibit large standard deviation on _woff2_, where coverage varies depending on whether the fuzzers reach a plateau or not. For the other targets, it seems AFL-based fuzzers do not perform as well as AFL++-based ones. _Overall, AFL++ achieves the highest code coverage.
Among NPS fuzzers, Neuzz++ achieves the highest code coverage._

### Code Coverage from Machine Learning

After presenting total coverage for 24-hour runs in Table 3, we now measure how much of the total coverage can be attributed to the machine learning component of each NPS fuzzer. On one hand, the goal is to discount the coverage produced strictly by AFL in the first hour of Neuzz and PreFuzz runs (recall that they use AFL for data collection, see Figure 2) and only measure the NPS fuzzers' contribution. On the other hand, we wish to do the same for Neuzz++, and separate its contribution from that of the base fuzzer AFL++. As Neuzz++ is a custom mutator for AFL++, its seeds usually alternate with regular AFL++ seeds. To this end, we measure edge coverage by corpus replaying, this time only taking into account the seeds obtained by Neuzz, PreFuzz and Neuzz++, respectively. For Neuzz and PreFuzz, this is equivalent to excluding the first hour of coverage, as done by the original authors. In practice, this will include ML-based mutations, but also other hard-coded mutations that the methods apply, such as havoc in the case of PreFuzz. Table 4 summarizes the comparison of edge coverage obtained by the ML components of Neuzz, PreFuzz, and Neuzz++. Program names are aligned with Table 3.

The second analysis studies whether NPS fuzzers explore code areas that are harder to reach by standard fuzzers. In that case, neural program smoothing fuzzers could be used in an ensemble of diverse fuzzers, opening the path for all fuzzers to rare parts of the code (Fang et al., 2018). To measure the rarity of edges reached by Neuzz++, we compare the edge IDs that Neuzz++ and AFL++ reach on each program, all trials combined. The edge IDs are obtained by replaying all the test cases with afl-showmap. We summarize the results in Table 6 as follows: Neuzz++ (denoted N+) reveals less than 0.5% additional edges over AFL++ (denoted A+) in 16 out of 23 targets. Neuzz++ does not find any such exclusive edges for eight programs; it is most successful on _lcms_, with 8.2% exclusive edges. On the other hand, AFL++ finds up to 16.4% exclusive edges, lacking exclusive edges on only two programs (_json_ and _sqlite_). We can therefore conclude that NPS-guided fuzzers explore essentially the same code areas as traditional fuzzers.

\begin{table} \begin{tabular}{l r r r} \hline \hline Target & \%ML seeds & \%MLcov+ & \%derived \\ \hline bloaty & 4.72\% & 28.3\% & 8.78\% \\ boringssl & 27.7\% & 3.3\% & 27.5\% \\ curl & 18.6\% & 26.1\% & 33.1\% \\ freetype2 & 2.2\% & 31.9\% & 3.8\% \\ guetzli & 9.9\% & 9.8\% & 13.8\% \\ harfbuzz & 6.6\% & 30.2\% & 15.2\% \\ json & 13.7\% & 37.3\% & 25.4\% \\ lcms & 1.6\% & 57.1\% & 1.1\% \\ libarchive & 18.3\% & 30.2\% & 34.9\% \\ libjpeg & 11.8\% & 10.7\% & 15.9\% \\ libpcap & 13.8\% & 40.0\% & 20.8\% \\ libpng & 19.6\% & 13.8\% & 41.0\% \\ libxml2 & 15.1\% & 23.9\% & 30.1\% \\ openh264 & 10.2\% & 8.0\% & 8.5\% \\ openssl & 30.6\% & 5.6\% & 28.7\% \\ pcre2 & 18.3\% & 17.4\% & 29.88\% \\ proj4 & 5.5\% & 49.7\% & 7.4\% \\ re2 & 23.3\% & 22.8\% & 34.7\% \\ sqlite & 8.1\% & 20.2\% & 6.2\% \\ stb & 14.8\% & 15.3\% & 19.9\% \\ vorbis & 6.0\% & 8.3\% & 7.9\% \\ woff2 & 3.8\% & 19.8\% & 5.2\% \\ zlib & 16.9\% & 20.7\% & 18.1\% \\ \hline \hline \end{tabular} \end{table} Table 5. Statistics for ML-generated test cases of Neuzz++. "%ML seeds" and "%derived" are computed over the total size of the corpus. "%MLcov+" is relative to "%ML seeds".

### NPS-based Fuzzing without GPUs

Due to their increased performance for linear algebra and data throughput, GPUs are the _de facto_ standard for machine learning. All NPS methods studied in this paper leverage GPU access to train machine learning models and compute gradients for test case mutations. In practice, this means that they use more computational resources than state-of-the-art gray-box fuzzers, and that practitioners are required to invest in additional hardware. In this section, we wish to assess the performance of NPS methods in the absence of GPUs. Model training with only CPU access should be slower, but it should not impact the performance of the trained model. As such, any loss in fuzzing performance comes from spending more time training and less time fuzzing. For this small experiment, we select four targets that operate on a varied range of input formats for diversity. We perform ten trials of all NPS fuzzers with and without GPU access (Table 7).

Figure 4. Average edge coverage over time with 95% confidence interval.

## 6. Benchmarking ML-based Fuzzers

Fuzzer evaluation is an open research topic abundantly studied in recent works (Fuzzer et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). A common guideline is that each fuzzer must be tested on multiple programs, using multiple repetitions to account for randomness. The recommended number of repetitions revolves around 10-20 trials. Besides the average performance, indicators of variability (i.e., confidence intervals, statistical tests) are necessary to assess the significance of the results. The main goal of fuzzers is to find bugs, which suggests that unique bugs found in fixed time should be the evaluation metric. However, since bugs are rather rare, the performance of fuzzers is often measured in code coverage over time. This may be justified by observations that more code coverage correlates with more bugs found (Fuzzer et al., 2017). To complement these principles, we propose the following practices when evaluating novel machine learning-based fuzzing methods:

1. **Analyze each new component in the fuzzing loop**. Both performance evaluations and ablation studies of ML models are critical. Metrics specific to the task solved should be used (e.g., accuracy, or precision and recall for classification, mean absolute error or mean squared error for regression, etc.). These complement the view on the overall system performance, i.e., coverage or bugs found in the case of fuzzing. ML evaluation should employ a validation set distinct from the training data to avoid overly optimistic estimates (Zhu et al., 2018).
2. **Use state-of-the-art fuzzers and configurations as baselines**. Lacking strong baselines prevents one from claiming novel state-of-the-art accomplishments in terms of code coverage and bugs found. All fuzzers in an experiment should be configured for performance (e.g., appropriate compiler, compilation options, harness, input feeding mode).
We also recommend introducing new scientific or technical contributions based on recent fuzzers and evaluation platforms, as opposed to their older counterparts.

3. **Use comparable metrics for fuzzing performance.** As not all fuzzers measure the same type of coverage, we encourage the use of one common evaluation metric between multiple fuzzers. In practice, this is most easily done by replaying the corpus at the end of a fuzzing trial, as implemented by FuzzBench (Fuzzer et al., 2017; Wang et al., 2018) and MLFuzz.
4. **Repeat trials often enough to account for variance.** We propose to use 30 trials for fuzzing evaluation, resulting in tight confidence intervals. This sample size is commonly used in statistics and deemed sufficient for the central limit theorem (Fuzzer et al., 2017) to hold. As shown in Figure 4, ML-based fuzzers can have higher coverage variability than gray-box fuzzers, thus requiring more trials for stable baselining.
5. **Ensure reproducible results by fixing and serializing parameters.** While it is difficult to control all sources of randomness when training ML models on GPUs, it remains a good practice in both machine learning and software testing to control possible sources of randomness by seeding random number generators and reusing the same seeds. Experimental configurations and, in the case of ML, hyperparameters should be documented for reproducibility.
6. **Ensure usability of proposed fuzzers.** It should be possible to run a newly proposed fuzzer on programs outside the original publication study. Providing a containerized environment can sustainably decrease setup efforts. We also support integration of new fuzzers with existing benchmarking platforms, such as FuzzBench and now MLFuzz.

## 7. Conclusion and Consequences

Neural program smoothing for fuzzing neither reaches its advertised performance, nor does it surpass older fuzzing techniques that are still state-of-the-art. In our in-depth analysis of NPS fuzzers, we analyzed conceptual limitations of previously published approaches, as well as implementation and evaluation issues. Our comprehensive benchmark showed that NPS-guided fuzzers were by far unable to reach their stated performance. Addressing the implementation issues did not suffice to outperform state-of-the-art gray-box fuzzers. The reason for the limited fuzzing performance lies in the difficulty of the machine learning task, which yields trivial models on the data available during fuzzing. To guide future fuzzing research and practical validation, we developed improved experimental guidelines targeting fuzzing with machine learning. Our MLFuzz framework for ML-based fuzzers includes patched and containerized versions of the investigated fuzzers to help with additional benchmarking. We encourage researchers to perform ablation studies and provide deeper insights into the components they introduce in fuzzing. While we highlight fundamental limitations of neural program smoothing, whether and how much this technique can enhance fuzzing remains an open topic for future research. We hope that this work contributes to fair and comprehensive evaluations of future fuzzers, be they ML-based or not.

## 8. Data Availability

The open-source implementation of Neuzz++ and MLFuzz, the evaluation setup, and raw results are available at [https://github.com/boschresearch/mlfuzz](https://github.com/boschresearch/mlfuzz) and [https://github.com/boschresearch/neuzzplusplus](https://github.com/boschresearch/neuzzplusplus).
## Acknowledgements

Thanks are due to Josselin Feist and the anonymous reviewers for valuable discussions that have led to paper improvements. This work was supported by the German Federal Ministry of Education and Research (BMBF, project CPSec - 16KIS1565 and 16KIS1564K).

\begin{table} \begin{tabular}{l r r r r r r} \hline \hline Target & AFL & AFL++ & HavocMAB & Neuzz & PreFuzz & **Neuzz++** \\ \hline bloaty & 1 & 1 & 2 & 0 & 0 & 1 \\ guetzli & 8 & 264 & 5 & 0 & 0 & 170 \\ harfbuzz & 0 & 355 & 1 & 0 & 0 & 12 \\ json & 20 & 11 & **22** & 18 & 16 & 10 \\ lcms & 0 & **16** & 0 & 0 & 0 & 8 \\ libarchive & 0 & **1** & 0 & 0 & 0 & 0 \\ libxml2 & 0 & **648** & 1 & 0 & 0 & 289 \\ openssl & 138 & **1324** & 409 & 37 & 40 & 721 \\ pcre2 & 87 & **4174** & 262 & 40 & 35 & 1371 \\ re2 & 0 & **172** & 1 & 0 & 0 & 2 \\ vorbis & 0 & 2 & 1 & 0 & 0 & 0 \\ woff2 & 20 & 361 & **671** & 1 & 1 & 172 \\ \hline \hline \end{tabular} \end{table} Table 9. Bugs found after stack trace deduplication.
Fuzzing, i.e., testing with randomly generated inputs, has attracted attention because it can automatically expose program vulnerabilities. Since fuzz testing campaigns produce large amounts of data, they are well suited to the application of machine learning (ML). Neural program smoothing (NPS), a class of ML-guided fuzzers, uses a neural network as a smooth approximation of the target program in order to generate new test cases. In this paper, we conduct the most comprehensive evaluation of NPS fuzzers against standard gray-box fuzzers (>11 CPU years and >5.5 GPU years) and make the following contributions: (1) we find that the original performance claims of NPS fuzzers do not hold, owing to fundamental, implementation, and experimental limitations of prior work; (2) we provide a first in-depth analysis of the contribution of machine learning and gradient-based mutations in NPS
2305.19826
Hidden covalent insulator and spin excitations in SrRu$_2$O$_6$
The density functional plus dynamical mean-field theory is used to study the spin excitation spectra of SrRu$_2$O$_6$. A good quantitative agreement with experimental spin excitation spectra is found. Depending on the size of the Hund's coupling $J_H$, the system chooses either a Mott insulator or a covalent insulator state when magnetic ordering is not allowed. We find that the nature of the paramagnetic state has negligible influence on the charge and spin excitation spectra. We find that antiferromagnetic correlations hide the covalent insulator state for realistic choices of the interaction parameters.
D. Csontosová, J. Chaloupka, H. Shinaoka, A. Hariki, J. Kuneš
2023-05-31T13:07:35
http://arxiv.org/abs/2305.19826v2
# Hidden covalent insulator and spin excitations in SrRu\({}_{2}\)O\({}_{6}\) ###### Abstract The density functional plus dynamical mean-field theory is used to study the spin excitation spectra of SrRu\({}_{2}\)O\({}_{6}\). A good quantitative agreement with experimental spin excitation spectra is found. Depending on the size of the Hund's coupling \(J_{\rm H}\) the systems chooses either Mott insulator or covalent insulator state when magnetic ordering is not allowed. We find that the nature of the paramagnetic state has negligible influence on the charge and spin excitation spectra. We find that antiferromagnetic correlations hide the covalent insulator state for realistic choices of the interaction parameters. Competition between kinetic and interaction energy is the corner stone of the correlated electrons physics. In the paradigmatic bandwidth control scenario of Hubbard model at half filling, increasing the interaction-to-bandwidth ratio suppresses the charge fluctuations and eventually drives the system to a Mott insulator (MI) state [1]. Real materials provide variations on this theme [2; 3], but also alternative mechanisms of correlation driven metal-insulator transition (MIT) such as site-selective Mott transition [4], spin-state crossover [5; 6], Kondo insulator [7], or gapping the ligand bands [8] to name a few. Often the paramagnetic (PM) MIT is hidden by a magnetic long-range order, which raises the question how much about the nature of PM phase can be learned from the properties of the ordered phase. The studies of single-band Hubbard model [9; 10] found rather subtle differences in anti-ferromagnetic (AFM) phase on the two sides of the Mott transition, which can be difficult or even impossible to identify in multi-orbital setting of real materials. A weakly correlated state does not have to be metallic in order to exhibit charge fluctuations. A covalent insulator (CI) [11], with a gap between bonding and anti-bonding states does as well. As pointed out by Mazin _et al._[12] CI can be realized in layered transition metal oxides with honeycomb lattice structure such as Na\({}_{2}\)IrO\({}_{3}\), \(\alpha\)-RuCl\({}_{3}\), Li\({}_{2}\)RuO\({}_{3}\) or SrRu\({}_{2}\)O\({}_{6}\). The pattern of dominant hopping amplitudes, which traps the \(t_{2g}\) electrons in the hexagonal structural units, gives rise to molecular orbitals clearly visible in the electronic spectra. At half filling the Fermi level falls into the band gap between the molecular peaks [13], which stabilizes the CI state. On the other hand, the tendency to form a high-spin MI is maximal also at half filling [14], which leads to a competition without an _a priori_ winner. This scenario is realized in SrRu\({}_{2}\)O\({}_{6}\) with nominally \(t_{2g}^{3}\) configuration. An antiferromagnetic insulator with high Neel temperature \(T_{\rm N}\) of 563 K [16], it does not exhibit the Curie-Weiss susceptibility in the PM phase. Instead, the susceptibility increases up to the highest reported temperature of about 730 K [17]. Classification of SrRu\({}_{2}\)O\({}_{6}\) based on numerical studies has been controversial. Streltsov _at al._[13] performed density functional plus dynamical mean-field theory (DFT+DMFT) calculations for Hund's coupling \(J_{\rm H}=0.3\) eV. 
Pointing out the discrepancy between the theoretical ionic moment of 3 \(\mu_{\rm B}\), a value essentially reproduced by their DFT+DMFT, and the observed ordered moment of 1.4 \(\mu_{\rm B}\) they argued that the electronic structure of SrRu\({}_{2}\)O\({}_{6}\) is dominated by molecular orbitals. Hariki _et al._[18] using a similar DFT+DMFT approach found a crossover between CI and MI in the PM phase for \(J_{\rm H}\) between \(0.16-0.19\) eV, depending on temperature. They also found that in the AFM phase the size of the ordered moment is essentially the same on both sides of the CI/MI crossover and agrees well with experimental as well as the DFT value, when the overlaps of Wannier orbitals are properly accounted for. The uncertainty in the value of the Hund's exchange \(J_{\rm H}\) thus left the ques Figure 1: The unit cell of SrRu\({}_{2}\)O\({}_{6}\): Ru (blue), O (red) and Sr (green) atoms, visualized using VESTA3 [15]. The arrows mark the local orbital coordinates. Path in the reciprocal space used for plotting magnon dispersions. tion of electronic structure of SrRu\({}_{2}\)O\({}_{6}\) open. Using resonant inelastic x-ray scattering (RIXS) to map out the magnon dispersion Suzuki _et al._[19] concluded that SrRu\({}_{2}\)O\({}_{6}\) is a Mott insulator because the magnon spectrum can be well described by \(S=3/2\) Heisenberg model with parameters obtained by strong-coupling expansion with first principles hopping parameters. They pointed out the difference between a large paramagnetic Neel temperature \(\Theta\), proportional to the inter-atomic exchange \(J\) and reflected in the magnon bandwidth, and the smaller ordering temperature \(T_{N}\), determined by the spin gap. They argued that the observed absence of Curie-Weiss behavior above \(T_{N}\) is consistent with the behavior of 2D Heisenberg model, for which it is expected first for \(T>\Theta\). We compute the spin excitation spectra [20; 21] using DFT+DMFT [22]. We pursue two objectives (i) apply the DMFT approach to dynamical susceptibilities based of Bethe-Salpeter equation (BSE) [23; 24] to an ordered state of a real material and assess its quantitative accuracy, (ii) analyze the connection between the character of the PM phase, MI vs CI, and the properties of the AFM phase. The DMFT BSE approach has been successfully applied to antiferromagnetic magnons in up to 3-orbital model [24]. Here we focus on quantitative comparison with experiment, the role of spin-orbit coupling (SOC), the relationship between single-ion anisotropy and the spin gap, and other spin excitations beyond magnon. In order to address (ii), we vary \(J_{\rm H}\) across the CI-MI crossover. _The method._ We study the '\(t_{2g}\)-only' model of Ref. [18] with Slater-Kanamori interaction obtained by wannierization [25; 26] from density functional calculation [27]. Unlike in Ref. [18] we use the basis of \(xy\), \(yz\) and \(xz\) Wannier orbitals in the coordinates shown in Fig. 1, see Supplemental Material (SM) for details [28]. In order to reduce the computational effort, the calculations were done for C-type (2 atoms) rather than the experimental G-type (4 atoms) structure. This approach is justified by the miniscule inter-layer coupling [17]. Throughout this study we keep the interaction parameter \(U=2.7\) eV fixed and vary \(J_{\rm H}=0.16-0.22\) eV as well as temperature. In PM calculation we enforce the spin symmetry of the self-energy in each DMFT iteration. 
The DMFT [29] calculations were performed with a multiorbital implementation [30] of the continuous-time hybridization expansion Monte Carlo method [31] based on ALPS core libraries [32]. Some of the DMFT calculations were benchmarked against results obtained with DCore [33]. The BSE with local particle-hole irreducible vertex [34] was solved for the lowest 10 bosonic Matsubara frequencies in the Legendre representation [35]. The desired dynamical susceptibilities \(\langle O_{-\mathbf{q}}O_{\mathbf{q}}\rangle_{\omega}\) were obtained by sandwiching the general 2-particle susceptibility with the corresponding vertices followed by analytic continuation [36; 37], see SM [28] for details. The reciprocal space operators are related to local observable by the Fourier transform \[O_{\mathbf{q}}=\sum_{\mathbf{R},s}e^{-i\mathbf{q}\cdot(\mathbf{R}+\mathbf{r}_ {s})}O_{\mathbf{R}s}\quad\mathbf{r}_{s}=\begin{cases}\left(\frac{2}{3},\frac{ 1}{3},0\right)\,s\!=\!\mathrm{A}\\ \left(\frac{1}{3},\frac{2}{3},0\right)\,s\!=\!\mathrm{B},\end{cases} \tag{1}\] where the index \(s\) refers to the two Ru sites in the unit cell. In the following we study the transverse spin susceptibility with \(O\equiv S^{x}\), and \(S=3/2\to 1/2\) excitations, for which we choose a representative operator \(O\equiv X\) below, generating \(\Delta S^{z}=\pm 1\) transitions between \(S=3/2\) and \(S=1/2\) manifolds \[S^{x}_{\mathbf{R}s} =\sum_{\alpha=1}^{3}\mathrm{d}^{\dagger}_{\mathbf{R}s\alpha\uparrow }\,\mathrm{d}_{\mathbf{R}s\alpha\downarrow}\!+\!H.c. \tag{2}\] \[X_{\mathbf{R}s} =\left(\mathrm{d}^{\dagger}_{\mathbf{R}s1\uparrow}\,\mathrm{d}_{ \mathbf{R}s1\downarrow}-\mathrm{d}^{\dagger}_{\mathbf{R}s2\uparrow}\, \mathrm{d}_{\mathbf{R}s2\downarrow}\right)+H.c. \tag{3}\] Figure 2: Imaginary part of the dynamical susceptibility along the \(\Gamma^{\prime}\) - \(\Gamma\) - \(M\) path shown in Fig. 1 for \(J_{\rm H}=0.16\) eV at \(T=464\) K. Grey dots denote the maxima of the corresponding RIXS features [19]. Top (linear color scale): \(\langle X_{-\mathbf{q}}X_{\mathbf{q}}\rangle_{\omega}\) representing the \(S=3/2\to 1/2\) transitions. Bottom (logarithmic color scale): \(\langle S^{x}_{-\mathbf{q}}S^{x}_{\mathbf{q}}\rangle_{\omega}\) corresponding to magnon. The operator \(X\) is chosen to be representative of a set of closely spaced transitions, see SM [28]. _Magnon dispersion._ The DMFT calculations lead to AFM with out of plane orientation of the local moment for temperatures below 1500 K. Since the magnetism of SrRu\({}_{2}\)O\({}_{6}\) is essentially 2D [17; 19] this overestimation by DMFT is expected. The DMFT does not obey the Mermin-Wagner theorem and the calculated ordering temperature represents \(\Theta\) rather than \(T_{N}\). This does not mean that the DMFT AFM solution should not be able to capture the ordered state of the real material. Fig. 2 shows a comparison of the dynamical susceptibilities \(\langle X_{-\mathbf{q}}X_{\mathbf{q}}\rangle_{\omega}\) and \(\langle S_{-\mathbf{q}}^{x}S_{\mathbf{q}}^{x}\rangle_{\omega}\) calculated in the AFM phase at 464 K to the experimental RIXS data [19]. The magnetic moments at this temperature are essentially saturated [28; 18] and thus no significant change in the computed spectra is expected upon further cooling. 
Rather than computing the full RIXS spectra, calculation of which would require evaluation of transition amplitudes [38; 39] with the possibility of multi-particle excitations [40; 41] and is not possible with the present methods, we compare the dispersions of specific spectral features. We find a very good match of the magnon dispersion including the bandwidth, the spin gap and the distribution of spectral weight. The magnon bandwidth of 183 meV corresponds to the effective nearest-neighbor exchange \(JS=61\,\mathrm{meV}\) between \(S=3/2\) local moments. A straightforward strong-coupling calculation with the same parameter setup yields a remarkably similar value \(JS\approx 66\,\mathrm{meV}\)[28], essentially unaffected by SOC. However, by inspecting the exact solution of our Hubbard model on a single bond [28], we found the spin \(S=3/2\) picture to be significantly disturbed by a large involvement of higher multiplet states at energies \(\gtrsim 3J_{\mathrm{H}}\)[28]. In such situation, the DMFT approach covering the entire spectrum of multiplet states is highly advantageous. The spin gap of approximately 45 meV is related to the single-ion anisotropy \(\Delta_{\mathrm{SIA}}=E_{\pm 1/2}-E_{\pm 3/2}=6.6\,\mathrm{meV}\), defined as the difference between the atomic states belonging to the \(S=3/2\) multiplet [42]. The strong-coupling evaluation of SIA suggests that the above ionic value is actually strongly renormalized by exchange processes [28]. Within the linear spin-wave theory of Heisenberg antiferromagnet, the large gap is easily explained even for small SIA, as it is given by \(S\sqrt{6J\Delta_{\mathrm{SIA}}}\)[19]. Nevertheless, it is not self-evident that the present numerical approach must capture it accurately. We have also carefully checked the out-of-plane orientation of the ordered moments, see SM [28], and verified its origin in SOC by performing calculations with SU(2)-symmetric Hamiltonian without SOC. As expected we find two gapless linear Goldstone modes with divergent spectral weights in this case, see Fig. 3. The experimental RIXS spectra [19] exhibit a prominent low-energy feature associated with \(S=3/2\to 1/2\) transitions. Our calculations, Fig. 2, reproduce the position of this feature fairly well, although the SOC induced mixing with the low energy magnon limits the resolution Figure 4: Spectral functions and corresponding imaginary parts of self-energies on the real axis in the constrained PM solution (left half) and AFM solution (right half). The calculations were performed for \(T=290\,\mathrm{K}\). Red and blue color in the figures with AFM self-energies distinguish between spin up and spin down component. The grey lines in the spectral function with \(J_{\mathrm{H}}=0.16\,\mathrm{eV}\) show DFT band structure squeezed by factor 2.2. Figure 5: Uniform susceptibility for \(J_{\mathrm{H}}=0.16\,\mathrm{eV}\) and \(J_{\mathrm{H}}=0.19\,\mathrm{eV}\) in the PM state. The dashed line shows the Curie-Weis susceptibility \(\chi\propto(T+\Theta)^{-1}\) with \(\Theta=1480\,\mathrm{K}\). Magnitude of the calculated \(\chi(T)\) is about 30% smaller than the experimental one [17]. Inset: a cartoon picture of different temperature scales in SrRu\({}_{2}\)O\({}_{6}\). of the higher energy structures. _Mott vs covalent insulator._ In calculations performed in the PM state, the authors of Ref. [18] observed a crossover between the low-temperature CI and high temperature MI at a scale \(T^{\star}\), which strongly depends on \(J_{\rm H}\). 
For \(J_{\rm H}=0.16\) eV the scale \(T^{\star}\) lies in the 600-800 K range, while for \(J_{\rm H}\gtrsim 0.19\) eV only MI was observed. SrRu\({}_{2}\)O\({}_{6}\) exists in the PM phase below 800 K, however, since DMFT exaggerates its ordering temperature [43]. we enforce the PM solution by constraint, in order to study it at lower temperatures. The different temperature scales discussed below are summarized in the inset of Fig. 5. The paramagnetic Neel temperature \(\Theta\), which we identify with the DMFT ordering temperature, is estimated from the present study [28] and 18. The CI/MI crossover temperature \(T^{\star}\) is estimated from 18 and the uniform susceptibility from \(J_{\rm H}=0.16\) eV of this study. Finally, \(T_{N}\) is the experimental ordering temperature, the weak \(J_{\rm H}\)-dependence of which may deduced from the behavior of the spin gap as a function of \(J_{\rm H}\). Next, we discuss the properties of the constraint PM solutions. At high temperatures (\(T>T^{\star}\)) CI and MI behave similarly. At low temperatures (\(T<T^{\star}\)) they are distinguished by several characteristics. The self-energy of CI has a Fermi liquid character with vanishing imaginary part at the chemical potential. The self-energy of MI possesses a peak in the vicinity of the chemical potential. This gives rise to distinct band structures shown in Fig. 4. For evolution of the self-energy with temperature see SM [28]. The CI and MI respond differently to a magnetic field. The magnetic susceptibility \(\chi(T)\) of MI, in Fig. 5, exhibits the usual Curie-Weiss decrease with increasing temperature. The high-temperature susceptibility of CI follows the same trend. However, once the Fermi liquid behavior sets in below \(T^{\star}\)[44] the susceptibility starts to drop, which gives rise to a broad maximum. A positive slope of the experimental \(\chi(T)\) above the transition temperature was pointed out by the authors of Ref. [17]. How is the different character of PM phase reflected in the AFM phase? Upon magnetic ordering the self-energy is dominated by the spin-dependent Hartree shift and electronic spectra for large and small \(J_{\rm H}\) in Fig 4 resemble one another. In Fig. 6 we compare the magnon spectra obtained at 464 K for \(J_{\rm H}\) values on both sides of CI/MI crossover. A difference is hardly noticeable. There is a discernible trend of decreasing spin gap with \(J_{\rm H}\), which follows from the behavior of the single-ion anisotropy. Overall the parameters extracted using strong-coupling theory describe the magnons equally well on CI and MI side in the parameter space. Can the behavior of the CI susceptibility explain the experimentally observed behavior of \(\chi(T)\) in the PM phase? Is it plausible that an improved theory, which pushes the calculated \(T_{N}\) to its experimental value below \(T^{\star}\), uncovers the CI susceptibility? We argue that it is not. The key problem of the DMFT description in the present context is, rather than quantitatively overestimating the magnitude of \(T_{N}\), that it does not qualitatively distinguish between the paramagnetic Neel temperature \(\Theta\) and the ordering temperature \(T_{N}\). In fact the size of \(\Theta\), which describes the onset of strong AFM correlations, is likely to be correctly given by DFT+DMFT as suggested by the correct magnon bandwidth obtained in the calculation. 
Thefore DMFT does not exaggerate the temperature at which the AFM correlations set in, but that it describes them as static (AFM order), while in the 2D reality they remain dynamical down to much lower temperature \(T_{N}\) determined by a spin gap or an inter-layer coupling. The CI physics can be realized if the crossover temperature \(T^{\star}\) is above the onset of AFM correlations \(\Theta\). In the present case for smaller \(J_{\rm H}\) we get \(T_{N}<T^{\star}<\Theta\) and thus the increase of \(\chi(T)\) above \(T_{N}\) represents the physics of 2D Heisenberg magnet rather than that of CI. We would like to point out the analogy of the present physics with the Kondo lattice model [45]. In both cases a local moment disappears below a certain temperature, \(T^{\star}\) in CI or Kondo temperature in case of the Kondo lattice, if not correlated to other moments on the lattice. In both case, inter-site correlations between the local moments can preclude their disappearance if sufficiently strong, which we conjecture to mean \(T^{\star}<\Theta\) in the present case. These are examples of a situation when inter-site interaction between the local excited states (carrying the local Figure 6: Comparison of magnon spectra in covalent insulator (\(J_{\rm H}=0.16\) eV) and Mott insulator (\(J_{\rm H}=0.19\) eV and \(J_{\rm H}=0.22\) eV) phases. The calculations were performed for \(T=464\) K. The spin gaps for \(J_{\rm H}=\{0.16,0.19,0.22\}\) eV are \(\Delta_{\rm m}=\{45,36,35\}\) meV, respectively. The white line is a spectral weight \(\Omega_{\bf q}\).(The same color scale as Fig. 2b) moments), eliminates the (non-magnetic) local ground states from the set of global low-energy states. _Conclusions._ We have calculated the spin excitation spectra of SrRu\({}_{2}\)O\({}_{6}\) using DFT+DMFT approach and found a quantitative match with the experimental observations [19], notably for the spin gap due to the spin-orbit coupling. The paramagnetic state of SrRu\({}_{2}\)O\({}_{6}\), depending on the strength of the Hund's coupling \(J_{\rm H}\), exhibits either covalent insulator or Mott insulator characteristics below \(T^{\star}\approx 580\) K. Once in the AFM ordered state the magnon and electron excitation spectra are essentially the same for \(J_{\rm H}\) on both sides of the covalent insulator / Mott insulator crossover. Our calculations for realistic \(J_{\rm H}\) on both sides of the CI/MI crossover lead to the conclusion that \(T^{\star}\) is substantially below the temperature \(\Theta\) at which the AFM correlations set in and therefore the covalent insulator state remains always 'hidden'. The authors thank H. Suzuki for providing the experimental data of Fig. 2, A. Kauch for critical reading of the manuscript, and K.-H. Ahn for valued discussions in the early stage of this work. This work has received funding from QUAST-FOR5249 project I 5868-N (D.C., J.K.) of the Austrian Science Fund (FWF), Czech Science Foundation (GACR) project No. GA22-28797S (D.C., J.C.), JSPS KAKENHI Grant Numbers 21K13884, 23K03324 (A.H.), 21H01003, 23H03816, 23H03817 (H.S., A.H.), Austrian Federal Ministry of Science, Research and Economy through the Vienna Scientific Cluster (VSC) Research Center and by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254). H.S was supported by JSPS KAKENHI Grant Number 21H01041. H. S. thanks the Supercomputer Center, the Institute for Solid State Physics, and the University of Tokyo for the use of their facilities.
```
We study the spin excitation spectra of SrRu$_2$O$_6$ using density functional plus dynamical mean-field theory and find close agreement with the experimental spin excitation spectra. Depending on the strength of the Hund's coupling $J_H$, the system, when magnetic ordering is suppressed, realizes either a Mott insulator or a covalent insulator state. The nature of the paramagnetic state has only a minor effect on the charge and spin excitation spectra. For realistic interaction parameters, the antiferromagnetic correlations hide the covalent insulator state.
```
2301.03351
Classifying Mental-Disorders through Clinicians Subjective Approach based on Three-way Decision
In psychiatric diagnosis, a contemporary data-driven, manual-based method for mental disorders classification is the most popular technique; however, it has several inevitable flaws. Using the three-way decision as a framework, we propose a unified model that stands for clinicians' subjective approach (CSA) analysis consisting of three parts: qualitative analysis, quantitative analysis, and evaluation-based analysis. A ranking list and a set of numerical weights based on illness magnitude levels according to the clinician's greatest degree of assumptions are the findings of the qualitative and quantitative investigation. We further create a comparative classification of illnesses into three groups with varying importance levels; a three-way evaluation-based model is utilized in this study for the aim of understanding and portraying these results in a clearer way. This proposed method might be integrated with the manual-based process as a complementary tool to improve precision while diagnosing mental disorders.
Huidong Wang, Md Sakib Ullah Sourav, Mengdi Yang, Jiaping Zhang
2022-12-21T17:22:02
http://arxiv.org/abs/2301.03351v4
# Classifying Mental-Disorders through Clinicians' Subjective Approach based on Three-way Decisions ###### Abstract In psychiatric diagnosis, a contemporary data-driven, manual-based method for mental disorders classification is the most popular technique. However, it has several inevitable flaws, namely, misdiagnosis of complex phenomena, comorbidities, etc. Using three-way decisions (3WD) as a framework, we propose a unified model that stands for clinicians' subjective approach (CSA), consisting of three parts: qualitative analysis, quantitative analysis, and evaluation-based analysis. A ranking list and a set of numerical weights based on illness magnitude levels according to the clinician's greatest degree of assumptions are the findings of the qualitative and quantitative investigation. We further create a comparative classification of illnesses into three groups with varying importance levels; a three-way evaluation-based model is utilized in this study for the aim of understanding and portraying these results in a clearer way. This proposed method might be integrated with the manual-based process as a complementary tool to improve precision while diagnosing mental disorders. Mental disorder classification Psychiatric diagnosis Three-way decisions ## 1 Introduction In this modern era, where technology is at its peak, with endless amusement and entertainment scopes, a substantial number of people, mostly young adults, are still suffering from depression and other mental disorders [1]. Its prevalence can be seen in a lack of motivation to live and a loss of interest in everything among ordinary people. Hence, people are turning to psychiatric diagnosis more frequently than in the past. Therefore, improper diagnosis of mental health disorders may lead to even more harmful consequences, from an individual to a social perspective [38]. The traditional form of psychiatric diagnosis is much contested nowadays, as several recent studies demonstrate shortcomings within the widely established systems used for classifying mental disorders, namely, bipolar disorder, anxiety disorders, phobias, substance use disorder, mood disorders, and many others [2, 3]. More often than not, these recognized tools, such as the DSM-5 [7] and ICD-11 [8], fail to arrive at the proper and correct diagnosis of a complex phenomenon in individual cases. Patients with the same disorder exhibit diverse symptom profiles during diagnosis [35], and comorbidities or co-occurring conditions create numerous clinical and research challenges as well [36]. In such a situation, the pragmatic and expertise-oriented judgment of clinicians (psychiatrists and psychologists) should be reinforced to avoid an improper diagnosis of a mental disorder and to restrict its consequences. While three-way classification has emerged as a prominent problem-solving and decision-making paradigm, we intend to integrate its theory into the classification process of mental disorders in order to help the clinicians' diagnosis process in a more accurate and confident manner. "Psychiatric nosology" and "psychiatric taxonomy" are terms used to describe how mental diseases are classified. There are presently two commonly used instruments or methods for defining mental disorders: the World Health Organization's (WHO) International Classification of Diseases (ICD-11) and the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM-5). 
In contrast to the American Psychiatric Association's (APA) Diagnostic and Statistical Manual of Mental Disorders (DSM), Research Domain Criteria (RDoC) was launched in 2009 with the goal of addressing the heterogeneity in current nosology by providing a biologically-based, rather than symptom-based, a framework for understanding mental disorders [33]. The Chinese Society of Psychiatry (CSP) [9] produced the Chinese Classification of Mental Diseases (CCMD), a clinical reference for diagnosing mental disorders in China. It is now working on the CCMD-3, a third edition was written in both Chinese and English. It is designed to be structurally and categorically identical to the International Classification of Diseases (ICD) and the Diagnostic and Statistical Manual (DSM). One of the most fundamental flaws in the DSM-5 and other manuals is that they lack culture-specific meaning and do not include the cultural context of a certain nation (for example, Bangladesh). Common people's habits, tastes, life expectations, social behavior is much more distinct and unique in different parts of the world and these changes rapidly. After the emergence of COVID-19 amidst the imposed restrictions of various sorts, the mental health circumstances is in big threat; the symptoms are relapsing in normal population, university students, clinical workers, patients with pre-psychiatric disorders, and others in such a way that makes the situation more complex [39, 40, 41]. In addition, these taxonomies or guides are mostly based on various statistical analyses and information theory, with some cultural representations thrown in for good measure yet backfire to provide us a timely, holistic and unified view on a deeper scale. On the other hand, the broadening of diagnostic criteria in DSM-5, according to critics, may increase the number of'mentally ill' people and/or pathologies 'normal behavior', thus exposing millions of additional patients to pharmaceuticals that may do more damage than good [4]. What is more, the different manual-guided psychiatric diagnoses follow approaches like- categorical, dimensional, and others, those also have their controversy in terms of their validity in many cases [5, 6]. Prior to the introduction of manual-based diagnostic systems (about 1980), the clinician's subjective experience was highly regarded. Although the effectiveness of the method may have increased since the DSM/ICD was introduced, the limitations of this technique are now evident [2, 3, 5, 6, 42, and 43]. A study [44] on clinician's subjective experience supports the resurrection of growing potential on clinician's subjectivity and its promising role in diagnostic process. Other recent studies [39, 40] show evidence that the clinician's subjective experience might play a useful role in the diagnostic process as well. The term "diagnosis" refers to both a phrase and a procedure that is closely linked to concerns of classification [45]. In conventional psychiatric diagnosis process, a doctor or clinician classify among listed mental disorders by referring to the outlined and data-driven manuals (DSM-5/ ICD-11) that include descriptions, symptoms, and so forth; and by following other diagnostic criteria. This is an objective approach that implies internal information-based analysis and it has been put much of the importance comparatively. 
Simultaneously, similar importance should be imposed on practitioners' external analysis, namely, culture-specific knowledge along with domain knowledge, through attained expertise and experience with a subjective approach during diagnosis process that has been focused on in the current study, this is shown in Fig.1. **Fig.1.** A unified framework of mental disorder classification consisting clinicians' subjective approach (CSA). The primary purpose of this paper is to provide a general framework and adopt concrete methods to analyze clinicians' subjective approach (CSA) in mental disorder classification or psychiatric nosology. The content of this paper is generally arranged into three parts, qualitative analysis of CSA using binary relations, quantitative analysis of CSA using the eigenvector method, and evaluation-based analysis. **2. Three-Way Decision in Clinicians' Subjective Approach (CSA)** Yao [10] created the three-way decisions (3WD) theory, which tries to provide a cohesive framework for thinking, problem-solving, and information processing in three dimensions. It gives us a useful foundation for simulating real-world challenges. 3WD has been applied to a variety of fields, including three-way conflict analysis [11, 12], three-way clustering [13, 14, 15, 16], three-way recommender systems [17, 18, 19], three-way concept analysis [20, 21], three-way granular computing [10, 22], three-way face recognition [24], and so on. We use 3WD theory as our fundamental framework to examine clinicians' subjective approach (CSA) to classify mental disorders in psychiatric diagnosis. CSA is studied using three models: the Trisecting-Acting-Outcome (TAO) model, the three-level computing model, and the evaluation-based approach. As shown in Fig. 1, we undertake a qualitative and quantitative investigation of CSA. We use CSA to rate a group of mental disorders in the qualitative study. The ranking is based on the order relationships between illness pairings. To model the structure and assess CSA, we apply TAO mode in three-way choices. From the standpoint of a clinician, we compare and assess the relative preference of all disorders in pairs first. Following that, we divide all of these couples into three categories: preferred, neutral, and less favored. Finally, we rank illnesses/disorders in order by using a binary relationship. The eigenvector approach is used in quantitative analysis to calculate disease weights. A disadvantage of the eigenvector technique is that when the number of items exceeds 9, a large mistake in the computation might occur [25]. By constructing a three-level structure, the three-level computing paradigm is employed to solve this problem. The eigenvector approach is then used to calculate weights from top to bottom numerous times, allowing us to get a large number of disease weights without sacrificing too much precision. The findings of the qualitative and quantitative study are a ranking list and a set of numerical weights based on the magnitude levels of illnesses based on the clinician's most extreme assumptions. We also built a comparative classification of illnesses into three groups with varying important levels, three-way evaluation-based model is utilized in this study for the aim of comprehending and portraying these results in a more straightforward way. ## 3 Three-Way Qualitative Clinicians' Subjective Approach (CSA) Analysis Order relations, which is an intuitive sense of ordering things against one another, are a significant implication of binary relations. 
For example, given that (x, y) is an ordered pair of two elements, we may derive order relations between x and y, such as x being greater than y, x being poorer than y, or x being a component of y in various instances. Order relations are a frequent representation of user preference in decision theory, we write an order relation as \(\;\geq\;\) or \(>\). If x \(\;\geq\;\) y, we say x is at least as good as y, if x \(>\) y, we say x is strictly better than y. We solely focus on the strict order relation of "\(>\)" in this study to develop a clinician's preference-based approach (CPA) later on in a more clear way based on the property of trichotomy. ### Clinicians' Subjective Approach (CSA) and the Property of Trichotomy The idea of user preference has been intensively investigated in several user-oriented research domains, such as information retrieval [26; 27], economics [28], and social sciences [29]. In qualitative CSA analysis, the concept of user preference theory may be employed, and the feature of trichotomy is crucial. This trait makes order relations useful for modeling a CSA towards a set of illnesses. Humans are skilled at establishing relative comparisons between numbers, goods, methods, and other things in our daily lives. Given two arbitrary real numbers n and m, we may easily argue that one of nm, n = m, or n \(>\) m must hold in number theory; this is known as the trichotomy property of real numbers. Similarly, a person can identify the ordering relation between x and y as one of the following: x is preferred over y, x is indifferent to y, or x is less favored than y by comparing a pair of things x and y under a specified criterion. Obviously, a person's preferred preference for a pair of things is three. This concept can easily be expanded to include order relations. If we use an order relation \(>\) to represent the meaning "preferred", the indifference relation \(\sim\) is defined as an absence of \(>\), which is defined as: \[\mathrm{x}\sim\mathrm{y}\;\Leftrightarrow\;\neg\;\mathrm{(x}>\mathrm{y)}\; \Lambda^{\neg}\;\mathrm{(y}>\mathrm{x)} \tag{1}\] Give an ordered pair (x, y), if an order relation \(>\) expresses the first element is preferred than the second element. Its converse relation which is written as \(\;\stackrel{{\mbox{\tiny$\blackrightarrow$}}}{{\mbox{\tiny$ \blackrightarrow$}}}\;\), is called a less preferred relation, which is defined as: \[\mathrm{x}\;\stackrel{{\mbox{\tiny$\blackrightarrow$}}}{{ \mbox{\tiny$\blackrightarrow$}}}\;\mathrm{y}\;\Leftrightarrow\;\mathrm{(y}> \mathrm{x)} \tag{2}\] We usually write as < if it does not cause any ambiguity. **Definition 1._An order relation \(>\) on a disorder set \(D\) is called trichotomous if \(\forall\)(x, y), x, y \(\in\) D, exactly one of x \(>\) y, x \(\sim\) y, or x \(<\) y holds._ The purpose of user preference-related research, from the perspective of a decision-maker, is to identify optimum options by examining the order relations among members of a nonempty set, which is characterized as preference relation. The method of user preference theory may be described as first establishing reasonable axioms based on the decision maker's preferences, and then assessing a user's preferring behavior based on those preferences [28]. The mathematical properties of trichotomy and transitivity are used to construct a preference relation. **Definition 2.**A preference relation, denoted as \(>\), is a special type of binary relation on the set of elements D that satisfies the following two rationality properties. 
\(\forall\)x, y, z \(\in\) D, \[\text{Trichotomous: (x > y) }\forall\text{ (x \sim y) }\forall\text{ (x < y)},\] \[\text{Transitive: x > y }\wedge\text{y > z }\Rightarrow\text{ x > z} \tag{3}\] If we use an order relation \(>\) as a preference relation, user preference is represented as: \[\text{x > y }\Longleftrightarrow\text{x is preferred than y}\] \[\text{x \sim y }\Longleftrightarrow\text{x is indifferent with y}\] \[\text{x < y }\Longleftrightarrow\text{x is less preferred than y} \tag{4}\] For a disorder set Ds, we divide all disorder attribute pairs into three classes. Based on this trisection, disorder ranking can be induced. This process is shown in Fig. 2: \(\{(x,y)\mid x>y\}\)\(\{(x,y)\mid x>y\}\)\(\{(x,y)\mid x-y\}\)\(\{(x,y)\mid x<y\}\)\( Linear orders, weak orders, and semiorders are the three types of order relations that all have the properties of trichotomy and transitivity. These three order relations are employed in this article to describe the clinician's choice for CSA analysis. ### Modeling of CSA as Linear Order Given a disorder set Ds, a linear order \(>\) enables us to arrange diseases in the form Ds = {d\({}_{\rm l}\), d\({}_{\rm 2}\),..., d\({}_{\rm n}\)}, such that d\({}_{\rm i}>\) d\({}_{\rm j}\) if and only if I \(<\) j, for this reason, a linear order is also called a chain. **Definition 3.**Given a set Ds, a binary relation \(>\) is a linear order on Ds, if it satisfies for any x, y, z \(\in\) Ds: \[\text{Asymmetric: x $>$ y $\Rightarrow$ \neg$ (y $>$ x)},\] \[\text{Transitive: x $>$ y $\wedge$ y $>$ z $\Rightarrow$ x $>$ z},\] \[\text{Weakly Complete: x $\neq$ y $\Rightarrow$ (x $>$ y) $\vee$ (y $>$ x)} \tag{5}\] The asymmetric feature precludes the circumstance in which d\({}_{\rm i}\) is better than d\({}_{\rm j}\) and d\({}_{\rm j}\) is better than d\({}_{\rm i}\) at the same time. Reasonable inference may be applied thanks to the transitive property. Weak completeness assures that all illnesses are comparable to one another. **Example 1**.Given a set of disorder Ds = {d1, d2, d3, d4, d5}, a clinicians' preference in accordance of the assumptions on a patient having a potential disorder on Ds is defined by a linear order \(>\). Suppose the ordering between disorders is specified by a clinician as: \[\text{d1 }\text{ }\text{ }\text{ }\text{ }\text{ d5, d1 }\text{ }\text{ }\text{ }\text{ }\text{ d4, d1 }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ d2, d3 }\text{ }\text{ }\text{ }\text{ }\text{ d1, d3 }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ d2,}\] \[\text{d3 }\text{ }\text{ }\text{ }\text{ }\text{ }\text{ d4, d3 }\text{ }\text{ }\text{ }\text{ }\text{ d5, d5 }\text{ }\text{ }\text{ d4, d5 }\text{ }\text{ }\text{ }\text{ d2, d4 }\text{ }\text{ }\text{ }\text{ }\text{ d2}.\] Then, disorders are ranked as: \[\text{d3 }\text{ }\text{ }\text{ d1 }\text{ }\text{ }\text{ }\text{ d5 }\text{ }\text{ }\text{ }\text{ }\text{ d4 }\text{ }\text{ }\text{ }\text{ }\text{ d2}.\] ### Modeling of CSA as Weak Order to Illustrate Comorbidity in Mental Disorder Weak orders are commonly utilized in several disciplines to indicate user preference relations [26, 27, 28, and 29]. A weak order enables ties in the ranking results, as opposed to a linear order that places items in a chain, which is quite powerful in representing real-world issues. To put it another way, some properties in a collection may be regarded as indifferent. 
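Before turning to comorbidity, the ranking step of Example 1 can be made concrete with a small sketch (an illustration added here, not code from the original study): given the clinician's strict pairwise judgments, sorting the disorders by how many others each one dominates reproduces the chain, and any pair related by neither judgment is flagged as indifferent according to Eq. (1).

```python
# Minimal sketch (illustrative only): recover the ranking of Example 1 from
# strict pairwise preferences, and detect indifferent pairs via Eq. (1).
from itertools import combinations

disorders = ["d1", "d2", "d3", "d4", "d5"]
# Strict preferences "x > y" specified by the clinician in Example 1.
prefers = {("d1", "d5"), ("d1", "d4"), ("d1", "d2"),
           ("d3", "d1"), ("d3", "d2"), ("d3", "d4"), ("d3", "d5"),
           ("d5", "d4"), ("d5", "d2"), ("d4", "d2")}

# Eq. (1): x ~ y  iff  neither x > y nor y > x.
indifferent = [(x, y) for x, y in combinations(disorders, 2)
               if (x, y) not in prefers and (y, x) not in prefers]

# For a linear order, sorting by the number of dominated disorders
# (out-degree) reproduces the chain d3 > d1 > d5 > d4 > d2.
out_degree = {d: sum(1 for p in prefers if p[0] == d) for d in disorders}
ranking = sorted(disorders, key=out_degree.get, reverse=True)

print(" > ".join(ranking))   # d3 > d1 > d5 > d4 > d2
print(indifferent)           # [] -- a linear order leaves no ties
```

The same out-degree idea extends to weak orders, where tied out-degrees correspond to the indifferent (comorbid) pairs discussed next.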
In mental disorder classifications, comorbidity of psychiatric illnesses is a widespread issue with major consequences for health-care delivery [36]. Depression, anxiety, and drug dependency disorders are the most common comorbid mental illnesses [37]. Here, we used \(\sim\) to denote comorbidity of mental disorders and weak ordered relation \(>\) to denote ranking of clinician's preference among disorders. **Definition 4.** A weak order \(>\) is a binary relation on set Ds, if it satisfies for any x, y \(\in\) Ds: \[\text{Asymmetric: x $>$ y $\Rightarrow$ \neg$ (y $>$ x)},\] **Example 2.**Given a set of disorder Ds = {d1, d2, d3, d4, d5}, a clinician's preference on Ds is defined by a weak order \(>\). Suppose the ordering between disorders is specified as: \[d1\ \ >\ d3,\ d1\ \ >\ d4,\ d1\ \ >\ d5,\ d2\ \ >\ d3,\ d2\ \ >\ d4,\ d2\ \ >\ d5,\ d3\ \ >\ d4,\ d3\ \ >\ d5.\] Because the clinician neither preferences d1 to d2, nor prefer d2 to d1, so d1 must be in comorbid condition or indifferent with d2, written d1 \(\sim\) d2. That means the clinician suspects the particular patient has disease d1 and d2 at the same time. Similarly, d4 \(\sim\) d5. By considering the above ordering, we can rank disorder attributes like: \[d1\sim d2\ \ >\ d3\ >\ d4\sim\ d5.\] ### Modeling of CSA as Semiorder In fact, a transitive indifference relationship isn't always the case. A reader may assume that books C and D are equally good, as are books D and E, after reading three novels, yet he can know that he prefers C to E based on his intuition after reading three books. To put it another way, the individual's preferring attitude cannot discriminate neither between C and D, nor between D and E, but he can distinguish between C and E. To model this type of situation, Luce [31] proposed semiorders. **Definition 5**. A semiorder \(>\) on a set Ds is a binary relation which satisfies for any x, x', x", y, y' \(\in\) Ds: \[\text{Asymmetric: x $>$ y}\Rightarrow\neg\ (y>x),\] \[\text{Ferrers: (x $>$ x') \land(y $>$ y')}\Rightarrow(x>y')\ \lor(y>x'),\] \[\text{Semi transitive: (x $>$ x') \land(x'>$ x'')}\Rightarrow(x>y)\ \lor(y>x'') \tag{7}\] **Example 3**. Given a set of disorder Ds = {d1, d2, d3, d4, d5}, a clinician's preference on Ds is defined by a semiorder \(>\). Suppose the ordering between disorders is specified as: \[d1\ \ >\ d2,\ d1\ \ >\ d3,\ d1\ \ >\ d4,\ d1\ \ >\ d5,\ d2\ \ >\ d4,\ d2\ \ >\ d5,\ d3\ \ >\ d5,\ d4\ \ >\ d5.\] The clinician neither prefers d2 to d3, nor prefer d3 to d2, so d2 \(\sim\) d3, similarly we can get d3 \(\sim\) d4, however, the indifference is intransitive, because d2 \(>\) d4. So, we cannot rank all disorders in one order but several, like below: \[d1\ \ >\ d2\ >\ d4\ >\ d5,\] \[d1\ \ >\ d2\sim d3\ >\ d5,\] \[d1\ \ >\ d3\sim d4\ >\ d5.\] ## 4 Three-Way Quantitative Clinicians' Subjective Approach (CSA) Analysis Mathematically, quantitative CSA analysis can be considered as a process of mapping each disorder to a numerical value, (7) where Ds is a set of disorders, R is a real number set, and w is a mapping function that calculates or assigns a numerical value to each disorder. For a disorder d \(\in\) Ds, w(d) represents its weight from the perspective of a clinician. ### Formulating a Three-Level Structure This study offered two methods for calculating or assigning numerical weights to each ailment. The first is calculating weights using the eigenvector approach, which is covered in Sect. 4.2. The second method is to assign weights. 
To be more explicit, we first use the eigenvector method to construct an important scale with numerical weights, and then we compare each disease to this scale to determine its weight; this methodology is detailed in Sect. 4.3. Obviously, the eigenvector technique is vital in both approaches; however, it has a limitation in that it is not suitable when the number of objects is greater than 9 since large mistakes in the computation would be introduced [25]. We use the 3WD theory to solve this problem. More specifically, the issue is divided into three levels, after which the eigenvector approach is used to calculate weights from top to bottom. The three-level structure allows us to limit the number of items in the computation of the weights to no more than 9, allowing us to compute weights using the eigenvector approach without sacrificing too much precision. ### Three-Way Quantitative Disease Weighting Based on Eigenvector Method Figure 3 depicts the framework of the quantitative illness weighting model. Assume we have a disorder set Ds, where d\({}_{\rm ij}\) denotes a disorder at the lowest level. We create a three-level framework by categorizing illnesses into various groups based on semantic significance. The second step is to use the eigenvector approach to calculate from top to bottom after we have this three-level structure. We create cluster weights based on clinician choice, and then we determine the weights of illnesses inside each cluster based on cluster weight. The following is a description of how to calculate weights using the eigenvector approach. Assume that a disorder collection Ds has been divided into n clusters, n 9, with no more than 9 illnesses Figure 3: The structure of the three-level disease weighting method. in each cluster. We establish a comparison matrix M as specified in Definition 6 to produce a weight vector w = (w\({}_{1}\), w\({}_{2}\), \(\cdots\),w\({}_{n}\)) for clusters, where element m\({}_{ij}\) reflects the relative significance of a cluster Di compared to a cluster Dj. **Definition 6**. A comparison matrix M is a square matrix of order n, whose elements are m\({}_{ij}\), M is a positive reciprocal matrix if M is: \[\begin{array}{l}\mbox{Positive: $\forall$i, j <n, m${}_{ij}>0$,}\\ \mbox{Reciprocal: $\forall$i, j <n, m${}_{ij}=1/m${}_{ji}$.}\end{array} \tag{9}\] Where, \(\forall i,j\) (i, j=1, 2\(\ldots\) n). M is a comparison matrix that looks like below, and in a perfect situation, m\({}_{ij}\) should exactly be the weights ratio of a cluster Di compared with Dj. \[\mbox{M}=\left(\begin{array}{cccc}m_{11}&m_{12}&m_{1n}\\ m_{21}&m_{22}&m_{2n}\\.&.&.\\.&.&.\\.&.&.\\.&.&.\\ \mbox{$m_{n1}$}&m_{n2}&m_{m}\\ \end{array}\right)=\left(\begin{array}{cccc}w_{1}\mbox{ }/\ w_{1}&w_{1}/w_{2}\...w_{1}/w_{n}\\ w_{2}\mbox{ }/\ w_{1}&w_{2}/w_{2}\...w_{2}/w_{n}\\.&.&.\\.&.&.\\.&.&.\\.&.&.\\ \mbox{$w_{n}$}\mbox{ }/\ w_{1}&w_{n}/w_{2}\...w_{n}/w_{n}\\ \end{array}\right) \tag{10}\] In practice, the values of components in a comparison matrix are determined by the user's preference and flexibility. We use the 9-point rating scale established by Saaty [25] to precisely determine the weight ratio w1/w2 between two clusters (see Table 1). Table 1 shows that the number 1 denotes that two clusters are equally essential. 
An arbitrary cluster should be equally significant to itself, hence the value mii of the major diagonal in a comparison \begin{table} \begin{tabular}{c|l|l} \hline Intensity of & Definition & Explanation \\ Importance & & \\ \hline 1 & Equal importance & Two activities contribute equally to the objective \\ \hline 3 & Weak importance of one over & Experience and judgment slightly favor one activity over another \\ \hline 5 & Essential or strong importance & Experience and judgment strongly favor one activity over another \\ \hline 7 & Demonstrated importance & An activity is strongly favored and its dominance demonstrated in practice \\ \hline 9 & Absolute importance & The evidence favoring one activity over another is of the highest possible order of affirmation \\ \hline 2,4,6,8 & Intermediate values between the two adjacent judgments & When compromise is needed \\ \hline \end{tabular} \end{table} Table 1: The Saaty’s 9-points rating scale [25] matrix must be 1. Furthermore, for two clusters a and b, the weight ratio wa/wb should be larger than 1 if an is favored over b, else it should be equal to or less than 1. We may get the matrix equation as follows under ideal conditions: \[\mathrm{Mw}=\left(\begin{array}{ccc}w_{1}\ /\ w_{1}&w_{1}/w_{2}\...w_{1}/w_{n}\\ w_{2}\ /\ w_{1}&w_{2}/w_{2}\...w_{2}/w_{n}\\.&.&.\\.&.&.\\.&.&.\\.&.&.\\ w_{n}\ /\ w_{1}&w_{n}/w_{2}\...w_{n}/w_{n}\end{array}\right)_{\mathbf{X}} \left(\begin{array}{c}w_{1}\\ w_{2}\\.\\.\\.\\.\\.\\.\\.\\.\\ \mathbf{x}\ \ Because C.R.\(\leq 10\%\), which satisfies consistency checking, the eigenvector of comparison matrix can be used as the weights of {D1, D2, D3, D4, D5, D6}, that is: \[\text{w}=(0.140,\,0.041,\,0.290,\,0.038,\,0.071,\,0.420)\] We utilize the weights of clusters as a starting point and use the same method to compute the weights of illnesses in each cluster. The weights of illnesses are then normalized by dividing them by the weights of respective clusters. Finally, we may calculate illness weights. ### A Quantitative Disease Weighting Method Using an Importance Scale It's a simple approach for a doctor to assign numerical numbers to illnesses as weights depending on his or her own viewpoint. When the number of suspected illnesses is enormous, however, fluctuation in judgment is unavoidable, resulting in low accuracy in the conclusion. In light of this, an importance scale is employed to solve the problem [25]. The disease weighting approach employing a significance scale may be broken down into the three parts below. First, intensities of the preference degree of illness orders are grouped into several degrees from a clinician's perspective, such as significantly matched, matched, moderately matched, weakly matched, and not matched. We can then calculate weights for each intensity degree using the eigenvector approach described in Sect. 4.2. A three-level structure is required when the number of intensity degrees exceeds 9. As a result, we create a importance scale to aid our judgment. Finally, the weights of illnesses are calculated using this scale. **Example 5.**Suppose a clinician sets five intensities of the preferential degree of suspected disorders, which are A: significantly matched, B: matched, C: moderately matched, D: weakly matched, E: not matched. 
A clinician builds a comparison matrix of these intensities, and the weights calculation of intensities is described as (Table 4) [34]: \begin{table} \begin{tabular}{l|l|l|l|l|l|l} \hline & A & B & C & D & E & Weight \\ \hline A & 1 & 2 & 3 & 5 & 9 & 0.450 \\ \hline B & \(\nicefrac{{1}}{{2}}\) & 1 & 2 & 4 & 6 & 0.277 \\ \hline C & 1/3 & \(\nicefrac{{1}}{{2}}\) & 1 & 2 & 3 & 0.147 \\ \hline D & 1/5 & \(\nicefrac{{1}}{{4}}\) & \(\nicefrac{{1}}{{2}}\) & 1 & 2 & 0.081 \\ \hline E & 1/9 & 1/6 & 1/3 & \(\nicefrac{{1}}{{2}}\) & 1 & 0.046 \\ \hline \multicolumn{3}{l}{\(\lambda\)_max_= 5.024} \\ \multicolumn{3}{l}{C.R = 0.533\%} \\ \hline \end{tabular} \end{table} Table 4: A pairwise comparison matrix of intensity levels \begin{table} \begin{tabular}{l|l|l|l|l|l|l|l} \hline & D1 & D2 & D3 & D4 & D5 & D6 & Weight \\ \hline D1 & 1 & 3 & 1/2 & 4 & 2 & 1/3 & 0.140 \\ \hline D2 & 1/3 & 1 & 1/7 & 1 & \(\nicefrac{{1}}{{2}}\) & 1/9 & 0.041 \\ \hline D3 & 2 & 7 & 1 & 9 & 5 & \(\nicefrac{{1}}{{2}}\) & 0.290 \\ \hline D4 & \(\nicefrac{{1}}{{4}}\) & 1 & 1/9 & 1 & \(\nicefrac{{1}}{{2}}\) & 1/9 & 0.038 \\ \hline D5 & \(\nicefrac{{1}}{{2}}\) & 2 & 1/5 & 2 & 1 & 1/6 & 0.071 \\ \hline D6 & 3 & 9 & 2 & 9 & 6 & 1 & 0.420 \\ \hline \multicolumn{3}{l}{\(\lambda\)_max_= 6.048} \\ \hline \multicolumn{3}{l}{C.R = 0.762\% \(<10\%\)} \\ \hline \end{tabular} \end{table} Table 3: Weights calculation of six clusters Because the consistency check is complete, the weights of these intensities are used to construct an important scale. Then, using this scale, we compare each property one by one, assigning various weights to each disorder attribute from the clinician's perspective. ## 5 Three-Way Evaluation based CSA Analysis The 3WD [30] is based on dividing the universe into three zones and employing various tactics in each. The result of a qualitative or quantitative CPA analysis is a ranking list or a set of numerical weights that are significant but difficult for a physician to use in making a choice. These findings will be processed and classified into three pair-wise disjoint classes with varying levels of relevance, namely high importance, medium importance, and low importance, in this part. We'll refer to these three classes as H, M, and L for the rest of this study. We chose three classes because human cognition and problem-solving rely on a three-way divide, which allows us to convert complexity into simplicity in a variety of scenarios [23]. ### Trisecting a Disorder Set based on Thresholds Research Domain Criteria (RDoC), considering the possibility of increasing need of constructing various thresholds for different purposes, is trying to gather information consistently to set thresholds in diagnostic systems of mental disorder where this is relevant; especially in particular research purpose or applications in clinical settings or in health policymaking [33]. Our current study acknowledges this concern and suggests insightful paths on formulating thresholds in mental disorder classification while in the diagnosis process. Using two percentiles is one method for trisecting a disorder set. The first step is to convert a linear order \(>\) from a qualitative or quantitative analytical result. This phase can be bypassed if the outcome of the qualitative method is based on linear order. To identify the three areas, the second step is to use a pair of thresholds, based on the percentiles. There are various methods for linearly transforming qualitative and quantitative findings. 
The first is topological sorting, which states that an element will not appear in a ranking list until all other items that are preferable to it have been listed [32]. We can generate a decreasing ranking list by utilizing topological sorting. Another option is to use an assessment function to convert qualitative and quantitative analytical results into a set of diseases' evaluation status values (ESVs). The ESV of disease d is defined as follows: \[\text{v(d)}=\frac{|\{x\in Ds|d>x\}|}{|\,Ds|} \tag{14}\] Illnesses will be sorted in decreasing order depending on their ESVs, with diseases with the same ESV being listed in any order. Now, we have a list of ESVs, which is in the form of v\({}_{1}\), v\({}_{2}\)..., v\({}_{n}\) where v\({}_{1}\) is the largest value and v\({}_{n}\) is the smallest value. Using the ranking lists of the above two methods, we then adopt two ESVs at \(a^{\text{th}}\) and \(\beta^{\text{th}}\) percentiles with 0 \(<\)\(\beta\)\(<\)\(a\)\(<\) 100 to calculate a pair of thresholds \(h\) and \(l\) as: \[\begin{array}{l}h=\text{v}_{\text{ [{\rm{fm}}/100]}}\\ l=\text{v}_{\text{ [{\rm{fm}}/100]}}\end{array} \tag{15}\] Where the ceiling function gives the smallest integer that is not less than x, and the floor function gives the largest integer that not greater than x. The floor and ceiling functions are necessary for the reason that \(\alpha\)m/100 and \(\beta\)m/100 may not be integers [30]. Three regions, H, M, and L, may be created using the descending ranking list and two thresholds. Disorders in the H region are of high priority, disorders in the M region are of moderate priority, and disorders in the L zone are of low priority. ### Trisecting a Disease Set Based on a Statistical Method Yao and Gao [30] examined the statistical procedure of building and evaluating three areas. Mean and standard deviation are statistical methods for examining numerical numbers that may be applied to the findings of a quantitative CPA study. Suppose w(d\({}_{1}\)),w(d\({}_{2}\)),...,w(dn) are the weights of disorders in Ds, n is the cardinality of Ds, the mean and standard deviation is calculated by: \[\mu=\frac{1}{n}\sum_{i=1}^{n}w(a_{i})\quad, \tag{16}\] \[\sigma=\left(\frac{1}{n}\sum_{i=1}^{n}\left(w(a_{i})-\mu\right)^{2}\right)^{ \frac{1}{2}}\quad, \tag{17}\] We use two non-negative numbers k\({}_{1}\) and k\({}_{2}\) to represent the position of thresholds away from the mean, then a pair of thresholds is determined as [30]: \[\mathrm{h}=\mu+k_{1}\sigma,\,k_{1}\geq 0,\] \[\mathrm{l}=\mu-k_{2}\sigma,\,k_{2}\geq 0. \tag{18}\] Based on thresholds h and l, three regions of a disorder set can be constructed as: \[\mathrm{H}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\geq\mathrm{h}\right\}\] \[=\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\geq\mu+k_{1}\sigma\right\},\] \[\mathrm{M}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{l}<\mathrm{w(x)}<\mathrm{h}\right\}\] \[=\left\{x\in\mathrm{Ds}|\mu-k_{2}\sigma<\mathrm{w(x)}<\mu+k_{1} \sigma\right\},\] \[\mathrm{L}_{\left(k1,\,k2\right)}(\mathrm{w}) =\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\leq\mathrm{l}\right\} \tag{19}\] \[=\left\{x\in\mathrm{Ds}\ |\mathrm{w(x)}\leq\mu-k_{2}\sigma\right\}\] Disorders can be categorized into three regions H, M, and L considering their weights. ## 6 Discussion and Future Directions We essentially emphasized that the mental symptoms and indicators are passively observed subjects in the paradigm that this research proposes. 
However, in terms of phenomenological psychopathology, clinicians can also examine patient symptoms and signs using empathic techniques [46]. Additionally, mental disorders are defined in an operational manner in mainstream diagnostic systems (such as ICD-11 and DSM-5) but are not based on biological indicators. The psychiatric diagnoses therefore correspond to practical or fuzzy kinds but not natural kinds. What is more, the operational definitions are relevant to language games in terms of Wittgenstein's philosophy of language [47]. Specifically, the instances with a single diagnosis might be linked by a chain of meanings rather than being supported by a single biological foundation, as in disease essentialism. With the addition of psychiatrist-patient interaction (i.e., the psychiatrist's empathic approach) and/or its influences on the classification as future work or an extension of this current study, the proposed classification model of mental disorders through clinicians' subjective approach based on 3WD can be further modified. Different paradigms, such as pointing graphs, can be used to develop the disorder ranking procedure. To calculate the weight value of each disorder cluster, the analytic hierarchy process can be adopted in lieu of the eigenvector method and the results can be compared. The quantification of the weight values for diseases using a three-level structure will be required of clinicians in the future when the number of intensity degrees exceeds 9. For the time being, this current study offers a theoretical approach to solve this complex problem related to psychiatric diagnosis. Practical implications from both the qualitative and quantitative perspectives should be explored in further studies to ensure the proposed method is better than other existing methods. ## 7 Conclusions The most widely used method of mental disorder classification is a data-driven, manual-based method, yet it has a number of drawbacks. We offer a three-part unified model for clinicians' subjective approach (CSA) analysis, based on three-way decisions. In the qualitative analysis, we use binary relations and the TAO model to rank mental diseases based on criteria-based preferences. The quantitative analysis employs the three-level computing paradigm, and the eigenvector technique is utilized to assign numerical weights to mental diseases. Finally, we categorize the results of the qualitative and quantitative investigations into three groups depending on their relative importance.
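As a reproducibility aid (not part of the original paper), the following minimal sketch computes the cluster weights and consistency ratio of Example 4 (Table 3) with the eigenvector method of Sect. 4.2 and then trisects the clusters using the mean/standard-deviation thresholds of Eqs. (16)-(19). The random index for \(n=6\) is Saaty's standard value, and \(k_{1}=k_{2}=0.5\) is an arbitrary illustrative choice; up to rounding, the computed weights and \(\lambda_{max}\) should match those reported in Table 3.

```python
# Minimal sketch (illustrative only): eigenvector weighting (Sect. 4.2)
# and statistical trisection (Sect. 5.2) for the matrix of Example 4.
import numpy as np

M = np.array([
    [1,   3,   1/2, 4, 2,   1/3],
    [1/3, 1,   1/7, 1, 1/2, 1/9],
    [2,   7,   1,   9, 5,   1/2],
    [1/4, 1,   1/9, 1, 1/2, 1/9],
    [1/2, 2,   1/5, 2, 1,   1/6],
    [3,   9,   2,   9, 6,   1  ],
])

# Principal eigenvalue/eigenvector; the normalized eigenvector gives the weights.
eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency ratio C.R. = C.I. / R.I., with Saaty's random index for n = 6.
n = M.shape[0]
CI = (lam_max - n) / (n - 1)
RI = 1.24                      # Saaty's random index for n = 6
CR = CI / RI                   # should be below 10%

# Statistical trisection, Eqs. (16)-(19): thresholds mu +/- k*sigma
# (population standard deviation, i.e. dividing by n as in Eq. (17)).
k1 = k2 = 0.5                  # arbitrary illustrative choice
mu, sigma = w.mean(), w.std()
h, l = mu + k1 * sigma, mu - k2 * sigma
regions = {"H": [], "M": [], "L": []}
for name, weight in zip(["D1", "D2", "D3", "D4", "D5", "D6"], w):
    regions["H" if weight >= h else "L" if weight <= l else "M"].append(name)

print(np.round(w, 3), round(lam_max, 3), f"C.R. = {CR:.2%}")
print(regions)
```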
In psychiatric diagnosis, the contemporary data-driven, manual-based method of classifying mental disorders is the most popular technique; however, it has several unavoidable flaws. Using three-way decisions as a framework, we propose a unified model for clinicians' subjective approach (CSA) analysis consisting of three parts: qualitative analysis, quantitative analysis, and evaluation-based analysis. The findings of the qualitative and quantitative investigation are a ranking list and a set of numerical weights based on illness magnitude levels according to the clinician's strongest assumptions. We further classify illnesses into three groups with varying importance levels; a three-way evaluation-based model is used to understand and present these results more clearly.
2309.07715
Logical implications between fundamental properties of relativistic quantum theories
A mathematical consistency condition constraining any relativistic quantum theory is formulated. It turns out to be equivalent to the locality of physics as well as, in the context of quantum field theory, microcausality, thereby revealing that these are actually two redundant hypotheses. It also promotes an epistemic interpretation of the wavefunction collapse, helps address unsolved problems related to nonlocal measurements and provides a new proof of the non-measurability of fermionic fields.
Antoine Soulas
2023-09-14T13:47:48
http://arxiv.org/abs/2309.07715v1
# Logical implications between fundamental properties of relativistic quantum theories ###### Abstract A mathematical consistency condition constraining any relativistic quantum theory is formulated. It turns out to be equivalent to the locality of physics as well as, in the context of quantum field theory, microcausality, thereby revealing that these are actually two redundant hypotheses. It also promotes an epistemic interpretation of the wavefunction collapse, helps address unsolved problems related to nonlocal measurements and provides a new proof of the non-measurability of fermionic fields. **Keywords:** relativistic quantum mechanics, relativistic measurements, locality, microcausality, axiomatic QFT. ## Acknowledgements I would like to gratefully thank my PhD supervisor Dimitri Petritis for the great freedom he grants me in my research, while being nevertheless always present to guide me. I also thank my friend Dmitry Chernyak for careful rereading and sharp remarks. ## Statements and Declarations **Funding:** no funding was received for conducting this study. **Competing Interests:** the author has no competing interests to declare that are relevant to the content of this article. **Ethics approval, Consent, Data, Materials and code availability, Authors' contribution statements:** not applicable.
A mathematical consistency condition constraining any relativistic quantum theory is formulated. It turns out to be equivalent to the locality of physics as well as, in the context of quantum field theory, microcausality, revealing that these are actually redundant hypotheses. It also promotes an epistemic interpretation of the wavefunction collapse, helps address unsolved problems related to nonlocal measurements, and provides a new proof of the non-measurability of fermionic fields.
2306.17383
Spin Swapping for an Exchange Magnon Spin Current
We propose the spin swapping effect for an exchange magnon spin current in a perpendicularly magnetized ferromagnetic medium with in plane anisotropy on the surface. The excitation of a magnon current flowing along an in-plane direction with an out-of-plane spin polarization leads to the generation of a secondary exchange spin current propagating along the out-of-plane direction, characterized by an in-plane spin polarization. The resulting exchange magnon spin current can induce an inverse spin Hall voltage of micro-volts. The exchange coupling at the interface between regions with different magnetic anisotropies plays a crucial role in generating the spin swapping effect. This is in contrast to the recently reported spin swapping for an exchange spin current in a canted antiferromagnet due to the Dzyaloshiskii-Moriya interaction.
Shuting Cui, Peng Yan, Wei Luo, Xiaofei Yang, Yue Zhang
2023-06-30T03:12:31
http://arxiv.org/abs/2306.17383v1
# Spin Swapping for an Exchange Magnon Spin Current ###### Abstract We propose the spin swapping effect for an exchange magnon spin current in a perpendicularly magnetized ferromagnetic medium with in plane anisotropy on the surface. The excitation of a magnon current flowing along an in-plane direction with an out-of-plane spin polarization leads to the generation of a secondary exchange spin current propagating along the out-of-plane direction, characterized by an in-plane spin polarization. The resulting exchange magnon spin current can induce an inverse spin Hall voltage of micro-volts. The exchange coupling at the interface between regions with different magnetic anisotropies plays a crucial role in generating the spin swapping effect. This is in contrast to the recently reported spin swapping for an exchange spin current in a canted antiferromagnet due to the Dzyaloshiskii-Moriya interaction. The spin current is a key concept in spintronics, and the investigation of spin current paves a way for developing spintronic devices without serious Joule heat [1]. A spin current tensor comprises two vector components: spin polarization and the flow of (quasi-)particles carrying spins [2]. In 2009, M. B. Lifshitz and M. I. Dyakonov predicted the spin swapping effect in a ferromagnetic metal (FM)/nonmagnetic metal (NM) bilayer with spin-orbit coupling at the FM/NM interface, which involves the interchange of the directions for the two components of a spin current tensor [3]. This prediction has inspired extensive theoretical investigations in the spin swapping for transporting electrons in metallic systems over the past decade [3, 4, 5, 6, 7]. Besides electrons, magnons can also transport spins due to the conservation of angular momentum [1, 2, 8]. The magnon flow carrying a spin (magnetization in equilibrium) is referred to as an exchange magnon spin current, since it originates from the exchange coupling between neighboring magnetic moments [1]. Very recently, Lin et al. observed magnonic spin swapping for an exchange spin current in a canted antiferromagnetic (AFM) insulator and ascribed it to the Dzyaloshiskii-Moriya interaction (DMI) [9]. While the mechanism for the magnonic spin swapping remains controversial, there is a consensus that it predominantly appears in canted AFM insulators, such as LaFeO\({}_{3}\) and LuFeO\({}_{3}\)[9, 10]. In other magnetic medium like a ferrimagnetic YIG, the spin swapping is negligible [10]. It remains an open issue whether the magnonic spin swapping is indeed a unique for a canted AFM medium. Experimental verification of magnonic spin swapping was achieved by measuring the inverse spin Hall voltage (U\({}_{\text{ISHE}}\)) which is proportional to the spin current density on the _surface_ of a magnetic medium [11]. It is known that the magnetic structure near the surface can differ from the interior due to various factors, such as surface domain, dead magnetization layer, surficial anisotropy, and interfacial DMI [12, 13, 14, 15, 16]. This spatially inhomogeneous magnetic structure may enhance an exchange magnon spin current that is proportional to the spatial derivation of magnetization [1, 2]. Therefore, in a medium exhibiting an inhomogeneous surficial magnetic structure, magnonic spin currents with interchangeable flow directions and spin polarizations are possible. In this letter, we propose magnonic spin swapping in a perpendicularly magnetized _ferromagnetic_ medium with surficial in-plane anisotropy (IPA). 
Our study demonstrates that because of the exchange coupling at the interface between two regions featuring different magnetic anisotropies, an exchange magnon spin current flowing along the out-of-plane direction with an in-plane spin polarization can be triggered by the inner spin wave flowing along an in-plane direction with an out-of-plane spin polarization, exhibiting typical spin swapping characteristics (Fig. 1). This surficial exchange magnon spin current can give rise to a micro-volt U\({}_{\text{ISHE}}\), comparable to that in a canted AFM medium [9]. However, the spin swapping in this work is different in that it originates from exchange coupling instead of DMI. The micromagnetic simulation was carried out by using a MUMAX software. To controllably modify surficial magnetic properties, we consider a ferromagnetic medium (saturation magnetization \(M_{\text{S}}=6.9\)\(\times\)\(10^{5}\) A/m) with inner perpendicular magnetic anisotropy (PMA) and a thin surficial layer with IPA. The easy axis of surficial IPA is along the \(x\)-axis direction with an anisotropy constant of \(1\times 10^{4}\) J/m\({}^{3}\), and that of PMA is \(1\times 10^{6}\) J/m\({}^{3}\). The dimension of the PMA layer is 400 nm \(\times\) 200 nm \(\times\) 100 nm, and the IPA layer (\(t_{\text{IP}}\)) has a thickness ranging from 0 and 12 nm. The cell dimension is 0.5 nm, which is smaller than the exchange length (\(\sim\) 10 nm) as calculated by \(\ l_{ex}=\sqrt{\frac{2A}{\mu_{0}M_{\text{S}}}}\) and in close proximity to the lattice parameter. This ensures precise calculation of the spatial derivation of magnetization. Figure 1: Schematic of magnonic spin swapping for an exchange spin current in a perpendicularly magnetized medium with a surficial in-plane-anisotropy. A spin wave was excited by a localized alternating magnetic field \(\widetilde{H}_{ac}=H_{\text{max}}\sin(2\pi ft)\widetilde{e}_{x}\) in the middle of the medium. Here \(f\) and \(H_{\text{max}}\) are the linear frequency and amplitude, respectively. Figures 2 (a) and (b) exhibit the magnetization oscillation inner the PMA medium (\(z=0\)) and on the surface (\(z=104\) nm) for a 4-nm thick IPA layer. The magnetization direction transits from inner \(z\)-axis direction to surficial \(x\)-axis direction. Since the alternating field is along the \(x\)-axis direction, parallel to the IPA easy axis, we will demonstrate that the magnetic oscillation on the surface was not predominantly excited by the external field but triggered by the magnetization oscillation inner the PMA layer. The density of an exchange magnon spin current flowing along the \(x(z)\) axis was quantified as \(J_{sw}^{x(z)}=A[\widetilde{m}\times\frac{\partial\widetilde{m}}{\partial x (z)}]_{x(z)}\)[1]. Here the subscript sw is short for spin swapping. The spatial distribution of \(m_{\text{x}}\) and \(m_{\text{z}}\) in equilibrium and magnon spin current density were indicated in Figs. 2(c) and (d), respectively. Here we considered the spin current density at \(t=6\) ns when the magnetic oscillation is sufficiently stable. Inner the PMA layer (\(z=0\) nm), the magnon spin current composed by \(m_{z}\) and \(J_{s}^{x}\) [denoted by \((m_{s},J_{s}^{x})\)] is dominant over \((m_{\text{x}},J_{s}^{z})\). On the surface and in the vicinity of the PMA/IPA interface (\(z=100\sim 104\) nm), the magnon spin current \((m_{\text{x}},J_{s}^{z})\) assumes a prominent role. This interchange of the directions for two spin-current components indicates typical spin swapping characteristics. 
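As a numerical aside, the exchange spin current density defined above can be evaluated directly from a discretized magnetization profile by finite differences. The sketch below is a minimal illustration only, not the MUMAX workflow used in the paper: the exchange constant and the test profile m(z) are assumed values chosen purely to show the computation of \(J_{sw}^{z}=A[\vec{m}\times\partial\vec{m}/\partial z]_{z}\).

```python
# Illustrative sketch (not the paper's MUMAX workflow): evaluate the exchange
# magnon spin current density J_sw^z = A [m x dm/dz]_z on a 1-D grid of cells.
import numpy as np

A = 1.3e-11                  # exchange constant in J/m (assumed value)
dz = 0.5e-9                  # cell size along z, as in the simulation (0.5 nm)
z = np.arange(0, 104e-9, dz)

# Assumed test profile: magnetization rotating from +z (PMA bulk) towards +x
# (IPA surface layer), with a small toy m_y component standing in for precession.
theta = np.clip((z - 100e-9) / 4e-9, 0.0, 1.0) * np.pi / 2
m = np.stack([np.sin(theta),                          # m_x
              0.05 * np.sin(2 * np.pi * z / 20e-9),   # m_y (toy oscillation)
              np.cos(theta)], axis=1)                 # m_z
m /= np.linalg.norm(m, axis=1, keepdims=True)

# z-component of m x dm/dz, i.e. J_sw^z = A (m_x dm_y/dz - m_y dm_x/dz).
dm_dz = np.gradient(m, dz, axis=0)
J_sw_z = A * (m[:, 0] * dm_dz[:, 1] - m[:, 1] * dm_dz[:, 0])

print(f"max |J_sw^z| = {np.abs(J_sw_z).max():.2e} J/m^2")
```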
Figure 3: (a) \(m_{\text{x}}\) in equilibrium as a function of the thickness of the IPA surficial layer (\(tr\)). (b) Average surficial magnon spin current (\(<J_{sw}^{z}>\)) as a function of the thickness of \(t_{\text{IP}}\). (c) \(<J_{sw}^{z}>\) under the exciting field acting on different ranges along the thickness direction from 0 to \(z\). (d) Figure 2: Temporal magnon spin current density (a) inner the PMA layer (z = 0 nm) and (b) on the surface (z = 104 nm). (c) Changing of \(m_{\text{x}}\) and \(m_{\text{z}}\) at the equilibrium state and (d) the exchange magnon spin current density \(J_{sw}^{x}\) and \(J_{sw}^{z}\) along the thickness direction. \(<J_{sw}^{z}>\)**under different exchange constants at the PMA/IPA interface.** In experiments, the magnonic spin swapping can be characterized by measuring a DC \(U_{\rm ISHE}\) that is proportional to the average surficial spin current density along the \(z\)-axis (\(<J_{sw}^{z}>\)). Here \(<J_{sw}^{z}>\)was calculated by averaging \(J_{sw}^{z}\)over a sufficiently long period to ensure a fixed \(<J_{sw}^{z}>\). The equilibrium \(m_{\rm x}\) and the \(<J_{sw}^{z}>\) on the surface were calculated at different \(t_{\rm IP}\) [Fig. 3(a) and (b)]. When the \(t_{\rm IP}\) is 4 nm or smaller, the \(m_{\rm x}\) significantly increases, which is accompanied with the obvious enhancement of \(<J_{sw}^{z}>\), and it reaches \(1.7\times 10^{-10}\) J/m\({}^{2}\) at \(t_{\rm IP}\)= 4 nm. At a higher \(t_{\rm IP}\), the \(m_{\rm x}\) approaches 1, and \(<J_{sw}^{z}>\) gradually decreases, but remains higher than \(5\times 10^{-11}\) J/m\({}^{2}\). Since \[<J_{sw}^{z}>=A(<m_{x}\,\frac{\partial m_{y}}{\partial z}>-<m_{y}\,\frac{ \partial m_{x}}{\partial z}>)\quad,\quad\mbox{the}\quad\mbox{difference}\quad \mbox{between}\quad<m_{x}\,\frac{\partial m_{y}}{\partial z}>\quad\mbox{and}\] \[<m_{y}\,\frac{\partial m_{x}}{\partial z}>\,\,\mbox{determines the magnitude of }<J_{sw}^{z}>.\] When the IPA layer is sufficiently thick, the surficial magnetization stably aligns along the \(x\)-axis direction [Fig. 3(a)], and \(m_{\rm x}\) is significantly larger than \(m_{\rm y}\), giving rise to the domination of \(<m_{x}\,\frac{\partial m_{y}}{\partial z}>\,\,\mbox{over}\,<m_{y}\,\frac{ \partial m_{x}}{\partial z}>\). To confirm that the surficial spin current is generated by spin swapping instead of the excitation under external field, we restricted the spatial range of the alternating field from 0 to \(z\) [Fig. 3(c)]. When the field is acting on the PMA layer (\(z=0\sim 100\) nm), the surficial \(<J_{sw}^{z}>\)is approximately \(4.2\times 10^{-11}\) J/m\({}^{2}\). However, when the range of the field extended to cover a 1-nm IPA layer (\(z=0\sim 101\) nm), the \(<J_{sw}^{z}>\) has become almost the same as when the field covering the entire bilayer (\(z=0\sim 104\) nm). This indicates that the surficial spin current is transmitted from the magnetic oscillation near the PMA/IPA interface. To unravel the role of the exchange coupling at the PMA/IPA interface on the spin swapping, we calculated the surficial \(<J_{sw}^{z}>\)for different exchange constants (\(A_{\rm ex}\)) at the PMA/IPA interface [Fig. 3(d)]. Here the spatial range of excitation field is \(0\sim 100\) nm and \(0\sim 100.5\) nm. When this interfacial exchange coupling is absent (\(A_{\rm ex}=0\) J/m), the surficial \(<J_{sw}^{z}>\)was significantly depressed. At a small \(A_{\rm ex}\), the surficial \(<J_{sw}^{z}>\)obviously increases for both ranges of excitation field. 
This indicates that moderate interfacial exchange coupling allows the magnon flowing along an in-plane direction with an out-of-plane spin polarization to excite the secondary exchange magnon spin current flowing along the thickness direction with surficial in-plane spin polarization, thereby verifying the spin swapping behavior. However, a larger \(A_{\text{ex}}\) leads to suppress of the surficial \(<J_{sw}^{z}>\) since strong interfacial exchange coupling drags the magnetic moments in the IPA surficial region away from the out-of-plane direction, hindering the spatial variation of magnetization. \(<J_{sw}^{z}>\) also exhibits non-monotonous changing with the frequency of excitation field, and maximum \(<J_{sw}^{z}>\) appears at 40 GHz [Fig. 4(a)]. To obtain an understanding, we calculate the dispersion relationship curves of the bilayer for the exchange spin current flowing along the \(x\)-axis direction. It consists of two branches, a high-frequency PMA branch and a low-frequency IPA one [Fig. 4(b)]. It is noticed that the high-frequency branch exhibits a wide frequency range for a fixed wavelength, which is due to the exchange coupling at the PMA/IPA interface [17]. When the frequency is around the ferromagnetic resonance frequency (\(f_{\text{FMR}}\)) of the PMA layer, the \(<J_{sw}^{z}>\) reaches its maximum value. This indicates that the excitation of a magnetostatic spin wave in the PMA layer can excite strong spin swapping. The \(<J_{sw}^{z}>\) at a higher frequency is much smaller due to the difficulty for exciting an exchange wave. On the other hand, when the frequency is far below \(f_{\text{FMR}}\), the \(<J_{sw}^{z}>\) is also negligible even though this frequency is sufficiently high for the spin-wave transportation in the IPA layer. This further verifies that the \(<J_{sw}^{z}>\) results from the spin swapping by the spin wave in the PMA layer Figure 4: **(a) Average \(<J_{sw}^{z}>\) as a function of the excitation field frequency for a bilayer with \(t_{\text{IP}}\) = 4 nm. (b) Dispersion relationship curve for a bilayer with \(t_{\text{IP}}\) = 4 nm. (c) and (d) Dispersion relationship for the exchange magnon spin current flowing along the \(z\) direction in the PMA layer for a single PMA medium and a bilayer with \(t_{\text{IP}}\) = 4 nm, respectively.** instead of excitation under the external field. Similarly, by applying the same field for exciting spin wave propagation along the \(x\)-axis direction, we derived the dispersion relationship of the exchange spin current flowing along the \(z\)-axis direction inner the PMA layer. In the single PMA medium, negligible exchange magnon spin current along the \(z\)-axis can be detected [Fig. 4(c)]. Conversely, in the bilayer with a 4-nm thick IPA layer, a weak exchange magnon spin current along the \(z\)-axis was generated [Fig. 4(d)]. This vertical spin wave exhibits a large wavelength, and exhibits a large strength in the frequency range between 30 and 50 GHz. To experimentally characterize spin current, a heavy metal (HM) layer (like Pt) should be deposited above the magnetic layer, so that the spin current penetrating in the HM layer (\(\vec{J}_{s}^{z}\)) can be converted into a transversal electrical current (\(\vec{J}_{c}\)) through the ISHE effect and contributes to \(U_{\text{ISHE}}\). On the other hand, the magnetization precession on the surface and at the PMA/IAP interface can also pump a spin current, but this spin pumping effect is still quite controversial [18; 19; 20; 21; 22]. 
Therefore, in the subsequent analysis, we focus on the estimation of the \(U_{\text{ISHE}}\) resulting from spin swapping, and considered possible contribution from the spin pumping in the Supplementary Materials. We show that when the IPA layer reaches sufficient thickness, the spin current excited by spin pumping can be safely disregarded (S1 in the Supplementary Materials). We estimated the \(U_{\text{ISHE}}\) based on the equation [11]: \[U_{\text{ISHE}}=\frac{2e}{\hbar}\frac{\lambda}{d}\theta_{\text{SH}}\rho wm_{x} <J_{sw}^{z}>\tanh(\frac{d}{2\lambda})\,.\] Here \(e\) is the electron charge. We consider a Pt layer with a thickness \(d=10\) nm, a length \(w=3\) mm, and a width \(\rho=400\) nm, located 50 nm away from the excitation source. Spin diffusion length \(\lambda=7\) nm, and the spin Hall angle \(\theta_{\text{SH}}=0.08\)[9]. The \(U_{\text{ISHE}}\) indicates that for a surficial IPA layer with \(t_{\text{IP}}\)= 2, 4, and 8 nm, the \(U_{\text{ISHE}}\) is 1.04, 3.3, and 2.4 \(\upmu\)V, respectively, close to the value reported by Lin et al [9]. In summary, we proposed magnonic spin swapping in a PMA/IPA bilayer. This spin swapping generates an exchange magnon spin current due to a moderate exchange coupling at the PMA/IPA interface. The spin swapping can be enhanced when the orientation of surficial magnetization gradually transits from the \(z\) to \(x\) axis and the frequency of the excitation field is within the range of a magnetostatic wave for the PMA phase. Additionally, this spin current can result in a \(U_{\text{ISHE}}\) of several micro-volts. This work opens the door for realizing a magnonic spin swapping effect in a ferromagnetic medium with a mechanism that is distinct from that in canted AFM. ## Acknowledgment The authors acknowledge financial support from the National Key Research and Development Program of China (Grants No. 2022YFE0103300 and No. 2022YFA1402802) and the National Natural Science Foundation of China (Grants No. 51971098 and No. 12074057).
We propose the spin swapping effect for an exchange magnon spin current in a perpendicularly magnetized ferromagnetic medium with in-plane anisotropy on the surface. The excitation of a magnon current flowing along an in-plane direction with an out-of-plane spin polarization leads to the generation of a secondary exchange spin current propagating along the out-of-plane direction, characterized by an in-plane spin polarization. The resulting exchange magnon spin current can induce an inverse spin Hall voltage of several microvolts. The exchange coupling at the interface between regions with different magnetic anisotropies plays a crucial role in generating the spin swapping effect. This is in contrast to the recently reported spin swapping for an exchange spin current in a canted antiferromagnet due to the Dzyaloshinskii-Moriya interaction.
2309.10269
Using an Uncrewed Surface Vehicle to Create a Volumetric Model of Non-Navigable Rivers and Other Shallow Bodies of Water
Non-navigable rivers and retention ponds play important roles in buffering communities from flooding, yet emergency planners often have no data as to the volume of water that they can carry before flooding the surrounding area. This paper describes a practical approach for using an uncrewed marine surface vehicle (USV) to collect and merge bathymetric maps with digital surface maps of the banks of shallow bodies of water into a unified volumetric model. The below-waterline mesh is developed by applying the Poisson surface reconstruction algorithm to the sparse sonar depth readings of the underwater surface. Dense above-waterline meshes of the banks are created using commercial structure from motion (SfM) packages. Merging is challenging for many reasons, the most significant being gaps in sensor coverage, i.e., the USV cannot collect sonar depth data or visually see sandy beaches leading to a bank, thus the two meshes may not intersect. The approach is demonstrated on a Hydronalix EMILY USV with a Humminbird single beam echosounder and Teledyne FLIR camera at Lake ESTI at the Texas A&M Engineering Extension Service Disaster City complex.
Jayesh Tripathi, Robin Murphy
2023-09-19T02:46:17
http://arxiv.org/abs/2309.10269v1
Using an Uncrewed Surface Vehicle to Create a Volumetric Model of Non-Navigable Rivers and Other Shallow Bodies of Water ###### Abstract Non-navigable rivers and retention ponds play important roles in buffering communities from flooding, yet emergency planners often have no data as to the volume of water that they can carry before flooding the surrounding area. This paper describes a practical approach for using an uncrewed marine surface vehicle (USV) to collect and merge bathymetric maps with digital surface maps of the banks of shallow bodies of water into a unified volumetric model. The below-waterline mesh is developed by applying the Poisson surface reconstruction algorithm to the sparse sonar depth readings of the underwater surface. Dense above-waterline meshes of the banks are created using commercial structure from motion (SfM) packages. Merging is challenging for many reasons, the most significant being gaps in sensor coverage, i.e., the USV cannot collect sonar depth data or visually see sandy beaches leading to a bank, thus the two meshes may not intersect. The approach is demonstrated on a Hydronalix EMILY USV with a Humminbird single beam echosounder and Teledyne FLIR camera at Lake ESTI at the Texas A&M Engineering Extension Service Disaster City\({}^{\text{\textregistered}}\) complex. ## I Introduction Prediction of floods is hampered by the lack of a low-cost system to generate volumetric models of non-navigable rivers, retention ponds, and other small bodies of water. The lack of volumetric models means that emergency managers cannot estimate the carrying or holding capacity of the body of water or at what point rainfall would result in a flood. There has been considerable work in using small uncrewed marine surface vehicles (USV) with low-cost depth finders to conduct bathymetric mapping [1, 2, 3, 4, 5, 6, 7, 8, 9]. Unfortunately, a bathymetric depth profile is not sufficient because the elevation profile of the banks of the body of water is needed to complete a useful 3D volumetric model. One approach to acquiring the above-waterline elevation data is to use an uncrewed aerial system (UAS). However, a low-cost UAS using only a camera may not be able to generate a useful digital elevation map (DEM) of the banks due to the presence of trees, piers, etc. An additional disadvantage is that the addition of a UAS increases the workforce needed to field both a USV and a UAS for the same area. Our approach, similar to Mancini et al. [10, 11], is to use a USV with sonar for bathymetry plus a camera mounted on top to collect the above-waterline imagery for the creation of a DEM. This approach goes further than [10] by merging the two 3D meshes into a single 3D mesh which can then be imported into geographical information system packages. This method is low-cost, relies on open-source software whenever possible, and minimizes the workforce needed. It was demonstrated on a Hydronalix EMILY USV [12] with a Humminbird single beam echosounder and a Teledyne FLIR Duo Pro R camera for a small pond (see Fig. 1). Fig. 1: EMILY USV surveying Lake ESTI. The approach is expected to extend to other applications such as detecting changes over time in sediment deposits [13] and erosion in such bodies of water [14], and glacial lake outbursts [3]. ## II Prior Work Prior work in designing and using a USV to collect a complete volumetric model of shallow bodies of water appears extremely limited. McLaren Engineering [15] appears to use small USVs and UASs in generating digital reconstructions of marine sea beds and coastal structures.
However, the robots appear to use expensive multibeam sonar and Lidar rather than single-beam sonar, and there is no indication of whether the digital surveys are merged and, if so, how much of the process is automated. An Italian team, Mancini et al. [10], demonstrated a small USV with a SonarMite v3 Echo Sounder for bathymetric data collection, a GoPro Hero for above-waterline data, and a real-time kinematic (RTK) positioning system for GPS. The team experimented with a Kinect V2 depth camera, but river banks with trees generated noisy data [11]. The imagery from the GoPro was manually edited to remove the water and sky, and then Agisoft Photoscan was used to generate a DSM. The two data sets were not fused into a unified model. ## III Approach The approach taken by this project was to use a single USV configured with a depth finder, camera, and GPS with RTK, similar to [10]. As shown in Fig. 2, the depth finder produces a sparse bathymetric 3D mesh, while the camera produces a DSM from structure from motion which is converted to a 3D mesh. The conversion of both below- and above-waterline readings into a 3D mesh provides approximate surfaces which are more amenable to volumetric reasoning. The two meshes are then aligned into a single 3D mesh. Fig. 2: Overview on How to Generate 3D Mesh. ### _Generating the Below-Waterline 3D Mesh_ As shown in Fig. 3, the process of generating the below-waterline 3D mesh has three steps. In Step 1, the USV performs a complete coverage survey of the body of water [16], producing a .txt file of depth readings with the associated GPS latitude and longitude, essentially a sparse point cloud. The Humminbird data reports latitude and longitude in degrees (measured in WGS84) and depth in meters. The latitude and longitude are converted into the UTM Zone projection, which measures each coordinate in meters, to maintain uniformity. It is not necessary, but recommended, to visualize the depth readings as a depth map, see Fig. 4; that visualization serves to verify the correctness of the 3D map in Step 3. The custom script also estimates the centre of the convex hull of the depth map to simplify locating the mesh when working with Blender in Step 3. In Step 2, the Python script applies the Open3D Library Poisson Reconstruction algorithm [17] to create the 3D mesh. Note that the Open3D Ball Pivoting algorithm was also experimented with. The Ball Pivoting algorithm was discarded because it did not generate meshes that matched the depth map. In addition, the Ball Pivoting algorithm had more parameters to tune, while the Poisson was more straightforward for a partially automated workflow. Step 3 is to repair or remove unnecessary faces using software for manipulating 3D meshes, in this case Blender. Fig. 5(a) shows the 3D mesh of the point cloud in Fig. 4 using the Poisson Reconstruction algorithm. The orange borders indicate adjacent vertices that should not have been connected. If the 3D mesh has minor errors, those can be corrected manually, with the results shown in Fig. 5(b). However, if the mesh is notably erroneous, it may be worthwhile to make sure that the data has been properly pre-processed or to apply another reconstruction algorithm. For example, if the latitude and longitude are not converted from the WGS84 system to the UTM coordinate system, the resulting mesh will be a single beam-like structure with points at different depths looking as if they are super-imposed. Fig. 3: Steps in generating the Below-Waterline Mesh. Fig. 4: Visualization of the depth readings.
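The below-waterline pipeline described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the project's actual script: the file name and its comma-separated latitude, longitude, depth layout, the UTM zone 14N projection (EPSG:32614), and the Open3D normal-estimation and Poisson octree-depth parameters are all assumptions made for the sketch.

```python
import numpy as np
import open3d as o3d
from pyproj import Transformer

# Step 1: load sonar readings (lat, lon in WGS84 degrees; depth in meters)
# and project them to UTM so all three coordinates are in meters.
lat, lon, depth = np.loadtxt("sonar_readings.txt", delimiter=",", unpack=True)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32614", always_xy=True)  # zone assumed
easting, northing = to_utm.transform(lon, lat)
points = np.column_stack([easting, northing, -depth])  # below waterline -> negative z

# Step 2: Poisson surface reconstruction of the sparse point cloud.
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=8)  # octree depth is an assumption

# Export for manual repair in Blender (Step 3).
o3d.io.write_triangle_mesh("below_waterline.ply", mesh)
```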
### _Generating the Above-Waterline 3D Mesh_ Fig. 6 summarizes the steps in generating the above-waterline mesh. Step 1 is to partition the collected images. The above-waterline dataset contains densely populated imagery of the banks of the body of water. The images are partitioned into different banks, either left or right of a river or roughly linear stretches of a pond. Partitioning reduces the work time for Pix4DMapper significantly. It also removes the complexity of dealing with curved banks. This allows for the processing of each bank in a different instance of the software and then stitching all the above-waterline meshes in post-processing. Step 2 generates the mesh for each partitioned dataset. It should be noted that each image should be geotagged in order to reliably produce an accurate mesh. Although the approach does not depend on specific photogrammetric packages, Pix4DMapper was used. For the sake of replicability, the settings for Pix4DMapper are given here: Geolocation Accuracy should be Standard; the Processing Options Template should use 3D Models; and Processing should have the Initial Processing; Point Cloud and Mesh; and DSM, Orthomosaic, and Index checkboxes checked. If there is GPS data associated with the images, each image will be shown as a data point on the Map View screen of the software during the processing stage. After the initial processing in Step 2 has been completed, the software will produce a point cloud of the 3D mesh which is representative of the final mesh that is going to be created. An example is shown in Fig. 7, with a bank in front of a building. Note the artifacts from reflections in the water below the bank. Step 3 is to repair the mesh. As seen in Fig. 7, the photogrammetric package may recognize the water surface and cloud surface as points of interest, and create vertices correlated to the clouds and water surface. As with Step 3 in the below-waterline process, it is helpful to use the point clouds from Step 2 to identify the vertices that are not representative of the bank, and correspondingly remove them from the mesh. Fig. 5: Example of Step 3, repairing 3D meshes by removing unnecessary faces. Fig. 6: Steps in generating the Above-Waterline mesh. ### _Merging the Below- and Above-Waterline Meshes_ Once all the meshes have been individually repaired, the meshes can be aligned and merged using the Blender software package following Fig. 8. Blender produces a resulting point cloud and a mesh that can be exported in the .PLY or .OBJ file format. The merged point cloud can be used to check for errors. In order for Blender to merge the meshes, all meshes should be georeferenced in the same coordinate system. It is possible to drag-and-drop meshes to manually align them, but having the meshes geolocated automates this. If the above-waterline meshes are created with Pix4DMapper, those meshes will require intermediary steps to geolocate them in the WGS84 coordinate system. These intermediary steps consist of extracting the offset information, which captures the distance from the GPS location of the camera and is found in the Pix4DMapper project folder, adding the offset to each vertex of the corresponding above-waterline 3D mesh, and then converting the coordinates of each vertex from UTM to WGS84.
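The intermediary geolocation steps can be sketched as follows. This is a minimal illustration only: the mesh file names, the offset values, and the UTM zone 14N projection are placeholder assumptions, and the actual Pix4DMapper offset file parsing is omitted.

```python
import numpy as np
import open3d as o3d
from pyproj import Transformer

# Assumed offset (easting, northing, altitude) read from the Pix4DMapper
# project folder; the values here are placeholders.
offset = np.array([740000.0, 3390000.0, 0.0])

mesh = o3d.io.read_triangle_mesh("bank_mesh.obj")
vertices = np.asarray(mesh.vertices) + offset  # shift vertices to absolute UTM

# Reproject each vertex from UTM (zone assumed) back to WGS84 lon/lat,
# following the workflow described in the text.
to_wgs84 = Transformer.from_crs("EPSG:32614", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(vertices[:, 0], vertices[:, 1])
geo_vertices = np.column_stack([lon, lat, vertices[:, 2]])

mesh.vertices = o3d.utility.Vector3dVector(geo_vertices)
o3d.io.write_triangle_mesh("bank_mesh_geolocated.ply", mesh)
```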
## IV Demonstration: Lake ESTI In order to demonstrate this approach, data was collected using a Hydronalix EMILY USV with a Humminbird single beam echosounder and a Teledyne FLIR Duo Pro R camera at Lake ESTI, an \(18,580\,m^{2}\) naturally occurring pond at the Texas A&M Engineering Extension Service Disaster City\({}^{\text{\textregistered}}\) complex. Lake ESTI has a triangular shape with three well-defined banks, with one bank covered with trees, another side a flat field, and the third, a levee and a building. Prior work had established the accuracy of the bathymetric map generated by the echosounder for Lake ESTI, and structure from motion for above-waterline mapping is well known. Thus, the purpose of the field exercise was not to show accuracy but rather to illustrate the workflow and challenges in collecting the data from the USV. Fig. 7: Point cloud of a bank showing a building and reflections in the water. Fig. 8: Overview of How to Generate the Final Mesh. Fig. 9: Data collection at Lake ESTI. Fig. 9(a) shows the outline of the polygon bounding the survey area superimposed on a satellite image of Lake ESTI. The boundary was set to cover the pond without the USV running aground; note this left a gap between the bank and the boat. EMILY planned and autonomously executed a complete coverage path [16] collecting underwater depth data. The path required multiple batteries to be swapped out, resulting in the polygon being covered in three sections. At the completion of each section, EMILY then followed the bank side of the polygon and collected camera images for the above-waterline mesh. Unfortunately, it was discovered in post-processing that the camera and RTK readings used to geotag camera images were only intermittently recorded for the first section, though apparently the problem resolved itself after the battery change and it worked correctly for the remaining two sections. This meant that the above-waterline mesh could not be successfully constructed for the first section. The echosounder data was still usable as it had a built-in GPS system, so the bathymetric map was complete for all three sections of the polygon. Fig. 9(b) indicates the two banks that were correctly captured. Fig. 10 shows two rotations of the final combination of the three meshes (below-waterline, bank 1, bank 2). Notice the gaps between the below- and above-waterline meshes in Fig. 10(b). ## V Conclusions This project confirms that volumetric models of non-navigable rivers and shallow bodies of water can be created from sonar and camera data captured by a small uncrewed surface vehicle. The below-waterline bathymetry and the above-waterline point cloud can be converted to georeferenced meshes and merged using the popular Blender software. It should be noted that successfully aligning meshes without manually translating or rotating meshes to compensate for obvious errors requires the GPS errors between sensors to be minimized. Therefore, it is recommended that both the echosounder and camera system use a common RTK system to guarantee congruence. The disadvantage of the approach taken by this paper is that it may require manual intervention to complete the surface for volumetric analyses during the repair steps. An alternative method is to turn the mesh into voxels and artificially fill up the holes by increasing the size of the voxels. Once the voxels are large enough, the mesh can be wrapped with an outer layer, essentially a surface smoothing.
The expectation is that the mesh that will be produced will have the least amount of distortion as compared to the original mesh, and the holes will be covered up by the outer layer of the mesh. Ongoing work is targeting deployments in non-navigable rivers. In those bodies of water, it should be easier to collect high-quality above-waterline DEM because the banks are closer together than in a lake or pond. However, it is more challenging to specify the polygon to survey because the rivers may have changed course and they usually contain navigational hazards such as fallen tree limbs, debris, and sandbars. Work is experimenting with using drones to autonomously identify open water where the USV can navigate without running aground and generate the polygon. ## Acknowledgments The authors thank Clint Arnett at Disaster City and Valeria Heredia, Thomas Manzini, Itzel Rodriguez, Yashas Salanki-matt, and Trey Smith for their assistance.
Non-navigable rivers and retention ponds play an important role in buffering communities from flooding, yet emergency planners often have no data on the volume of water they can carry and thus cannot predict when the surrounding area will flood. This paper presents a practical approach for using an uncrewed marine surface vehicle (USV) to build a unified volumetric model that merges bathymetric maps with digital surface maps of the banks of shallow bodies of water. The below-waterline surface is developed by applying the Poisson surface reconstruction algorithm to sparse sonar depth readings. Dense above-waterline meshes of the banks are created with commercial structure from motion (SfM) packages. Merging the meshes is challenging for many reasons, the most significant being gaps in sensor coverage: the USV cannot collect sonar depth data or visually see the shoreline, so the two meshes may not intersect.
2309.08426
Classical shadows meet quantum optimal mass transport
Classical shadows constitute a protocol to estimate the expectation values of a collection of M observables acting on O(1) qubits of an unknown n-qubit state with a number of measurements that is independent of n and that grows only logarithmically with M. We propose a local variant of the quantum Wasserstein distance of order 1 of [De Palma et al., IEEE Trans. Inf. Theory 67, 6627 (2021)] and prove that the classical shadow obtained measuring O(log n) copies of the state to be learned constitutes an accurate estimate with respect to the proposed distance. We apply the results to quantum generative adversarial networks, showing that quantum access to the state to be learned can be useful only when some prior information on such state is available.
Giacomo De Palma, Tristan Klein, Davide Pastorello
2023-09-15T14:29:57
http://arxiv.org/abs/2309.08426v1
# Classical shadows meet quantum optimal mass transport ###### Abstract Classical shadows constitute a protocol to estimate the expectation values of a collection of \(M\) observables acting on \(O(1)\) qubits of an unknown \(n\)-qubit state with a number of measurements that is independent of \(n\) and that grows only logarithmically with \(M\). We propose a local variant of the quantum Wasserstein distance of order 1 of [De Palma _et al._, IEEE Trans. Inf. Theory 67, 6627 (2021)] and prove that the classical shadow obtained measuring \(O(\log n)\) copies of the state to be learned constitutes an accurate estimate with respect to the proposed distance. We apply the results to quantum generative adversarial networks, showing that quantum access to the state to be learned can be useful only when some prior information on such state is available. ## 1 Introduction Quantum tomography consists in finding a classical estimate of an unknown quantum state by measuring a given number of independent copies of the state and plays a key role in quantum information science [1]. The number of parameters that are required to describe a generic state of \(n\) qubits grows exponentially with \(n\). Therefore, if no information on the state is known a priori, any estimate that is accurate with respect to the trace distance or the fidelity requires an exponential number of copies of the state and becomes quickly unfeasible even for moderately large \(n\)[2, 3, 4]. However, the situation becomes radically different if we weaken the metric employed to measure the quality of the estimate. Let us consider the scenario where we are only interested in the expectation values of some observables. Then, the shadow tomography protocol can estimate such expectation values with a number of copies of the state that scales linearly with the number of qubits and polylogarithmically with the number of observables [5]. This result can be further improved if the observables are tensor products of \(O(1)\) Pauli matrices. Indeed in this case, by measuring a number of copies of the state that scales logarithmically with the number of observables and that is independent on the number of qubits, the classical shadow protocol can generate a classical estimate of the state (the classical shadow) from which the expectation values of the observables can be estimated [6, 7]. The striking property of this protocol is that the classical shadow does not depend on the observables to be estimated, and the protocol works also if such observables are revealed only after the measurements have been performed. A further improvement of the classical shadow protocol allows to estimate the expectation value of the tensor products of any number of Pauli matrices [8]. In this paper, we propose a distance on the set of the states of \(n\) qubits that metrizes the convergence of the classical shadow to the state to be estimated, _i.e._, such that the classical shadow obtained by measuring \(O\left(\frac{1}{\epsilon^{2}}\log\frac{1}{\epsilon}\log n\right)\) copies of the state to be estimated achieves with high probability distance \(\epsilon\) from the state. This distance, which we call the local quantum \(W_{1}\) distance, is built upon the quantum theory of optimal mass transport and is a variant of the quantum Wasserstein distance of order 1 (or quantum \(W_{1}\) distance) for qubits proposed in [9] (several other approaches to quantum optimal mass transport have been proposed, the most relevant are summarized in Appendix A). 
The local quantum \(W_{1}\) distance is induced by the norm dual to the local quantum norm, _i.e._, the local quantum \(W_{1}\) distance between the states \(\rho\) and \(\sigma\) is the maximum of the difference between the expectation values on \(\rho\) and on \(\sigma\) of an observable with local quantum norm at most one. The local quantum norm is inspired by the norm employed in the study of quantum spin systems on infinite lattices to turn the space of the local interactions into a Banach space [10, 11], and is built to have low values on observables which are the sum of operators acting on few qubits. Like the quantum \(W_{1}\) distance of [9], the local quantum \(W_{1}\) distance coincides with the trace distance for \(n=1\) and is an extensive quantity, _i.e._, it is superadditive with respect to the composition of quantum systems and additive for product states. In particular, the local quantum \(W_{1}\) distance recovers the Hamming distance for the states of the computational basis. The local quantum \(W_{1}\) distance is always upper bounded by the quantum \(W_{1}\) distance. However, the local norm imposes a much stronger constraint on the observables than the quantum Lipschitz norm dual to the quantum \(W_{1}\) distance, and the local quantum \(W_{1}\) distance between states that are locally indistinguishable can be exponentially smaller than the quantum \(W_{1}\) distance. Our results are complementary to the results of [12, 13], which provide a protocol to estimate, with respect to the quantum \(W_{1}\) distance of [9], any quantum state satisfying a transportation-cost inequality with a number of copies that grows polylogarithmically with the number of qubits. Quantum transportation-cost inequalities provide an upper bound to the quantum \(W_{1}\) distance in terms of the quantum relative entropy, and have been proved for Gibbs states of local Hamiltonians with a sufficiently strong decay of correlations [13, 14]. This paper removes any assumption on the state to be learned by weakening the metric employed to measure the quality of the estimate. However, we prove that when restricted to a set of Gibbs states of local Hamiltonians that can be learned efficiently in the quantum \(W_{1}\) distance, the local quantum \(W_{1}\) distance proposed in this paper is equivalent to the quantum \(W_{1}\) distance. We apply our results to the Quantum Wasserstein Generative Adversarial Network (QWGAN) proposed in [15], which provides an algorithm to train a variational quantum circuit to learn an unknown quantum state. We show that, if no a priori information on the state is available, the QWGAN can be equivalently trained on a classical shadow of the state and does not gain any advantage from having quantum access to the state. The paper is structured as follows. In section 2 we define the local norm and the local quantum \(W_{1}\) distance and prove some of their properties. In section 3, we present the classical shadow protocol of [6] and its improvement of [8]. In section 4, we determine the convergence rate of the classical shadow protocol with respect to the local quantum \(W_{1}\) distance. In section 5, we prove the equivalence between the quantum \(W_{1}\) distance and the local quantum \(W_{1}\) distance for the Gibbs states of local Hamiltonians satisfying a transportation-cost inequality. In section 6, we discuss the application to QWGANs. We conclude in section 7.
Appendix A summarizes the main approaches to quantum optimal mass transport, Appendix B presents some related works on classical shadows, and Appendix C contains the proofs of the auxiliary lemmas. ## 2 The local quantum \(W_{1}\) distance In this section, we recall the definition of the quantum Wasserstein distance of order \(1\), we introduce the local quantum \(W_{1}\) distance and prove its basic properties. ### Notation Let us start by setting the notation for the paper: **Definition 2.1**.: For any \(k\in\mathbb{N}\) we define \[[k]=\left\{1,\,\ldots,\,k\right\}. \tag{2.1}\] We consider a quantum system made by \(n\) qubits, which we label with the integers from \(1\) to \(n\). Each qubit is associated with the Hilbert space \(\mathbb{C}^{2}\), such that the Hilbert space of the system is \(\left(\mathbb{C}^{2}\right)^{\otimes n}\). **Definition 2.2**.: For any subset of the qubits \(\Lambda\subseteq[n]\), let \[\mathcal{H}_{\Lambda}=\bigotimes_{x\in\Lambda}\mathbb{C}^{2} \tag{2.2}\] be the Hilbert space associated with the qubits in \(\Lambda\), let \(\mathcal{O}_{\Lambda}\) be the set of the self-adjoint linear operators acting on \(\mathcal{H}_{\Lambda}\), let \(\mathcal{O}_{\Lambda}^{T}\) be the set of the traceless operators in \(\mathcal{O}_{\Lambda}\), and let \(\mathcal{S}_{\Lambda}\) be the set of the quantum states acting on \(\mathcal{H}_{\Lambda}\). ### The quantum \(W_{1}\) distance and the quantum Lipschitz constant In this subsection, we briefly present the quantum \(W_{1}\) distance and the quantum Lipschitz constant of [9]. The quantum \(W_{1}\) distance is based on the notion of neighboring quantum states. Two states of \(n\) qubits are neighboring if they coincide after discarding a suitable qubit. We define the quantum \(W_{1}\) norm \(\|\cdot\|_{W_{1}}\) as the maximum norm that assigns distance at most one to any couple of neighboring states. The quantum \(W_{1}\) distance is then the distance induced by the quantum \(W_{1}\) norm. More formally, we have the following: **Definition 2.3** (Quantum \(W_{1}\) norm).: For any \(\Delta\in\mathcal{O}_{[n]}^{T}\) we define \[\|\Delta\|_{W_{1}}=\min\left\{\frac{1}{2}\sum_{x\in[n]}\left\|\Delta^{(x)} \right\|_{1}:\Delta^{(x)}\in\mathcal{O}_{[n]}^{T}\,,\;\mathrm{Tr}_{x}\Delta^{ (x)}=0\;\forall\,x\in[n]\,,\;\Delta=\sum_{x\in[n]}\Delta^{(x)}\right\}\,. \tag{2.3}\] The quantum \(W_{1}\) distance can be thought as a quantum version of the Hamming distance, since it exactly recovers the Hamming distance for the states of the computational basis. We define the dependence of the observable \(H\) on the qubit \(x\) as twice the minimum operator norm of the difference between \(H\) and any observable that does not act on \(x\): **Definition 2.4** ([16]).: For any \(x\in[n]\) and any \(H\in\mathcal{O}_{[n]}\) we define \[\partial_{x}H=2\min_{K\in\mathcal{O}_{x^{c}}}\left\|H-K\right\|_{\infty}\,, \tag{2.4}\] where \(x^{c}=[n]\setminus\{x\}\) We then define the quantum Lipschitz constant of the observable \(H\) as the maximum dependence of \(H\) on a qubit: **Definition 2.5** (Quantum Lipschitz constant).: For any \(H\in\mathcal{O}_{[n]}\) we define \[\|H\|_{L}=\max_{x\in[n]}\partial_{x}H\,. \tag{2.5}\] The quantum \(W_{1}\) norm on \(\mathcal{O}_{[n]}^{T}\) and the quantum Lipschitz constant on \(\mathcal{O}_{[n]}\) are mutually dual: **Proposition 2.1** ([9]).: _For any \(\Delta\in\mathcal{O}_{[n]}^{T}\) we have_ \[\|\Delta\|_{W_{1}}=\max_{H\in\mathcal{O}_{[n]}:\|H\|_{L}\leq 1}\mathrm{Tr} \left[\Delta\,H\right]\,. 
\tag{2.6}\] Despite the fact that the Lipschitz constant seems to constrain the maximization in (2.6) to local observables, the quantum \(W_{1}\) distance between states that are locally indistinguishable can be large. This is a consequence of the continuity of the von Neumann entropy \(S(\rho)=-\mathrm{Tr}\left[\rho\ln\rho\right]\) with respect to the quantum \(W_{1}\) distance: **Theorem 2.1** ([16]).: _For any two states of \(n\) qubits \(\rho,\,\sigma\in\mathcal{S}_{[n]}\),_ \[\frac{|S(\rho)-S(\sigma)|}{n}\leq h_{2}\left(\frac{\left\|\rho-\sigma\right\|_{ W_{1}}}{n}\right)+\frac{\left\|\rho-\sigma\right\|_{W_{1}}}{n}\ln 3\,, \tag{2.7}\] _where \(h_{2}\) is the binary entropy function_ \[h_{2}(x)=-x\ln x-(1-x)\ln\left(1-x\right)\,,\qquad 0\leq x\leq 1\,. \tag{2.8}\] Indeed, on the one hand Theorem 2.1 implies that any pure state is far from the maximally mixed state: **Proposition 2.2**.: _For any \(n\in\mathbb{N}\) and any pure state of \(n\) qubits \(\rho\in\mathcal{S}_{[n]}\) we have_ \[\left\|\rho-\frac{\mathbb{I}}{2^{n}}\right\|_{W_{1}}>0.189\,n\,. \tag{2.9}\] Proof.: Setting \[\left\|\rho-\frac{\mathbb{I}}{2^{n}}\right\|_{W_{1}}=n\,w\,,\qquad 0\leq w \leq 1\,, \tag{2.10}\] Theorem 2.1 implies \[h_{2}(w)+w\ln 3\geq\ln 2\,, \tag{2.11}\] from which \(w>0.189\). On the other hand, pure states that are locally indistinguishable from the maximally mixed state do exist: **Proposition 2.3** ([17]).: _For any \(n\) sufficiently large, there exists a pure state of \(n\) qubits \(\rho\in\mathcal{S}_{[n]}\) such that for any region \(\Lambda\subset[n]\) of size \(\left|\Lambda\right|\leq\left\lfloor 0.189\,n\right\rfloor\), the marginal of \(\rho\) on \(\Lambda\) is maximally mixed._ The local quantum \(W_{1}\) distance will capture the property of local distinguishability, and will take an exponentially small value for any two locally indistinguishable states. ### The local quantum norm and the local quantum \(W_{1}\) distance In analogy to Proposition 2.1, we wish to define the local quantum \(W_{1}\) norm as the dual of a local quantum norm for observables. Since we want the local quantum \(W_{1}\) distance to capture the property of local distinguishability, we require the local quantum \(W_{1}\) norm of a sum of operators acting on few qubits to be small. We consider all the decompositions of an observable \(H\) as a sum of local operators. In analogy to Definition 2.4, we define the dependence of any such decomposition on a qubit \(x\) as the sum of the operator norm of each local operator that acts on \(x\) weighted by a penalty \(c_{k}\) that grows with the locality \(k\) of the operator (_i.e._, \(k\) is the number of qubits on which the operator acts). We then define the local norm of such decomposition as the maximum dependence on a qubit, and the local norm of \(H\) as the minimum local norm of all its possible decompositions: **Definition 2.6** (Local quantum norm).: Let \(1=c_{1}\leq\ldots\leq c_{n}\,.\) For any \(H\in\mathcal{O}_{[n]}\) we define \[\|H\|_{\mathrm{loc}}=2\min\left\{\max_{x\in[n]}\sum_{\Lambda\ni x}c_{|\Lambda|} \left\|H_{\Lambda}\right\|_{\infty}:H=\sum_{\Lambda\subseteq[n]}H_{\Lambda}\,, \;H_{\Lambda}\in\mathcal{O}_{\Lambda}\right\}\,. \tag{2.12}\] _Remark 2.1_.: The local quantum norm depends on the choice of the penalties \(\{c_{k}\}_{k\in\mathbb{N}}\). Such norm is analog to the norm defined in [10, 11] in the context of quantum spin systems on infinite lattices to turn the set of interactions into a Banach space. 
[10, 11] define an interaction through its decomposition as a sum of local operators, so their norm does not involve the minimization over the decompositions. [10, 11] choose the penalties to grow exponentially with the size and eventually with the diameter of the region over which the operator acts, but at this stage we prefer to keep the freedom in the choice of the penalties. We can now define the local quantum \(W_{1}\) norm as the dual of the local quantum norm: **Definition 2.7** (Local quantum \(W_{1}\) norm).: We define the local quantum \(W_{1}\) norm as the norm on \(\mathcal{O}_{[n]}^{T}\) that is dual to the local norm on \(\mathcal{O}_{[n]}\): For any \(\Delta\in\mathcal{O}_{[n]}^{T}\), \[\|\Delta\|_{W_{1}\mathrm{loc}}=\max\left\{\operatorname{Tr}\left[\Delta\,H \right]:H\in\mathcal{O}_{[n]}\,,\|H\|_{\mathrm{loc}}\leq 1\right\}\,. \tag{2.13}\] The local quantum \(W_{1}\) norm can be computed with a linear program. (2.13) constitutes the dual program, while the primal program is provided by the following: **Proposition 2.4**.: _For any \(\Delta\in\mathcal{O}_{[n]}^{T}\) we have_ \[\|\Delta\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x\in[n]}a_{x}:\frac{\left\| \operatorname{Tr}_{\Lambda^{c}}\Delta\right\|_{1}}{2\,c_{|\Lambda|}}\leq\sum_{ x\in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[n]\right\}\,. \tag{2.14}\] Proof.: We have \[\|\Delta\|_{W_{1}\mathrm{loc}} =\max\left\{\sum_{\Lambda\subseteq[n]}\operatorname{Tr}\left[ \Delta\,H_{\Lambda}\right]:H_{\Lambda}\in\mathcal{O}_{\Lambda}\,,\;2\sum_{ \Lambda\ni x}c_{|\Lambda|}\left\|H_{\Lambda}\right\|_{\infty}\leq 1\; \forall\,x\in[n]\right\}\] \[=\max\left\{\sum_{\Lambda\subseteq[n]}\left\|\operatorname{Tr}_{ \Lambda^{c}}\Delta\right\|_{1}t_{\Lambda}:t_{\Lambda}\geq 0\;\forall\, \Lambda\subseteq[n]\,,\;2\sum_{\Lambda\ni x}c_{|\Lambda|}\,t_{\Lambda}\leq 1 \;\forall\,x\in[n]\right\}\,. \tag{2.15}\] The last maximization in (2.15) is the dual program of the primal linear program (2.14). The claim follows. _Remark 2.2_.: For \(n=1\), the local quantum \(W_{1}\) norm coincides with one half times the trace norm. ### Properties of the local quantum \(W_{1}\) distance In this subsection we prove some basic properties of the local quantum \(W_{1}\) distance. * The local quantum \(W_{1}\) distance always lies between the trace distance divided by the maximum penalty and the quantum \(W_{1}\) distance: **Proposition 2.5**.: _We have_ \[\|\cdot\|_{L}\leq\|\cdot\|_{\rm loc}\leq 2\,c_{n}\,\|\cdot\|_{\infty}\,,\qquad \frac{\|\cdot\|_{1}}{2\,c_{n}}\leq\|\cdot\|_{W_{1}{\rm loc}}\leq\|\cdot\|_{W_{ 1}}\,.\] (2.16) Proof.: Let \(H\in\mathcal{O}_{[n]}\). Choosing \(H_{[n]}=H\) in (2.12) we get \[\|H\|_{\rm loc}\leq 2\,c_{n}\,\|H\|_{\infty}\.\] (2.17) Let \[H=\sum_{\Lambda\subseteq[n]}H_{\Lambda}\,,\qquad H_{\Lambda}\in\mathcal{O}_ {\Lambda}\,.\] (2.18) We have for any \(x\in[n]\) \[\partial_{x}H\leq 2\left\|\sum_{\Lambda\ni x}H_{\Lambda}\right\|_{\infty} \leq 2\sum_{\Lambda\ni x}\left\|H_{\Lambda}\right\|_{\infty}\leq 2\sum_{ \Lambda\ni x}c_{|\Lambda|}\left\|H_{\Lambda}\right\|_{\infty}\,,\] (2.19) therefore \[\|H\|_{L}=\max_{x\in[n]}\partial_{x}H\leq 2\max_{x\in[n]}\sum_{\Lambda\ni x}c_{ |\Lambda|}\left\|H_{\Lambda}\right\|_{\infty}\] (2.20) and \[\|H\|_{L}\leq\|H\|_{\rm loc}\,.\] (2.21) The inequality \[\frac{\|\cdot\|_{1}}{2\,c_{n}}\leq\|\cdot\|_{W_{1}{\rm loc}}\leq\|\cdot\|_{W_{ 1}}\] (2.22) follows by duality. 
* The local quantum \(W_{1}\) norm can be upper bounded by the trace norm of the partial traces: **Proposition 2.6**.: _For any \(\Delta\in\mathcal{O}_{[n]}^{T}\) we have_ \[\|\Delta\|_{W_{1}{\rm loc}}\leq\sum_{x\in[n]}\max_{\Lambda\ni x}\frac{\left\| {\rm Tr}_{\Lambda^{c}}\Delta\right\|_{1}}{2\left|\Lambda\right|c_{|\Lambda|} }\leq n\max_{\Lambda\subseteq[n]}\frac{\left\|{\rm Tr}_{\Lambda^{c}}\Delta \right\|_{1}}{2\left|\Lambda\right|c_{|\Lambda|}}\,.\] (2.23) Proof.: The claim follows by setting in (2.14) \[a_{x}=\max_{\Lambda\ni x}\frac{\left\|\operatorname{Tr}_{\Lambda^{c}}\!\Delta \right\|_{1}}{2\left|\Lambda\right|c_{\left|\Lambda\right|}}\qquad\forall\,x \in[n]\,.\] (2.24) * As promised, the quantum \(W_{1}\) distance between any two locally indistinguishable states is suppressed by the penalties: **Corollary 2.1**.: _Let \(\rho,\,\sigma\in\mathcal{S}_{[n]}\) such that \(\rho_{\Lambda}=\sigma_{\Lambda}\) for any region \(\Lambda\subset[n]\) with size \(\left|\Lambda\right|<k\). Then,_ \[\left\|\rho-\sigma\right\|_{W_{1}\mathrm{loc}}\leq\frac{n}{k\,c_{k}}\,.\] (2.25) Proof.: We have from Proposition 2.6 \[\left\|\rho-\sigma\right\|_{W_{1}\mathrm{loc}}\leq n\max_{\Lambda\subseteq[n] }\frac{\left\|\rho_{\Lambda}-\sigma_{\Lambda}\right\|_{1}}{2\left|\Lambda \right|c_{\left|\Lambda\right|}}\leq n\max_{\Lambda\subseteq[n]:\left| \Lambda\right|\geq k}\frac{1}{\left|\Lambda\right|c_{\left|\Lambda\right|}}= \frac{n}{k\,c_{k}}\,.\] (2.26) The claim follows. * As the quantum \(W_{1}\) distance, also the local quantum \(W_{1}\) distance is superadditive with respect to the composition of quantum systems and additive with respect to the tensor product: **Proposition 2.7**.: _For any \(\Delta\in\mathcal{O}_{m+n}^{T}\) we have, for any region \(\Lambda\) of size \(m\),_ \[\|\Delta\|_{W_{1}\mathrm{loc}}\geq\|\mathrm{Tr}_{\Lambda^{c}}\Delta\|_{W_{1} \mathrm{loc}}+\|\mathrm{Tr}_{\Lambda}\Delta\|_{W_{1}\mathrm{loc}},\] (2.27) _and for any \(\rho,\sigma\in\mathcal{S}_{[m+n]}\) we have_ \[\|\rho-\sigma\|_{W_{1}\mathrm{loc}}\geq\|\rho_{\Lambda}-\sigma_{\Lambda}\|_{W _{1}\mathrm{loc}}+\|\rho_{\Lambda^{c}}-\sigma_{\Lambda^{c}}\|_{W_{1}\mathrm{ loc}}.\] (2.28) _Moreover, equality is achieved when \(\rho=\rho_{1}\otimes\rho_{2}\), \(\sigma=\sigma_{1}\otimes\sigma_{2}\), \(\rho_{1},\sigma_{1}\in\mathcal{S}_{[m]},\rho_{2},\sigma_{2}\in\mathcal{S}_{[n]}\)._ Proof.: Without loss of generality, we can consider the case where \(\Lambda\) is made by the first \(m\) qubits. Using (2.14), we can then write \[\|\mathrm{Tr}_{m+1,\ldots,m+n}\Delta\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x \in[m]}a_{x}:\frac{\left\|\mathrm{Tr}_{\Lambda^{c}}\Delta\right\|_{1}}{2\,c_{ \left|\Lambda\right|}}\leq\sum_{x\in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[ m]\right\} \tag{2.29}\] and \[\|\mathrm{Tr}_{1,\ldots,m}\Delta\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x=m+1 }^{m+n}a_{x}:\frac{\left\|\mathrm{Tr}_{\Lambda^{c}}\Delta\right\|_{1}}{2\,c_{ \left|\Lambda\right|}}\leq\sum_{x\in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[ m+1,\ldots,m+n]\right\}. 
\tag{2.30}\] Therefore, \[\|\mathrm{Tr}_{1,\ldots,m}\Delta\|_{W_{1}\mathrm{loc}}+\|\mathrm{Tr}_{ m+1,\ldots,m+n}\Delta\|_{W_{1}\mathrm{loc}}\] \[=\min\left\{\sum_{x\in[m+n]}a_{x}:\frac{\left\|\mathrm{Tr}_{\Lambda^ {c}}\Delta\right\|_{1}}{2\,c_{|\Lambda|}}\leq\sum_{x\in\Lambda}a_{x}\;\forall \,\Lambda\subseteq[n]\;\mathrm{or}\;\Lambda\subseteq[m+1,\ldots,m+n]\right\}\] \[\leq\min\left\{\sum_{x\in[m+n]}a_{x}:\frac{\left\|\mathrm{Tr}_{ \Lambda^{c}}\Delta\right\|_{1}}{2\,c_{|\Lambda|}}\leq\sum_{x\in\Lambda}a_{x} \;\forall\,\Lambda\subseteq[m+n]\right\}\] \[\leq\|\Delta\|_{W_{1}\mathrm{loc}},\] (2.31) where the first inequality is a result of the first linear program being the same as in (2.14), with less constraints, which proves the inequality part of the claim. For the equality case, we just showed \[\|\rho_{1}\otimes\rho_{2}-\sigma_{1}\otimes\sigma_{2}\|_{W_{1}\mathrm{loc}} \geq\|\rho_{1}-\sigma_{1}\|_{W_{1}\mathrm{loc}}+\|\rho_{2}-\sigma_{2}\|_{W_{ 1}\mathrm{loc}}\,.\] (2.32) We use Lemma C.1 to prove the other inequality as such \[\|\rho_{1}\otimes\rho_{2}-\sigma_{1}\otimes\sigma_{2}\|_{W_{1} \mathrm{loc}} \leq\|(\rho_{1}-\sigma_{1})\otimes\rho_{2}\|_{W_{1}\mathrm{loc}}+ \|\sigma_{1}\otimes(\rho_{2}-\sigma_{2})\|_{W_{1}\mathrm{loc}}\] \[\leq\|\rho_{1}-\sigma_{1}\|_{W_{1}\mathrm{loc}}+\|\rho_{2}-\sigma _{2}\|_{W_{1}\mathrm{loc}}\,,\] (2.33) which concludes the proof. * In particular, the local quantum \(W_{1}\) distance recovers the Hamming distance for the states of the computational basis: **Corollary 2.2**.: _For any \(x,\,y\in\{0,1\}^{n}\),_ \[\||x\rangle\langle x|-|y\rangle\langle y|\|_{W_{1}\mathrm{loc}}=h(x,y)=|\{i\in [n]:x_{i}\neq y_{i}\}|\.\] (2.34) Proof.: We have from Proposition 2.7 \[\||x\rangle\langle x|-|y\rangle\langle y|\|_{W_{1}\mathrm{loc}} =\sum_{i\in[n]}\||x_{i}\rangle\langle x_{i}|-|y_{i}\rangle \langle y_{i}|\|_{W_{1}\mathrm{loc}}=\frac{1}{2}\sum_{i\in[n]}\||x_{i}\rangle \langle x_{i}|-|y_{i}\rangle\langle y_{i}|\|_{1}\] \[=h(x,y)\,.\] (2.35) * The local quantum \(W_{1}\) norm is contracting with respect to the action of single-qubit quantum channels: **Proposition 2.8**.: _For any \(\Delta\in\mathcal{O}_{n}^{T}\), \(\Phi\) a quantum channel acting on a single qubit, we have_ \[\|\Phi(\Delta)\|_{W_{1}\mathrm{loc}}\leq\|\Delta\|_{W_{1}\mathrm{loc}}.\] (2.36) Proof.: Using (2.14), we can write \[\|\Phi(\Delta)\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x\in[n]}a_{x}:\frac{\left\| \mathrm{Tr}_{\Lambda^{c}}\Phi(\Delta)\right\|_{1}}{2\,c_{|\Lambda|}}\leq\sum_{x \in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[n]\right\}. \tag{2.37}\] Without loss of generality, \(\Phi\) acts on qubit \(i\), and there are two cases: 1. If \(i\in\Lambda^{c}\), \(\left\|\mathrm{Tr}_{\Lambda^{c}}\Phi(\Delta)\right\|_{1}=\left\|\mathrm{Tr}_{ \Lambda^{c}}\Delta\right\|_{1}\). 2. Otherwise, \(\left\|\mathrm{Tr}_{\Lambda^{c}}\Phi(\Delta)\right\|_{1}=\left\|\Phi(\mathrm{ Tr}_{\Lambda^{c}}\Delta)\right\|_{1}\leq\left\|\mathrm{Tr}_{\Lambda^{c}} \Delta\right\|_{1}\), since \(\Phi\) is a trace-preserving operation. Therefore, \[\|\Phi(\Delta)\|_{W_{1}\mathrm{loc}}\leq\min\left\{\sum_{x\in[n]}a_{x}:\frac{ \left\|\mathrm{Tr}_{\Lambda^{c}}\Delta\right\|_{1}}{2\,c_{|\Lambda|}}\leq \sum_{x\in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[n]\right\}=\|\Delta\|_{W_{ 1}\mathrm{loc}}. 
\tag{2.38}\] ## 3 Classical shadows The _classical shadow_ is a notion introduced to formulate tomographic protocols for extracting information from unknown quantum states with very few measurements (see Appendix B for further details and related works). Let \(\mathcal{U}\) be an ensemble of unitary operators over a \(n\)-qubit Hilbert space (i.e. any \(U\in\mathcal{U}\) has a statistical weight attached). **Definition 3.1**.: \(\mathcal{U}\) is said to be **tomographically complete** if for each \(\rho\neq\sigma\) there are \(U\in\mathcal{U}\) and \(|b\rangle\) element of the computational basis \(\{|b\rangle\,:\,b\in\{0,1\}^{n}\}\) such that: \[\langle b|U\rho U^{\dagger}|b\rangle\neq\langle b|U\sigma U^{\dagger}|b\rangle. \tag{3.1}\] This definition requires that if two states are different, there is always an evolution \(U\in\mathcal{U}\) such that the evolved states can be distinguished by a measurement in the computational basis. Given a tomographically complete ensemble \(\mathcal{U}\) and an unknown state \(\rho\) acting on \((\mathbb{C}^{2})^{\otimes n}\), consider the following elementary protocol [6]: 1. Sample \(U\in\mathcal{U}\) and evolve the state \(\rho\); 2. Measure the state \(U\rho U^{\dagger}\) in the computational basis. 3. Obtained the outcome \(\hat{b}\in\{0,1\}^{n}\), apply the inverse evolution to \(|\hat{b}\rangle\) obtaining \(U^{\dagger}|\hat{b}\rangle\langle\hat{b}|U\) which can be saved as classical information. Repeating the three steps above, one can consider the density matrix given by the expectation over \(\mathcal{U}\) and over the possible outcomes: \[\mathbb{E}_{U,\hat{b}}[U^{\dagger}|\hat{b}\rangle\langle\hat{b}|U]=\mathbb{E}_{U \sim\mathcal{U}}\left[\sum_{b\in\{0,1\}^{n}}\langle b|U\rho U^{\dagger}|b \rangle\ U^{\dagger}|b\rangle\langle b|U\right]=:\mathcal{M}(\rho), \tag{3.2}\] The linear map \(\mathcal{M}\) is completely positive and trace preserving, therefore it is a quantum channel. Moreover, the requirement of tomographic completeness implies that \(\mathcal{M}\) is invertible, though the inverse is in general not completely positive. The **classical shadow** of \(\rho\) is defined by: \[\hat{\rho}:=\mathcal{M}^{-1}\left(U^{\dagger}|\hat{b}\rangle\langle\hat{b}|U \right), \tag{3.3}\] it depends on the choice of \(\mathcal{U}\) and it is obtained by a single measurement on \(\rho\). By construction, \(\mathbb{E}_{U,\hat{b}}[\hat{\rho}]=\rho\). However, \(\mathcal{M}^{-1}\) is not a quantum channel in general, so the classical shadow is computed classically and stored as classical information, as it may not be a quantum state. Classical shadows can be used to predict expectation values of given observables \(O_{1},...,O_{M}\) on the unknown state \(\rho\). In fact, the classical shadows define the following random variables: \[\hat{o}_{i}:=\text{Tr}(O_{i}\hat{\rho})\qquad\forall i\in[M], \tag{3.4}\] with the nice property: \[\mathbb{E}_{U,\hat{b}}[\hat{o}_{i}]=\text{Tr}(O_{i}\rho)\qquad\forall i\in[M]. \tag{3.5}\] **Lemma 3.1** ([6]).: _Let \(O\) be an observable of the \(n\)-qubit system. 
The fluctuations of the random variable \(\hat{o}=\text{Tr}(O\hat{\rho})\) around \(\langle O\rangle_{\rho}\) are described by the variance:_ \[Var[\hat{o}]=\mathbb{E}_{U,\hat{b}}\left[\left(\hat{o}-\mathbb{E}_{U,\hat{b}}[\hat{o}]\right)^{2}\right]\leq\left\|O-\frac{\text{Tr}[O]}{2^{n}}\mathbb{I}\right\|_{sh}^{2}, \tag{3.6}\] _where the shadow norm is defined by:_ \[\|O\|_{sh}:=\max_{\sigma}\left(\mathbb{E}_{U\sim\mathcal{U}}\left[\sum_{b}\langle b|U\sigma U^{\dagger}|b\rangle\ \langle b|U\mathcal{M}^{-1}(O)U^{\dagger}|b\rangle^{2}\right]\right)^{\frac{1}{2}}=\left\|\mathbb{E}_{U\sim\mathcal{U}}\sum_{b}\langle b|U\mathcal{M}^{-1}(O)U^{\dagger}|b\rangle^{2}\ U^{\dagger}|b\rangle\langle b|U\right\|_{\infty}^{\frac{1}{2}}. \tag{3.7}\] For any observable \(O\), a classical shadow predicts \(\text{Tr}(O\rho)\) in expectation, and the characterization of the variance above allows one to boost the convergence, providing a good approximation with few measurement processes. Consider \(K\) collections of \(N\) classical shadows and take the median of the \(K\) empirical means over \(N\) as an estimator: \[\hat{o}(N,K):=\text{median}\{\hat{o}^{(1)}(N,1),...,\hat{o}^{(K)}(N,1)\}, \tag{3.8}\] \[\hat{o}^{(k)}(N,1)=\frac{1}{N}\sum_{j=N(k-1)+1}^{Nk}\text{Tr}(O\hat{\rho}_{j})\qquad k\in[K]\] **Theorem 3.1** ([6]).: _Given an ensemble \(\mathcal{U}\) of unitaries and observables \(O_{1},...,O_{M}\) on an \(n\)-qubit Hilbert space, let \(\epsilon,\delta\in[0,1]\) and set the values:_ \[K=2\log\left(\frac{2M}{\delta}\right),\] \[N=\frac{34}{\epsilon^{2}}\max_{i}\left\|O_{i}-\frac{\operatorname{Tr}(O_{i})}{2^{n}}\mathbb{I}\right\|_{sh}^{2}.\] _Then:_ \[|\hat{o}_{i}(N,K)-\operatorname{Tr}(O_{i}\rho)|\leq\epsilon\qquad\forall i\in[M]\] _with probability at least \(1-\delta\)._ The proof of Theorem 3.1 is based on standard properties of the estimator _median of means_. Since each classical shadow results from a single measurement on \(\rho\), the total number of measurements required to estimate the expectation values \(\operatorname{Tr}(O_{i}\rho)\) up to error \(\epsilon\) is: \[N=O\left(\frac{\log M}{\epsilon^{2}}\max_{i}\left\|O_{i}-\frac{\operatorname{Tr}(O_{i})}{2^{n}}\mathbb{I}\right\|_{sh}^{2}\right). \tag{3.9}\] The sample complexity is logarithmic in the number of observables we consider and does not depend on the number of qubits. However, there is a dependence on the chosen ensemble \(\mathcal{U}\) via the shadow norm. **Definition 3.2**.: A **Pauli measurement primitive** is a tomographically complete ensemble \(\mathcal{U}\) such that any \(U\in\mathcal{U}\) is a tensor product \(U=U_{1}\otimes\cdots\otimes U_{n}\) of randomly selected 1-qubit Clifford gates \(U_{i}\in\operatorname{Cl}(2)\). Equivalently, a random Pauli matrix is measured on each qubit. Let us focus on the case where classical shadows are constructed by applying a Pauli measurement primitive, and let us consider the case of \(k\)-local observables, i.e. \(O\) is given by an elementary tensor product supported on \(k\) qubits like \(O=O_{1}\otimes\cdots\otimes O_{k}\otimes\mathbb{I}^{\otimes(n-k)}\) for instance. According to [6, Proposition S2 and Lemma S3], we can state the following result. **Proposition 3.1**.: _Let \(\rho\) be an \(n\)-qubit quantum state. The classical shadow of \(\rho\) constructed out from a Pauli measurement primitive is:_ \[\hat{\rho}=\bigotimes_{i=1}^{n}(3U_{i}^{\dagger}|\hat{b}_{i}\rangle\langle\hat{b}_{i}|U_{i}-\mathbb{I})\quad\text{where}\quad\hat{b}_{i}\in\{0,1\}\quad\forall i=1,...,n. 
\tag{3.10}\] _Moreover, let \(O\) be a \(k\)-local observable, then:_ \[\|O\|_{sh}^{2}=3^{k}. \tag{3.11}\] The following Theorem 3.2 provides a criterion for estimating the expectation values of local observables using the empirical mean of collected classical shadows. The proof is essentially a consequence of the Bernstein's concentration inequality, a similar result is proved in [18]. **Theorem 3.2**.: _Let \(O_{1},...,O_{M}\)\(k\)-local observables, let \(\hat{\rho}_{1},...,\hat{\rho}_{N}\) be classical shadows of the unknown \(n\)-qubit state \(\rho\) constructed out from a Pauli measurement primitive, and let \(\hat{\rho}:=(1/N)\sum_{i=1}^{N}\hat{\rho}_{i}\) be their empirical mean. Then, for any \(\epsilon,\delta>0\), if_ \[N\geq 3^{k+1}\frac{\log\left(\frac{2M}{\delta}\right)}{\epsilon^{2}} \tag{3.12}\] _we have_ \[|\mathrm{Tr}(\hat{\rho}O_{m})-\mathrm{Tr}(\rho O_{m})|\leq\epsilon\quad\forall m =1,...,M \tag{3.13}\] _with probability at least \(1-\delta\)._ Proof.: The claim is a consequence of the following inequality: \[\mathbb{P}[\max_{m}|\mathrm{Tr}(\hat{O}_{m}\hat{\rho})-\mathrm{Tr}(O_{m}\rho) |\geq\epsilon]\leq 2M\exp\left[-\frac{\epsilon^{2}N}{3^{k+1}}\right], \tag{3.14}\] that we can prove applying the Bernstein's concentration inequality: Let \(X_{1},...,X_{N}\) be independent random variables such that \(\mathbb{E}[X_{i}]=0\) and \(|X_{i}|\leq R\) almost surely for all \(i=1,...N\). Then, for \(\epsilon>0\): \[\mathbb{P}\left[\left|\frac{1}{T}\sum_{i=1}^{N}X_{i}\right|\geq\epsilon\right] \leq 2\exp\left(-\frac{\epsilon^{2}N^{2}/2}{\sigma^{2}+RN\epsilon}\right), \tag{3.15}\] where \(\sigma^{2}=\sum_{i=1}^{N}\mathbb{E}[(X_{i})^{2}]\). Given a \(k\)-local observable \(O\), let us define the random variables as \(X_{i}:=|\mathrm{Tr}(O\hat{\rho}_{i})-\mathrm{Tr}(O\rho)|\) that are independent and centered by construction of the classical shadows. Let \(\Lambda\subseteq[n]\), with \(|\Lambda|=k\), be the region on which \(O\) acts non-trivially. By the Holder's inequlaity, \(\mathrm{Tr}(O_{\Lambda}\rho_{\Lambda})\leq\|O_{\Lambda}\|_{\infty}\)\(\|\rho_{\Lambda}\|_{1}\), we have: \[|X_{i}|=|\mathrm{Tr}(O_{\Lambda}\hat{\rho}_{i\Lambda})-\mathrm{Tr}(O_{\Lambda }\rho_{\Lambda})|\leq\|O_{\Lambda}\|_{\infty}(\|\rho_{\Lambda}\|_{1}+\|\hat{ \rho}_{i\Lambda}\|_{1})\leq 1+3^{k},\] where we used the factorized form (3.10) of \(\hat{\rho}_{i}\) and the fact that \(\|O_{\Lambda}\|_{\infty}\leq 1\). In view of Lemma 3.1 and Proposition 3.1, we have \(\mathbb{E}\left[X_{i}^{2}\right]\leq\|O\|_{sh}=3^{k}\), then \(\sigma^{2}\leq N3^{k}\). Now, we apply (3.15) obtaining: \[\mathbb{P}\left[|\mathrm{Tr}(O\hat{\rho})-\mathrm{Tr}(O\rho)|\geq\epsilon \right]\leq 2\exp\left[-\frac{\epsilon^{2}N^{2}/2}{N3^{k}+(1+3^{k})N\epsilon} \right]\leq 2\exp\left[-\frac{\epsilon^{2}N}{3^{k+1}}\right]. \tag{3.16}\] The argument above applies for any \(O_{m}\), then: \[\mathbb{P}\left[\max_{m}|\mathrm{Tr}(O_{m}\hat{\rho})-\mathrm{Tr} (O_{m}\rho)|\geq\epsilon\right] \leq\sum_{m=1}^{M}\mathbb{P}\left[\max_{m}|\mathrm{Tr}(O_{m}\hat{ \rho})-\mathrm{Tr}(O_{m}\rho)|\geq\epsilon\right]\] \[\leq 2\,M\exp\left[-\frac{\epsilon^{2}N}{3^{k+1}}\right], \tag{3.17}\] obtaining (3.14). 
Therefore, \(N\geq 3^{k+1}\frac{\log\left(\frac{2M}{\delta}\right)}{\epsilon^{2}}\) implies \[\mathbb{P}\left[\max_{m}|\mathrm{Tr}(O_{m}\hat{\rho})-\mathrm{Tr}(O_{m}\rho)| \geq\epsilon\right]\leq\delta\,, \tag{3.18}\] that implies in turn that \(\mathbb{P}\left[|\mathrm{Tr}(O_{m}\hat{\rho})-\mathrm{Tr}(O_{m}\rho)|\leq \epsilon\right]\geq 1-\delta\) for any \(m=1,...,M\) that is the claim. \(n\)-qubit Pauli operators are tensor products of \(n\) Pauli matrices (identity included), and the locality or Hamming weight \(|P|\) of the Pauli operator \(P\) is the number of factors different from the identity, _i.e._, the number of qubits on which \(P\) acts nontrivially. Classical shadows are not the best protocol to estimate the expectation values of Pauli operators if their locality is high. Indeed, allowing Bell measurements on two copies of the state, the shadow protocol can be improved obtaining a sample complexity which does not depend on the locality degree of the considered Pauli operators [8]. Let \(\{|\Psi^{+}\rangle,|\Psi^{-}\rangle,|\Phi^{+}\rangle,|\Phi^{-}\rangle\}\) be the Bell basis: \[|\Psi^{\pm}\rangle=\frac{1}{\sqrt{2}}(|00\rangle\pm|11\rangle)\qquad|\Phi^{ \pm}\rangle=\frac{1}{\sqrt{2}}(|01\rangle\pm|10\rangle), \tag{3.19}\] assume to perform a measurement in this basis on each qubit pair of the \(n\) qubit pairs in the state \(\rho\otimes\rho\). After the Bell measurements on \(N_{1}\) copies of \(\rho\otimes\rho\), one obtains a \((2nN_{1})\)-bit string from which the value \(|\mathrm{Tr}(P\rho)|\) can be estimated for any Pauli operator \(P\). Then, with additional \(N_{2}\) measurements, on can estimate the sign of \(\mathrm{Tr}(P\rho)\). Any Bell state is an eigenvector of \(\sigma\otimes\sigma\) with eigenvalue \(\pm 1\) and \(\sigma\in\{I,X,Y,Z\}\). Consider the Pauli operator \(P=\sigma_{1}\otimes\cdots\otimes\sigma_{n}\), then: \[|\mathrm{Tr}(P\rho)|^{2}=\mathrm{Tr}[(P\otimes P)(\rho\otimes\rho)]=\mathbb{E }\left[\prod_{k=1}^{n}\mathrm{Tr}[(\sigma_{k}\otimes\sigma_{k})S_{k}]\right], \tag{3.20}\] where \(S_{k}\) is an eigenprojector of \(\sigma_{k}\otimes\sigma_{k}\). The average is taken over the distribution of the outcomes of a Bell measurement on any qubit pair in \(\rho\otimes\rho\). Let \(\{S_{k}^{(t)}\}\) be the collection of obtained outcomes from the repeated Bell measurements, then we can get the empirical mean as an estimation of \(|\mathrm{Tr}(P\rho)|^{2}\): \[\hat{a}(P)=\frac{1}{N_{1}}\sum_{t=1}^{N_{1}}\prod_{k=1}^{n}\mathrm{Tr}[( \sigma_{k}\otimes\sigma_{k})S_{k}^{(t)}]\,. \tag{3.21}\] More precisely, the following proposition is proven in [8]: **Proposition 3.2** ([8]).: _Given \(N_{1}=\Theta(\log(1/\delta)/\epsilon^{4})\) copies of \(\rho\otimes\rho\), the following is true for any Pauli operator \(P\):_ \[|\sqrt{\max(\hat{a}(P),0)}-|\mathrm{Tr}(P\rho)||<\epsilon, \tag{3.22}\] _with probability \(1-\delta\)._ If \(|{\rm Tr}(P\rho)|\) is large enough, then considering \(N_{2}\) copies of \(\rho\) allows to estimate the sign by measuring \(P\) on each of the \(N_{2}\) copies and taking the majority voting of the obtained \(1\)s and \(-1\)s. Assuming \(|{\rm Tr}(P\rho)|\) is large enough, then the majority voting is close to the correct answer, and if \(N_{2}\) is also large enough such that \(\rho^{\otimes N_{2}}\) is not highly perturbed by the measurement, then it can be used to decide the sign for many different Pauli operators. Formally: **Proposition 3.3** ([8]).: _For any \(\delta,\epsilon,M>0\), let \(N_{2}=\Theta(\log(M/\delta)\epsilon^{2})\). 
For any \(M\) Pauli operators \(P_{1},...,P_{M}\) with \(|{\rm Tr}(P_{i}\rho)|>\epsilon\) for all \(i\), measuring \(\rho^{\otimes N_{2}}\), \({\rm sign}({\rm Tr}(P_{i}\rho))\) can be obtained with probability \(1-\delta\) for any \(i=1,...,M\)._ The total number of copies of \(\rho\) required by the tomographic procedure above to estimate the expectations of \(M\) local Pauli operators is \(2N_{1}+N_{2}\). More precisely, as a consequence of the propositions above, we have the next lemma. **Lemma 3.2** ([8, Theorem 2]).: _Given any \(P_{1},...,P_{M}\) Pauli operators and a state \(\rho\), there is a procedure that produces \(\hat{p}_{1},...,\hat{p}_{M}\) with_ \[|\hat{p}_{i}-{\rm Tr}(P_{i}\rho)|\leq\epsilon,\forall i\in[M] \tag{3.23}\] _with probability at least \(1-\delta\) using \(O(\frac{\log(M/\delta)}{\epsilon^{4}})\) copies of \(\rho\)._ Remarkably, the sample complexity of the protocol does not depend on the locality degree (i.e. the number of qubits on which the action is nontrivial) of the considered Pauli operators. ## 4 Convergence of the classical shadow in the local quantum \(W_{1}\) distance Let us study the convergence of the empirical mean of the classical shadows to the original state with respect to the local quantum \(W_{1}\) distance: **Theorem 4.1**.: _Let \(\rho\) be an unknown quantum state of \(n\) qubits, let \(\hat{\rho}_{1},...,\hat{\rho}_{N}\) be classical shadows of \(\rho\) constructed out from a Pauli measurement primitive, and let \(\hat{\rho}:=\frac{1}{N}\sum_{i=1}^{N}\hat{\rho}_{i}\) be their empirical mean. We set the coefficients of the local norm (Definition 2.6) to \(c_{l}=\frac{c^{l-1}}{l}\), \(l=1,...,n\) with \(c>\sqrt{10}\), Then, for any \(0<\delta<1\) the normalized local \(W_{1}\) distance between \(\rho\) and \(\hat{\rho}\) can be bounded as follows:_ \[\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}{\rm loc}}\,\leq w, \tag{4.1}\] _with probability at least \(1-\delta\), by a number of classical shadows that scales as:_ \[N=O\left(\frac{\log\frac{1}{\delta}+\log\frac{1}{w}\log n}{w^{2}}\right). \tag{4.2}\] Proof.: For a given \(k<n\), let us assume we need to estimate the expectations of all the \(|P|\)-local Pauli operators on \(n\) qubits for any \(|P|\leq k\), up to an error \(\epsilon_{P}:=3^{|P|/2}(3/c)^{k}\), using the empirical mean \(\hat{\rho}\). By Theorem 3.2, we need a number \(N\) of classical shadows satisfying: \[N\geq 3\,\left(\frac{c}{3}\right)^{2k}\log\frac{2M_{k}}{\delta}\qquad\qquad M_{ k}=\sum_{i=1}^{k}{n\choose i}3^{i}\leq\frac{(3n)^{k}}{(k-1)!}, \tag{4.3}\] where \(M_{k}\) is the total number of \(|P|\)-local Pauli operators with \(|P|\leq k\). The quantum state \(\rho\) and its estimator \(\hat{\rho}\) can be decomposed onto the Pauli basis \(\{P\}\): \[\rho=\frac{1}{2^{n}}\sum_{P}\langle P\rangle P\quad\text{where}\quad\langle P \rangle=\operatorname{Tr}(P\rho)\,,\qquad\hat{\rho}=\frac{1}{2^{n}}\sum_{|P| \leq k}\hat{P}P\qquad k\leq n\,, \tag{4.4}\] where \(\hat{P}=\operatorname{Tr}\left[\hat{\rho}\,P\right]\) is the estimated expectation value of \(P\) computed with the empirical mean \(\hat{\rho}\) of the classical shadows. Let us consider the difference of the marginals on a region \(\Lambda\subseteq[n]\): \[\hat{\rho}_{\Lambda}-\rho_{\Lambda}=\frac{1}{2^{|\Lambda|}}\sum_{P\in\mathcal{ P}_{\Lambda}}(\hat{P}-\langle P\rangle)P, \tag{4.5}\] where \(\mathcal{P}_{\Lambda}\) is the set of local Pauli observables defined on \(\Lambda\). 
Now, we need to bound the trace norm of \(\hat{\rho}_{\Lambda}-\rho_{\Lambda}\) and apply Proposition 2.6. Let us recall that, given a \(d\times d\) complex matrix \(A\), we have \(\|A\|_{1}\leq\sqrt{d}\|A\|_{2}\) where \(\|\;\;\|_{2}\) is the Hilbert-Schmidt norm. Therefore: \[\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}\leq 2^{\frac{|\Lambda|}{2}}\|\hat{ \rho}_{\Lambda}-\rho_{\Lambda}\|_{2}\leq 2^{\frac{|\Lambda|}{2}}\sqrt{\frac{1}{2^{ 2|\Lambda|}}\sum_{P\in\mathcal{P}_{\Lambda}}(\hat{P}-\langle P\rangle)^{2}2^{| \Lambda|}}=\sqrt{\sum_{P\in\mathcal{P}_{\Lambda}}(\hat{P}-\langle P\rangle)^{2 }}. \tag{4.6}\] For \(|\Lambda|\leq k\), we have \((\hat{P}-\langle P\rangle)^{2}\leq\epsilon_{P}^{2}\). Thus, applying (4.6): \[\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}\leq\sqrt{\sum_{P\in\mathcal{P}_{ \Lambda}}\epsilon_{P}^{2}}=\sqrt{\sum_{i=1}^{|\Lambda|}{|\Lambda|\choose i}9^{ i}\,\left(\frac{3}{c}\right)^{2k}}=\frac{3^{k}}{c^{k}}\sqrt{(10^{|\Lambda|}-1)} \leq\sqrt{10^{|\Lambda|}}\,\frac{3^{k}}{c^{k}}, \tag{4.7}\] where we used the standard identity \(\sum_{i=0}^{N}{N\choose i}x^{i}=(1+x)^{N}\) with \(N=|\Lambda|\) and \(x=9\). In the case \(|\Lambda|>k\), we can bound the trace distance as follows: \[\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}\leq\|\hat{\rho}_{\Lambda}\|_{1}+1 \leq 3^{|\Lambda|}+1, \tag{4.8}\] where we have used the triangle inequality and the form of \(\hat{\rho}_{\Lambda}\) given by (3.10). Therefore, within the choice \(c_{l}=\frac{c^{l-1}}{l}\), for \(|\Lambda|\leq k\): \[\frac{\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}}{2|\Lambda|c_{|\Lambda|}} \leq\frac{\sqrt{10}^{|\Lambda|}}{2c^{|\Lambda|-1}}\frac{3^{k}}{c^{k}}\leq \frac{\sqrt{10}}{2}\frac{3^{k}}{c^{k}}. \tag{4.9}\] For \(|\Lambda|\geq k+1\), we have: \[\frac{\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}}{2|\Lambda|c_{|\Lambda|}}\leq \frac{3^{|\Lambda|}+1}{2c^{|\Lambda|-1}}\leq\frac{2}{3}\frac{3^{|\Lambda|}}{c ^{|\Lambda|-1}}\leq 2\,\frac{3^{k}}{c^{k}}. \tag{4.10}\] In (4.9) and (4.10), we have used the fact that the terms \(\frac{\sqrt{10}^{|\Lambda|}}{c^{|\Lambda|-1}}\) and \(\frac{3^{|\Lambda|}}{c^{|\Lambda|-1}}\) are decreasing in \(|\Lambda|\) and achieve the maximum for \(|\Lambda|=1\) and \(|\Lambda|=k+1\) respectively. According to Proposition 2.6, we can set: \[\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}\text{loc}}\leq 2\,\frac{3^{k}}{c^{k}}=w. \tag{4.11}\] In view of (4.3), we can estimate the required number of classical shadows to guarantee \(\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}\text{loc}}\leq w\). From (4.11), we obtain that \(N\geq 3\,\frac{4}{w^{2}}\log\left(\frac{2M_{k}}{\delta}\right)\) and observing that \(M_{k}=O(n^{k})\) with \(k=O(\log(1/w))\) the claim is proved. A second protocol to estimate an unknown \(n\)-qubit state \(\rho\) employs the estimates of the expectation values of all the Pauli operators acting on few qubits. Let \(\hat{P}\) be the estimate of the Pauli operator \(P\). We can then build the following estimate of the state \(\rho\): \[\hat{\rho}:=\frac{1}{2^{n}}\sum_{P}\hat{P}P\,. \tag{4.12}\] The operator \(\hat{\rho}\) of (4.12) may not be positive semidefinite. However, it satisfies \(\operatorname{Tr}\left[\hat{\rho}\,P\right]=\hat{P}\) for any Pauli operator \(P\). In section 3, we have summarized the tomographic procedure presented in [8], based on Bell measurements, which improves the shadow protocol. 
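As an illustration of the reconstruction (4.12), the following NumPy sketch assembles \(\hat{\rho}\) from a dictionary of estimated Pauli expectation values truncated at locality \(k\). It is a minimal sketch under the assumption that the estimates \(\hat{P}\) are already available (in practice they would come from the Bell procedure or from classical shadows); the function names, the GHZ test state and the dense \(2^{n}\times 2^{n}\) representation are illustrative choices suitable only for a few qubits.

```python
import itertools
import numpy as np

# single-qubit Pauli matrices
PAULI = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(labels):
    """Tensor product of single-qubit Paulis, e.g. ('X', 'I', 'Z')."""
    op = np.array([[1.0 + 0j]])
    for s in labels:
        op = np.kron(op, PAULI[s])
    return op

def weight(labels):
    """Locality |P|: number of non-identity tensor factors."""
    return sum(s != "I" for s in labels)

def truncated_estimate(pauli_estimates, n, k):
    """Assemble rho_hat = 2^{-n} sum_{|P| <= k} hat{P} P as in Eq. (4.12),
    with the expectations of all Paulis of weight larger than k set to zero.
    The result is Hermitian with unit trace but need not be positive."""
    rho_hat = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for labels in itertools.product("IXYZ", repeat=n):
        if weight(labels) <= k:
            rho_hat += pauli_estimates.get(labels, 0.0) * pauli_string(labels)
    return rho_hat / 2 ** n

# toy usage: the "estimates" below are exact expectations of a known GHZ state,
# standing in for the outcome of the Bell procedure (or of classical shadows)
n, k = 3, 2
psi = np.zeros(2 ** n, dtype=complex); psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())
estimates = {
    labels: float(np.real(np.trace(pauli_string(labels) @ rho)))
    for labels in itertools.product("IXYZ", repeat=n)
    if weight(labels) <= k
}
rho_hat = truncated_estimate(estimates, n, k)
print("trace of the estimate:", np.trace(rho_hat).real)
print("smallest eigenvalue:", np.linalg.eigvalsh(rho_hat).min())
```

For the three-qubit GHZ state with \(k=2\) the weight-three correlations are discarded, so \(\hat{\rho}\) differs from \(\rho\) while still satisfying \(\operatorname{Tr}[\hat{\rho}\,P]=\hat{P}\) for every retained Pauli \(P\), consistently with the remark that the operator in (4.12) need not be positive semidefinite.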
Let us consider \(\hat{\rho}\) as defined in (4.12), where the expectation values of the Pauli operators acting on at most \(k\) qubits are estimated by the _Bell procedure_, and the expectation values of the Pauli operators acting on more than \(k\) qubits are set to \(0\). We determine in Theorem 4.2 below the convergence rate of the above estimate to the true state with respect to the local quantum \(W_{1}\) distance. **Theorem 4.2**.: _Let \(\rho\) be an unknown quantum state of \(n\) qubits, \(\hat{\rho}\) be the estimating operator defined in (4.12) constructed out from the Bell procedure accessing \(N\) copies of \(\rho\). Let us set the coefficients of the local norm (Definition 2.6) to \(c_{l}=\frac{c^{l-1}}{l}\), \(l=1,...,n\) with \(c>2\). Then, for any \(0<\delta<1\) the normalized local \(W_{1}\) distance between \(\rho\) and \(\hat{\rho}\) can be bounded as follows:_ \[\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}\text{loc}}\leq w \tag{4.13}\] _with probability \(1-\delta\) using a number of copies that scales as:_ \[N=O\left(\frac{\log\frac{1}{\delta}+\log\frac{1}{w}\log n}{w^{4}}\right)\,. \tag{4.14}\] Proof.: The construction of \(\hat{\rho}\) as in (4.12) requires the shadow tomography over all the local Pauli observables up to \(k\) qubits, that are \[M_{k}=\sum_{i=1}^{k}\binom{n}{i}3^{i}\leq\sum_{i=1}^{k}\frac{(3n)^{i}}{i!}\leq \frac{(3n)^{k}}{(k-1)!}, \tag{4.15}\] for estimating the expectation value of any \(P\) with \(|P|\leq k\). As in the previous proof, we consider the trace norm of the difference of the marginals \(\hat{\rho}_{\Lambda}-\rho_{\Lambda}\) on a region \(\Lambda\subset[n]\). Since Lemma 3.2 does not depend on the choice of Paulis, we can fix a single \(0<\epsilon<1\) for all of them. Applying (4.6), for \(|\Lambda|\leq k\), we get with probability \(1-\delta\): \[\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}\leq\sqrt{\sum_{P\in\mathcal{P}_{ \Lambda}}\epsilon^{2}}\leq\epsilon\sqrt{\sum_{i=1}^{|\Lambda|}\binom{|\Lambda| }{i}3^{i}}\leq\epsilon 2^{|\Lambda|}. \tag{4.16}\] Else if \(|\Lambda|\geq k+1\), since we estimated non-local Paulis to be \(0\), the error for each is at most one: \[\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}\leq\sqrt{\sum_{i=1}^{|\Lambda|} \binom{|\Lambda|}{i}3^{i}}\leq\sqrt{4^{|\Lambda|}-1}. \tag{4.17}\] Applying Proposition 2.6, we have \[\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}\mathrm{loc}}\leq\max_{\Lambda\subseteq[ n]}\frac{\|\hat{\rho}_{\Lambda}-\rho_{\Lambda}\|_{1}}{2|\Lambda|c_{|\Lambda|}}. \tag{4.18}\] Choosing \(c>2\), and fixing the coefficients \(c_{|\Lambda|}=\frac{c^{|\Lambda|-1}}{|\Lambda|}\), we have, for \(|\Lambda|\leq k\), \[\frac{\epsilon 2^{|\Lambda|-1}}{c^{|\Lambda|-1}}\leq\epsilon, \tag{4.19}\] with equality achieved for \(|\Lambda|=1\). And for \(|\Lambda|\geq k+1\), \[\frac{2^{|\Lambda|-1}}{c^{|\Lambda|-1}}\leq\frac{2^{k}}{c^{k}}, \tag{4.20}\] with equality achieved for \(|\Lambda|=k+1\). Therefore we have: \[\frac{1}{n}\|\hat{\rho}-\rho\|_{W_{1}\mathrm{loc}}\leq\max\left(\epsilon, \frac{2^{k}}{c^{k}}\right)=w. \tag{4.21}\] It is in our interest to make both arguments of the maximum equal, as it will not change the required number of copies. If \(\epsilon\leq\frac{2^{k}}{c^{k}}\), we can increase \(\epsilon\) up to \(\frac{2^{k}}{c^{k}}\) without needing to use extra copies, making both arguments of the max equal. If it is larger, we can decrease the size of the region \(k\) until \(\epsilon\) is smaller, and then increase \(\epsilon\). 
In the end, we can write: \[\epsilon=\frac{2^{k}}{c^{k}},k=\Bigg{\lceil}\frac{-\log(\epsilon)}{\log(c/2)} \Bigg{\rceil}. \tag{4.22}\] The total number of copies needed, using Lemma 3.2 is: \[N=O\left(\frac{\log(M/\delta)}{w^{4}}\right)=O\left(\frac{1}{w^{4}}(\log(1/\delta )+\frac{\log\frac{1}{w}}{\log(c/2)}\log n)\right). \tag{4.23}\] Let us compare the convergence results of Theorem 4.1 and Theorem 4.2. On the one hand, the number of copies required by the empirical mean of the classical shadows has a better scaling with respect to the local quantum \(W_{1}\) distance compared to the Bell protocol (\(O\left(\frac{1}{w^{2}}\log\frac{1}{w}\right)\) compared to \(O\left(\frac{1}{w^{4}}\log\frac{1}{w}\right)\)). On the other hand, while for the convergence of the Bell-protocol estimate it is enough that the coefficients \(c_{k}\) in the definition of the local quantum norm grow as \(\frac{c^{k}}{k}\) with \(c>2\), the convergence of the empirical mean of the classical shadows requires \(c>\sqrt{10}\). ## 5 Gibbs states We have proved in section 4 that an estimate of a generic state of \(n\) qubits that is accurate in the local quantum \(W_{1}\) distance can be obtained by measuring \(O(\log n)\) copies of the state. Measuring \(O(\operatorname{polylog}n)\) copies of the state \(\omega\in\mathcal{S}_{[n]}\) is sufficient to get an estimate that is accurate for the quantum \(W_{1}\) distance of [9] if \(\omega\) is a Gibbs state of a local Hamiltonian satisfying the transportation cost-inequality \[\left\|\rho-\omega\right\|_{W_{1}}^{2}\leq\frac{n\,C}{2}\,S(\rho\|\omega) \qquad\forall\;\rho\in\mathcal{S}_{[n]}\,, \tag{5.1}\] which upper bounds the quantum \(W_{1}\) distance between \(\omega\) and a generic state \(\rho\) with their quantum relative entropy [12, 13]. Such transportation-cost inequality has been proved for the Gibbs states of local Hamiltonians satisfying suitable forms of decay of correlations [13, 14]. In this section, we connect the two results by proving that when we restrict the quantum \(W_{1}\) distance and the local quantum \(W_{1}\) distance to any family of Gibbs states of Hamiltonians with local quantum norm \(O(1)\) satisfying the transportation-cost inequality (5.1), the two distances become equivalent: **Proposition 5.1**.: _For any Hamiltonian \(H\in\mathcal{O}_{[n]}\), let_ \[\omega_{H}=\frac{e^{-H}}{\operatorname{Tr}e^{-H}} \tag{5.2}\] _be the associated Gibbs state (the inverse temperature does not appear since it can be reabsorbed in \(H\)). Let \(\mathcal{F}\subseteq\mathcal{O}_{[n]}\) be a family of Hamiltonians with local quantum norm at most \(M\). Let us assume that for any \(H\in\mathcal{F}\), the Gibbs state \(\omega_{H}\) satisfies the transportation-cost inequality (5.1) with a constant \(C\) that does not depend on \(H\). Then, for any \(H,\,K\in\mathcal{F}\) we have_ \[\frac{\left\|\omega_{H}-\omega_{K}\right\|_{W_{1}\!\operatorname{loc}}^{2}}{n^ {2}}\leq\frac{\left\|\omega_{H}-\omega_{K}\right\|_{W_{1}}^{2}}{n^{2}}\leq \frac{M\,C}{2}\,\frac{\left\|\omega_{H}-\omega_{K}\right\|_{W_{1}\!\operatorname {loc}}}{n}\,. \tag{5.3}\] Proof.: The first inequality in (5.3) follows from Proposition 2.5. 
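To make the first of the two estimators compared above concrete, here is a self-contained simulation sketch of Pauli-measurement classical shadows and of their empirical mean, the estimator of Theorem 4.1. It assumes the standard single-qubit inverse-channel form \(3|w\rangle\langle w|-I\) for each snapshot of the Pauli measurement primitive; the dense simulation, the random seed and the two-qubit toy check are illustrative and only meant for a handful of qubits.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Eigenvectors |w_0>, |w_1> (eigenvalues +1, -1) of the three measurement bases.
EIGVECS = {
    "X": (np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)),
    "Y": (np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)),
    "Z": (np.array([1.0, 0.0]), np.array([0.0, 1.0])),
}

def random_pauli_snapshot(rho, n):
    """One classical shadow of an n-qubit state rho: each qubit is measured in a
    uniformly random X/Y/Z basis and the snapshot is the tensor product of the
    single-qubit inverse-channel terms 3|w_b><w_b| - I."""
    bases = rng.choice(list("XYZ"), size=n)
    # enumerate all 2^n outcomes, compute their Born probabilities, sample one
    outcomes, probs = [], []
    for bits in np.ndindex(*(2,) * n):
        vec = np.array([1.0 + 0j])
        for q, b in enumerate(bits):
            vec = np.kron(vec, EIGVECS[bases[q]][b])
        outcomes.append(bits)
        probs.append(max(np.real(vec.conj() @ rho @ vec), 0.0))
    probs = np.array(probs) / np.sum(probs)
    bits = outcomes[rng.choice(len(outcomes), p=probs)]
    # build the inverse-channel snapshot
    snap = np.array([[1.0 + 0j]])
    for q, b in enumerate(bits):
        w = np.asarray(EIGVECS[bases[q]][b], dtype=complex)
        snap = np.kron(snap, 3 * np.outer(w, w.conj()) - np.eye(2))
    return snap

def shadow_mean(rho, n, N):
    """Empirical mean of N classical shadows (the estimator of Theorem 4.1)."""
    return sum(random_pauli_snapshot(rho, n) for _ in range(N)) / N

# toy check on |0>|+>: the mean shadow reproduces the 1-local expectations
n = 2
psi = np.kron(np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2))
rho = np.outer(psi, psi).astype(complex)
rho_hat = shadow_mean(rho, n, N=3000)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.real(np.trace(np.kron(Z, np.eye(2)) @ rho_hat)))   # close to 1
print(np.real(np.trace(np.kron(np.eye(2), X) @ rho_hat)))   # close to 1
```

The accuracy of such low-weight expectation values improves as \(1/\sqrt{N}\), while high-weight Pauli expectations estimated from the same mean shadow fluctuate strongly, which is the reason for truncating at locality \(k\) in the proofs above.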
We have from (5.1) \[\frac{\|\omega_{H}-\omega_{K}\|_{W_{1}}^{2}}{n^{2}} \leq\frac{C}{4\,n}\left(S(\omega_{H}\|\omega_{K})+S(\omega_{K}\| \omega_{H})\right)=\frac{C}{4\,n}\operatorname{Tr}\left[\left(\omega_{H}- \omega_{K}\right)(K-H)\right]\] \[\leq\frac{C}{4\,n}\left\|\omega_{H}-\omega_{K}\right\|_{W_{1} \mathrm{loc}}\left\|K-H\right\|_{\mathrm{loc}}\leq\frac{C\,M}{2\,n}\left\| \omega_{H}-\omega_{K}\right\|_{W_{1}\mathrm{loc}}\,. \tag{5.4}\] The claim follows. ## 6 Quantum Wasserstein Generative Adversarial Networks Quantum Generative Adversarial Networks (QGANs) constitute an algorithm to train a parametric quantum circuit to learn an unknown quantum state [19]. The training takes the form of an adversarial game, where a generator parametric quantum circuit with the goal of generating a state as close as possible to the true state is trained against a discriminator with the goal of discriminating between the true state and the generated state. In the typical setup, the discriminator trains a parametric observable to maximize the difference between its expectation value on the true state and on the generated state. The choice of the parametric observable plays a crucial role for the success of the training. In the original proposal of [19], the observable is constrained to have operator norm at most one, such that if the available set of parametric observables is large enough, the discriminator obtains the trace distance between the true and the generated state. This choice has later been shown to suffer from the problem of barren plateaus, _i.e._, the gradient of the cost function decays exponentially with the number of qubits and quickly becomes indistinguishable from zero, thus making the training impossible [20]. This problem can be ascribed to the property that any two orthogonal states have trace distance equal to one. Therefore, if we want to obtain the state \(|1\rangle^{\otimes n}\) starting from the state \(|0\rangle^{\otimes n}\) and we proceed by flipping the qubits one by one, the trace distance will not notice any progress until the last qubit is flipped. To solve this problem, Ref. [15] has proposed a quantum Wasserstein GAN (QWGAN) where the discriminator optimizes his cost over observables with quantum Lipschitz constant at most one. In this case, if the available set of parametric observables is large enough, the discriminator obtains the quantum \(W_{1}\) distance between the true and the generated state. This choice was inspired both by the predominance of the Wasserstein distance as cost function of the classical GANs [21] and by the results of Ref. [22] proving that local cost functions computed at the output of quantum circuits with logarithmic depth do not suffer from barren plateaus. Ref. [15] shows that, contrarily to the original QGAN, the QWGAN is capable of learning complex quantum states, such as the \(n\)-qubit GHZ state. In practice, the computational complexity of computing the exact Lipschitz constant grows exponentially with the number of qubits. Therefore, the QWGAN of [15] actually replaces the quantum Lipschitz constant with the upper bound given by the local quantum norm of the present paper with all the coefficients \(c_{k}\) set to one. Moreover, since the dimension of the vector space of the observables grows exponentially with the number of qubits \(n\), the QWGAN of [15] restricts the optimization of the discriminator to the linear combinations of a set of \(O(\operatorname{poly}n)\) tensor products of Pauli matrices. 
If no a priori information on the state to be learnt is available, the most natural choice for such set is made by the tensor product of few Pauli matrices. With this choice, the constraint on the observable becomes effectively a constraint on its local quantum norm, and the QWGAN will measure the quality of the generated state with respect to the local quantum \(W_{1}\) distance. We have proved in section 4 that the classical shadow obtained by measuring \(O(\operatorname{poly}n)\) copies of any quantum state constitutes an accurate estimate with respect to the local quantum \(W_{1}\) distance. Therefore, our results imply that the QWGAN can be equivalently trained using the classical shadow in place of the true state and does not get any advantage in having quantum access to the true state, unless some prior information on the true state motivates the addition of some tensor product of many Pauli matrices to the set of observables available to the discriminator. Indeed, the successful learning of the \(n\)-qubit GHZ state by the QWGAN of [15] was based on such an addition. ## 7 Conclusions We have defined the local quantum \(W_{1}\) distance as a distance that captures the notion of local distinguishability and we have proved that the classical shadow produced by measuring \(O(\log n)\) copies of any state of \(n\) qubits provides an estimate of the state which is accurate with respect to the local quantum \(W_{1}\) distance. In particular, we have determined the speed of convergence toward the true state of the estimate given by the empirical mean of a collection of classical shadows (Theorem 4.1). Moreover, we have considered the tomographic protocol presented in [8] that improves the shadow protocol by means of Bell measurements. Also in this case we have determined the speed of convergence of the estimate to the true quantum state in the local quantum \(W_{1}\) distance (Theorem 4.2). Moreover, we have proved that when restricted to the set of Gibbs states of local Hamiltonians which can be efficiently estimated with respect to the quantum \(W_{1}\) distance, the local quantum \(W_{1}\) distance is equivalent to the quantum \(W_{1}\) distance. Furthermore, we have applied our results to quantum generative adversarial networks, showing that the QWGAN proposed in [15] can get advantages from having quantum access to the state to be learned only when some prior information on such state is available. Fundamental questions that are left open are whether the convergence speeds of Theorem 4.1 and Theorem 4.2 are optimal, and whether the requirements of such theorems on the scaling of the coefficients \(c_{k}\) in the definition of the local quantum norm can be relaxed. ## Acknowledgements GDP was supported by the HPC Italian National Centre for HPC, Big Data and Quantum Computing - Proposal code CN00000013 and by the Italian Extended Partnership PE01 - FAIR Future Artificial Intelligence Research - Proposal code PE00000013 under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU. DP was supported by project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU. GDP is a member of the "Gruppo Nazionale per la Fisica Matematica (GNFM)" of the "Istituto Nazionale di Alta Matematica "Francesco Severi" (INdAM)". ## Appendix A Further approaches to quantum optimal mass transport Several quantum generalizations of optimal transport distances have been proposed besides the one of Ref. [9]. 
One line of research by Carlen, Maas, Datta and Rouze [23, 24, 25, 26, 27, 28, 29] defines a quantum Wasserstein distance of order 2 from a Riemannian metric on the space of quantum states based on a quantum analog of a differential structure. Exploiting their quantum differential structure, Refs. [25, 26, 30] also define a quantum generalization of the Lipschitz constant and of the Wasserstein distance of order 1. Alternative definitions of quantum Wasserstein distances of order 1 based on a quantum differential structure are proposed in Refs. [31, 32, 33, 34]. Refs. [35, 36, 37] propose quantum Wasserstein distances of order 1 based on a distance between the vectors of the canonical basis. Another line of research by Golse, Mouhot, Paul and Caglioti [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48] arose in the context of the study of the semiclassical limit of quantum mechanics and defines a family of quantum Wasserstein distances of order 2 built on a quantum generalization of couplings. Such distances have been generalized to von Neumann algebras [49, 50, 51]. Ref. [52] proposes another quantum Wasserstein distance of order 2 based on couplings, with the property that each quantum coupling is associated with a quantum channel. The relation between quantum couplings and quantum channels in the framework of von Neumann algebras has been explored in [53]. The problem of defining a quantum Wasserstein distance of order 1 through quantum couplings has been explored in Ref. [54]. The quantum Wasserstein distance between two quantum states can be defined as the classical Wasserstein distance between the probability distributions of the outcomes of an informationally complete measurement performed on the states, which is a measurement whose probability distribution completely determines the state. This definition has been explored for Gaussian quantum systems with the heterodyne measurement in Refs. [55, 56, 57].
## Appendix B Related works on classical shadows
Tomography with classical shadows, summarized in section 3, has been proposed as a restricted version of the shadow tomography protocol originally developed by Aaronson [5]. In general, shadow tomography addresses the following problem: given a collection of \(m\) observables, how many copies of an \(n\)-qubit state \(\rho\) are necessary and sufficient to estimate their expectation values over \(\rho\) up to an error \(\epsilon\)? A crucial requirement is to avoid considering an exponential number of copies of the unknown state as done in standard quantum state tomography. Using post-selected learning [58], shadow tomography achieves a sample complexity \(\widetilde{O}((n\log^{4}m)/\epsilon^{4})\). However, the original shadow tomography protocol presents an exponential time complexity; in this respect, classical shadows provide a more efficient protocol [6]. Moreover, tomography with classical shadows has been analyzed in the presence of noise [59], extended to continuous variables quantum systems [60], characterized in terms of Bayesian analysis [61], and applied in several contexts [62, 63, 64, 18, 60]. A recent theoretical generalization of classical shadows, called _hybrid shadows_ [65], has been proposed. In this case, given an \(n\)-qubit state, some of the qubits are measured to store classical shadows and the entangled states of the remaining qubits are stored as quantum data. This technique can be used for providing more accurate estimates of expectation values at the cost of more quantum memory. 
Another recent generalization of classical shadow tomography has been proposed considering unitary ensembles where the probability distribution of the evolution unitaries is invariant under local-basis transformations, such as random unitary circuits and quantum Brownian dynamics [66]. Beyond classical shadows, there are other improvements of the shadow tomography. For instance, Badescu and O'Donnell [67], improved the sample complexity of shadow tomography to \(\widetilde{O}((n^{2}\log^{2}m)/\epsilon^{2})\) based on a procedure called _quantum hypothesis selection_ which can be viewed as an _agnostic learning_ of quantum states [4]. Shadow tomography with only allowed separable measurements is considered by Chen et al. [68], they proved that \(\widetilde{\Omega}(\min\{m,d\})\) copies of a \(d\)-dimensional quantum states are necessary for estimating \(m\) expectation values. This sample complexity matches to the upper bound \(\widetilde{O}(\min\{m,d\})\) showed in the first proposal of classical shadow tomography [6]. ## Appendix C Auxiliary Lemmas **Lemma C.1**.: _For any \(\Delta_{1}\in\mathcal{O}_{m}^{T},\Delta_{2}\in\mathcal{O}_{n}^{T}\), we have_ \[\|\Delta_{1}\otimes\Delta_{2}\|_{W_{1}\mathrm{loc}}\leq\|\Delta_{1}\|_{W_{1} \mathrm{loc}}\|\Delta_{2}\|_{1}.\] (C.1) Proof.: Using (2.14), we can write \[\|\Delta_{1}\otimes\Delta_{2}\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x\in[m+n ]}a_{x}:\frac{\|\mathrm{Tr}_{\Lambda^{c}}\Delta_{1}\otimes\Delta_{2}\|_{1}}{2 \,c_{|\Lambda|}}\leq\sum_{x\in\Lambda}a_{x}\ \forall\,\Lambda\subseteq[m+n]\right\}.\] (C.2) We can then separate \(\Lambda\) in two, one part acting on \(\Delta_{1}\) and another acting on \(\Delta_{2}\): \(\Lambda=\Lambda_{1}\cup\Lambda_{2}\). We can therefore write \[\|\Delta_{1}\otimes\Delta_{2}\|_{W_{1}\mathrm{loc}}=\min\left\{\sum_{x\in[m+n]} a_{x}:\frac{\left\|\mathrm{Tr}_{\Lambda_{1}^{c}}\Delta_{1}\right\|_{1}\left\| \mathrm{Tr}_{\Lambda_{2}^{c}}\Delta_{2}\right\|_{1}}{2\,c_{|\Lambda|}}\leq \sum_{x\in\Lambda}a_{x}\;\forall\,\Lambda\subseteq[m+n]\right\}.\] (C.3) Since \(\left\|\mathrm{Tr}_{\Lambda_{2}^{c}}\Delta_{2}\right\|_{1}\leq\|\Delta_{2}\|_{1}\), and \(c_{|\Lambda_{1}|}\leq c_{|\Lambda|}\), we get \[\|\Delta_{1}\otimes\Delta_{2}\|_{W_{1}\mathrm{loc}}\leq\min\left\{\sum_{x\in[m +n]}a_{x}:\frac{\left\|\mathrm{Tr}_{\Lambda_{1}^{c}}\Delta_{1}\right\|_{1}}{2 \,c_{|\Lambda_{1}|}}\,\|\Delta_{2}\|_{1}\leq\sum_{x\in\Lambda}a_{x}\;\forall \,\Lambda\subseteq[m+n]\right\}.\] (C.4) There is no longer any dependency on \(\Lambda_{2}\), which means \(\forall x\geq m+1,a_{x}=0\). Therefore we can extract \(\|\Delta_{2}\|_{1}\), \[\|\Delta_{1}\otimes\Delta_{2}\|_{W_{1}\mathrm{loc}} \leq\min\left\{\sum_{x\in[m]}a_{x}:\frac{\left\|\mathrm{Tr}_{ \Lambda_{1}^{c}}\Delta_{1}\right\|_{1}}{2\,c_{|\Lambda_{1}|}}\leq\sum_{x\in \Lambda_{1}}a_{x}\;\forall\,\Lambda_{1}\subseteq[m]\right\}\left\|\Delta_{2} \right\|_{1}\] \[=\|\Delta_{1}\|_{W_{1}\mathrm{loc}}\|\Delta_{2}\|_{1}.\] (C.5)
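As a concrete companion to the linear-programming form (C.2) used in the proof above (the moment formulation of Definition 2.6), the following sketch evaluates the local quantum \(W_{1}\) norm of a traceless Hermitian \(\Delta\) numerically with `scipy.optimize.linprog`. The helper names, the coefficient choice \(c_{k}=c^{k-1}/k\) and the brute-force enumeration of all subsets \(\Lambda\) are illustrative assumptions, and the exponential number of constraints restricts this to very few qubits.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def partial_trace(op, n, keep):
    """Reduce a 2^n x 2^n operator to the qubits listed in `keep` (trace out the rest)."""
    t = op.reshape((2,) * (2 * n))
    cur_n = n
    for q in sorted((q for q in range(n) if q not in keep), reverse=True):
        t = np.trace(t, axis1=q, axis2=q + cur_n)
        cur_n -= 1
    d = 2 ** len(keep)
    return t.reshape(d, d)

def local_w1_norm(delta, n, coeff):
    """Local quantum W1 norm of a traceless Hermitian `delta` via the LP in (C.2).

    `coeff(k)` returns the coefficient c_k of the local norm; one constraint is
    generated for every nonempty subset Lambda of the n qubits."""
    A_ub, b_ub = [], []
    for k in range(1, n + 1):
        for lam in combinations(range(n), k):
            reduced = partial_trace(delta, n, list(lam))
            trace_norm = np.abs(np.linalg.eigvalsh(reduced)).sum()
            row = np.zeros(n)
            row[list(lam)] = -1.0            # -sum_{x in Lambda} a_x <= -bound
            A_ub.append(row)
            b_ub.append(-trace_norm / (2 * coeff(k)))
    res = linprog(np.ones(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub))
    return res.fun

# toy usage: difference of two 2-qubit states, with c_k = c^{k-1}/k and c = 4
n, c = 2, 4.0
rho = np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex)            # |00><00|
psi = np.zeros(4, dtype=complex); psi[0] = psi[3] = 1 / np.sqrt(2)
sigma = np.outer(psi, psi.conj())                              # Bell state
print(local_w1_norm(rho - sigma, n, lambda k: c ** (k - 1) / k))
```

The nonnegativity bounds that `linprog` imposes by default on the variables \(a_{x}\) do not change the optimum, since the singleton constraints already force each \(a_{x}\) to be at least a nonnegative quantity.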
Classical shadows constitute a protocol for estimating the expectation values of M observables acting on O(1) qubits of an unknown n-qubit state, with a number of measurements that does not depend on n and grows only logarithmically with M. We propose a local version of the quantum Wasserstein distance of order 1 of [De Palma et al., IEEE Trans. Inf. Theory 67, 6627 (2021)] and show that the classical shadow obtained by measuring O(log n) copies of the state to be learned provides an accurate estimate with respect to this distance. We apply this result to quantum generative adversarial networks, showing that quantum access to the state to be learned is beneficial only when some prior information on that state is available.
2304.00092
DynamoPMU: A Physics Informed Anomaly Detection and Prediction Methodology using non-linear dynamics from $μ$PMU Measurement Data
The expansion in technology and attainability of a large number of sensors has led to a huge amount of real-time streaming data. The real-time data in the electrical distribution system is collected through distribution-level phasor measurement units referred to as $\mu$PMU which report high-resolution phasor measurements comprising various event signatures which provide situational awareness and enable a level of visibility into the distribution system. These events are infrequent, unschedule, and uncertain; it is a challenge to scrutinize, detect and predict the occurrence of such events. For electrical distribution systems, it is challenging to explicitly identify evolution functions that describe the complex, non-linear, and non-stationary signature patterns of events. In this paper, we seek to address this problem by developing a physics dynamics-based approach to detect anomalies in the $\mu$PMU streaming data and simultaneously predict the events using governing equations. We propose a data-driven approach based on the Hankel alternative view of the Koopman (HAVOK) operator, called DynamoPMU, to analyze the underlying dynamics of the distribution system by representing them in a linear intrinsic space. The key technical idea is that the proposed method separates out the linear dynamical behaviour pattern and intermittent forcing (anomalous events) in sequential data which turns out to be very useful for anomaly detection and simultaneous data prediction. We demonstrate the efficacy of our proposed framework through analysis of real $\mu$PMU data taken from the LBNL distribution grid. DynamoPMU is suitable for real-time event detection as well as prediction in an unsupervised way and adapts to varying statistics.
Divyanshi Dwivedi, Pradeep Kumar Yemula, Mayukha Pal
2023-03-31T19:32:24
http://arxiv.org/abs/2304.00092v1
DynamoPMU: A Physics Informed Anomaly Detection and Prediction Methodology using non-linear dynamics from \(\mu\)PMU Measurement Data ###### Abstract The expansion in technology and attainability of a large number of sensors has led to a huge amount of real-time streaming data. The real-time data in the electrical distribution system is collected through distribution-level phasor measurement units referred to as \(\mu\)PMU which report high-resolution phasor measurements comprising various event signatures which provide situational awareness and enable a level of visibility into the distribution system. These events are infrequent, unschedule, and uncertain; it is a challenge to scrutinize, detect and predict the occurrence of such events. For electrical distribution systems, it is challenging to explicitly identify evolution functions that describe the complex, non-linear, and non-stationary signature patterns of events. In this paper, we seek to address this problem by developing a physics dynamics-based approach to detect anomalies in the \(\mu\)PMU streaming data and simultaneously predict the events using governing equations. We propose a data-driven approach based on the Hankel alternative view of the Koopman (HAVOK) operator, called DynamoPMU, to analyze the underlying dynamics of the distribution system by representing them in a linear intrinsic space. The key technical idea is that the proposed method separates out the linear dynamical behaviour pattern and intermittent forcing(anomalous events) in sequential data which turns out to be very useful for anomaly detection and simultaneous data prediction. We demonstrate the efficacy of our proposed framework through analysis of real \(\mu\)PMU data taken from the LBNL distribution grid. DynamoPMU is suitable for real-time event detection as well as prediction in an unsupervised way and adapts to varying statistics. Anomaly detection; distribution grid; Hankel alternative view of the Koopman; micro-PMU measurements; prediction; sparse identification of nonlinear dynamics; unsupervised data-driven analysis. ## I Introduction ### _Background and Motivation_ The modern electrical distribution grid is equipped with many utility assets and equipment which include capacitor banks, tap-changing transformers, protection devices, dynamic loads, electric vehicles, and distributed energy resources [1]. The normal and abnormal switching operations of these assets can create various events in the system, and the behaviour of these events has different signature patterns [2]. It is an important aspect of capturing and detecting the events; allows awareness and enhances the visibility of the distribution grid [3], helps in diagnosing the health of the asset [4], analyzes the distribution-level oscillation occurring because of distributed energy resources [5], microgrid synchronization [6], and fault detection and analysis [7]. These analyses and event detection have been made possible with the integration of distribution-level phasor measurement units in smart grids named \(\mu\)PMU [8] as shown in Fig. 1. \(\mu\)PMU provides all three-phase measurements of voltage and current magnitude and phase angle at 120 readings per second. Using the enormous measurements of \(\mu\)PMU, data-driven techniques were found suitable as they automate the process of detection as well as prediction of events. The data-driven techniques help in identifying infrequent, uncertain and unscheduled occurring events. 
The example of anomalous events in voltage signal is shown in Fig. 2. In this work, we solve this dynamic problem of detecting anomalies in the \(\mu\)PMU data and simultaneously predicted the events occurring in the long term by using physics informed unsupervised data-driven method. ### _Summary of Technical Contributions_ We use the online available real-world dataset of \(\mu\)PMU measurement which enables advanced diagnostic and monitoring strategies in distribution systems. The dataset involves various unknown, infrequent and unscheduled natures of events. Fig. 1: \(\mu\)PMU installed in 8-bus distribution feeder system. We propose a physics-inspired dynamic approach which analyzes the measurement data without the requirement of prior knowledge about the event signatures. The key contributions of this paper are listed as follows: * A novel physics-inspired analytics method which represents the nonlinear dynamical electrical distribution system by a linear model based on the Hankel matrix alternative view of Koopman (HAVOK) theory. For detecting and analyzing the change in dynamical behaviour (i.e., anomalous events) in various linear inherent coordinates we identify the intermittent forcing in the dynamic system. The proposed method has the capability to represent complex dynamics and effectively simplify the dynamical pattern through the developed framework. The \(\mu\)PMU could easily be analyzed based on this dynamic system theory without making any prior knowledge about the system which shows the robustness to handle the data-inherent uncertainty. * It involves time-delay embedding theory and dynamic mode decomposition (DMD) combined with the Koopman operator; a linear infinite-dimensional operator. The finite approximation of the Koopman operator is the key phenomenon which evolves the inherent functions to acquire a similar dynamical behaviour as the original nonlinear dynamic response through the Koopman transformation. From that, we identify the different dynamical signatures or events in the data. * Accordingly, any dynamical signatures or patterns that deviate from the linear dynamical response are considered as the event in the \(\mu\)PMU data. The proposed method analyses the dynamical behaviour in the \(\mu\)PMU data thus we named it DynamoPMU. To check the effectiveness of the proposed event detection method compared to state-of-the-art methods in the literature. We computed various metrics scores which show that it outperforms the existing methods. This method only requires the \(\mu\)PMU measurement data without any information about the network model or prior labelling of the events. * Using the governing equations obtained while performing the anomaly detection, we can predict the \(\mu\)PMU measurements and anomalies simultaneously for the long term. This is achieved by using a sparse identification of nonlinear dynamics (SINDy), which prominently predict the measurements by considering the dynamical governing equations of the system. To train the model we use 10 days of \(\mu\)PMU data and predicted the measurements for the next two days. The obtained results give better metrics scores for the prediction which shows the reliability of the proposed method. ### _Literature Review_ Event detection has been explored significantly in the electrical distribution grid. In literature, data-driven methods have been proposed based on statistical methods [9], machine learning-based supervised, semi-supervised and unsupervised learning algorithms. 
Statistical methods involve the concept of absolute deviation around the median with consideration of dynamic window sizes [3]. Using null and alternative hypothesis test tools of statistics frequency deviation, including load variation and faults is detected and further detected high impedance fault (HIF) which is the hardest to detect [10]. In terms of metrics computed for event detection, the statistical method lacks performance in comparison to the proposed DynamoPMU method. The implementation of machine learning models got extensive attention because of the great amount of measurement data available. Using supervised and semi-supervised learning which requires either full labelling or partial labelling of the events has been implemented in [11, 12, 13] and many others. However, it is difficult to achieve prior labels for the events, thus the implementation of these methods is not suitable for real-world applications. Further, researchers also acknowledge the implication of deep learning unsupervised learning algorithms to overcome the gap of requirements of prior labelling. Available work focuses on detecting some particular events, such as frequency-related events [14, 15, 16], fault events [16], capacitor bank switching [17] or voltage-related events [18]. However, we propose an event detection method which covers the detection of a wide range of events. There are research papers available which can also detect a wide range of events using unsupervised deep learning algorithms which includes the implementation of Generative Adversarial Networks (GAN) [19, 20]. We have compared the performance of the method in terms of metrics and our proposed method shows better results for detecting the events. Another unsupervised method based on ensemble learning is proposed to develop a model for fast, scalable bad data/event detection for PMU [21]. These methods are suitable for offline event detection while the proposed method has the capability to be implemented for real-time streaming data. The proposed method has the capability to detect the events ahead of their actual occurrence and this could be utilized as an alarm signal to the operator for situational awareness. Further, this Fig. 2: Example of anomalies detected in voltage signal. approach using the past data simultaneously can predict the measurements as well as anomalies for the long term. In [22], they worked on real-time anomaly detection as well as simultaneous prediction. This method based on hierarchical temporal memory has the capability to learn online in one pass and adapt to changing statistics, but it cannot detect prior to the event occurrence and it has not detected many signature events. Thus, DynamoPMU analyzes the changes in the dynamic behaviour of the measurement data which allows it to capture various events significantly prior to their occurrence. The main approach used in the paper for anomaly detection is inspired by the HAVOK analysis, it has been utilized broadly in various areas such as biomedical signal processing for detecting anomalies in multi-variate EEG system [23], pathophysiological processes of obstructive sleep apnea [24], real-time control of robotic systems [25] and turbulence flow study [26, 27]. The method works efficiently in these fields thus considering its performance in other domains we utilized its implementation for detecting anomalies in the electrical distribution system. 
For accurate prediction, the implementation of SINDy has been found very effective in predicting the transmission dynamics of COVID-19 [28], discovering mechanistic equations for measles, chickenpox, and rubella [29], accurate prediction of thermal comfort in the vehicle cabin [30], prediction of blood glucose [31] and many others. It has also been used in the power grid to predict online voltage evolution to replace the static voltage-sensitivity analysis [32]. SINDY models have shown a superior performance when dealing with highly nonlinear, high-dimensional, multi-scale dynamical systems with limited training data and low computational burden. The paper is organized as follows: a description of the methodology is detailed in Section II with the process of how we detected the anomalies and predicted the measurements. Section III provides the analysis and discussion for the obtained results. Finally, Section IV concludes the paper. ## II Materials and Methods The integration of renewable energy sources, electric vehicles, demand response, and peer-to-peer energy trading changes the network's load profiles and configuration spontaneously. This complex interaction causes great uncertainties, and even bidirectional power flow in the distribution network, making distribution networks' behavioural response more non-linear and dynamic. The proposed framework is found to be suitable to analyze the dynamic response and understand the non-linear behaviour of the electrical distribution system. Let us consider a dynamic electrical distribution system in the following form: \[\frac{d}{dt}x(t)=f(x(t)) \tag{1}\] where \(x(t)\in\mathbb{R}^{n}\) is the state of system at time \(t\), \(f\) is a function which describe the dynamics of the state \(x\). Equation 1 represents a continuous-time dynamical system, while taking \(\mu\)PMU data in samples it is represented as: \[x_{k+1}=F(x_{k})=x_{k}+\int_{k\Delta t}^{(k+1)\Delta t}f(x(\delta))d\delta \tag{2}\] where, \(x_{k}=x(k\Delta t)\) represents the system's trajectory from equation 1 and \(F\) is a discrete-time propagator. ### _Koopman Operator Theory_ Traditionally, the geometric outlook of a dynamic system transverse trajectory which are dependent on fixed points, periodic orbits and attractors. On the other hand, a Koopman operator introduced in 1931, provides a better and alternative view which analyzes the measurements of state \(y=m(x)\). It is the more practical view for analyzing a nonlinear complex dynamical system through observation space, as measurement data is growing abundantly. Koopman operator \((\mathbb{K})\) is a linear representation of a nonlinear system by converting the n-dimensional state vector to an infinite dimensional state vector. 
Thus, it is defined as an infinite-dimensional linear operator that works on measurement functions \(m:G\rightarrow\mathbb{R}\) of state \(x\) and given as: \[\mathbb{K}_{m}=m\circ F \tag{3}\] where, \(\circ\) is a composition operator, which can be expanded as: \[\mathbb{K}_{m}(x_{k})=m(F(x_{k}))=m(x_{k}+1) \tag{4}\] The measurements in the given space are: \[m=\alpha_{1}m_{1}+\alpha_{2}m_{2}+\cdots+\alpha_{s}m_{s} \tag{5}\] after applying the Koopman operator, \(m\) remains in the same state space and is given as: \[\mathbb{K}_{m}=\beta_{1}m_{1}+\beta_{2}m_{2}+\cdots+\beta_{s}m_{s} \tag{6}\] where, \(\alpha_{s}\), \(\beta_{s}\in\mathbb{R}\), \(\forall s=1,\ldots,s\) are the coefficients for linear subspace for the observations \(m_{s}\) before and after application of Koopman operator for \(s-\)dimensional measurement subspace. The linear system that supports the Koopman observable is bounded to the subspace as: \[y_{k+1}=\mathbb{K}y_{k} \tag{7}\] \[y_{k}=[m_{1}(x_{k}),m_{2}(x_{k}),\ldots,m_{s}(x_{k})]^{T} \tag{8}\] where \(y_{k}\) is a measurement vector in the Koopman invariant subspace as shown in Fig.3. Thus, the Koopman operator expresses a nonlinear system into a linear system. However, finite-dimensional approximation of the Koopman operator is challenging [33]. Using dynamic mode decomposition (DMD), a best-fitted linear matrix which represents spatial measurements is implemented, but it possesses a large error in many nonlinear systems [34]. Another extended DMD was proposed but that is not reliable [35]. To overcome the drawback, intrinsic data-driven measurement coordinates are derived from the time history of the system measurement, namely eigen time-delay coordinate is proposed [33]. In this work, we have utilized the eigen time-delay coordinates by satisfying Taken's embedding theorem conditions. ### _Takens' embedding theorem for state space reconstruction_ Takens' embedding theorem suggests that we can enhance the measurements \(a(t)\) with time shifts \(a(t-\tau)\) known as delay coordinates such that the attractor of a dynamical system is an isomorphism of smooth manifolds to the original attractor under specific conditions. In accordance with the theorem, the trajectory is formed by points \(\hat{x_{i}}\) using the delay map as follows: \[\hat{x_{i}}=[p(x_{i}),p(x_{i+\tau}),\ldots,p(x_{i+(m-1)\tau})] \tag{9}\] \[\hat{x_{i}}=(a_{i},a_{i+\tau},\ldots,a_{i+(m-1)\tau}) \tag{10}\] \[i=1,\ldots,N-e+1 \tag{11}\] where \(\tau\) is the time lag, \(e\) is the embedding dimension and \(N\) length of time-series. ### _Hankel alternative view of Koopman (HAVOK) analysis_ The measurement time-series \(a(t)\) from \(\mu\)PMU is taken and Hankel matrix \(\mathbb{H}_{m}\) is formed under the assumption that the conditions of the Taken's embedding theorem are satisfied [36]. Then we compute eigentime delay coordinates by performing singular vector decomposition (SVD) of the Hankel matrix \(\mathbb{H}_{m}\). Hankel Matrix, \[\mathbb{H}_{m}=\begin{bmatrix}a(t_{1})&a(t_{2})&\ldots&a(t_{p})\\ a(t_{2})&a(t_{3})&\ldots&a(t_{p+1})\\ \vdots&\vdots&\ddots&\vdots\\ a(t_{q})&a(t_{q+1})&\ldots&a(t_{T})\end{bmatrix} \tag{12}\] where, time points \(t_{i+1}=t_{i}+\tau(i=1,\ldots,T-1)\), \(q\) represents the number of points in the trajectory and \(p\) represents window length which means the longest periodicity attained by the Hankel matrix. 
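To make the construction above concrete, the following NumPy sketch builds the Hankel matrix of Eq. (12) from a scalar measurement series, performs the SVD of Eq. (13) introduced next, and extracts the eigen time-delay coordinates whose last retained column plays the role of the forcing term. The synthetic "voltage" signal, the window sizes and the three-sigma flagging (described later in this section) are illustrative assumptions rather than the exact implementation used in the paper.

```python
import numpy as np

def hankel_matrix(a, q):
    """Stack q delayed copies of the series a(t) as rows (Eq. 12); the number of
    columns is p = T - q + 1, where T is the length of the series."""
    a = np.asarray(a, dtype=float)
    p = len(a) - q + 1
    return np.stack([a[i:i + p] for i in range(q)])

def eigen_time_delay_coordinates(a, q, r):
    """SVD of the Hankel matrix (Eq. 13); returns the first r columns of V
    (the eigen time-delay coordinates) and the singular values."""
    U, s, Vt = np.linalg.svd(hankel_matrix(a, q), full_matrices=False)
    return Vt[:r].T, s

# toy usage on a synthetic "voltage" series sampled at 120 Hz with a short sag
t = np.arange(0, 60.0, 1.0 / 120.0)
v = 1.0 + 0.02 * np.sin(2 * np.pi * 0.2 * t)      # slow, regular variation
v[3600:3620] -= 0.05                               # injected anomalous event
V, s = eigen_time_delay_coordinates(v, q=100, r=10)
v_r = V[:, -1]                                     # candidate forcing coordinate
# three-sigma rule applied to the forcing coordinate to flag candidate events
flags = np.abs(v_r - v_r.mean()) > 3.0 * v_r.std()
print("number of flagged samples:", int(flags.sum()))
```

In the full DynamoPMU pipeline, the linear dynamics on the first \(r-1\) coordinates and the intermittent forcing term are then identified through the regression of Eq. (15) below.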
After implementing SVD we get, \[\mathbb{H}_{m}=U\Sigma V^{T} \tag{13}\] The columns from SVD obtained are \(U\) and \(V\) which are exhibited hierarchically in decreasing order. Generally, \(\mathbb{H}_{m}\) acquires a low-rank approximation for the first \(r-th\) columns of \(U\) and \(V\) which raise a measurement subspace that is invariant to the Koopman variant for the states. Thus we can rewrite the Hankel matrix as: \[\mathbb{H}_{m}=\begin{bmatrix}a(t_{1})&\mathbb{K}a(t_{1})&\ldots&\mathbb{K}^ {p-1}a(t_{1})\\ \mathbb{K}a(t_{1})&\mathbb{K}^{2}a(t_{1})&\ldots&\mathbb{K}^{p}a(t_{1})\\ \vdots&\vdots&\ddots&\vdots\\ \mathbb{K}^{q-1}a(t_{1})&\mathbb{K}^{q}a(t_{1})&\ldots&\mathbb{K}^{T-1}a(t_{1} )\end{bmatrix} \tag{14}\] Hankel matrix's rows and columns are well estimated by the first \(r-th\) columns and rows of \(U\) and \(V\) respectively, which are referred to as eigen time-series which provides a Koopman-invariant measurement system. The first \(r\) columns of \(V\) give a time series of the magnitude for each of the columns of \(U\Sigma\) in the data. By plotting the first three columns of \(V\), we obtain an embedded attractor as shown in Fig. 4. The SINDy algorithm is applied to achieve a forced linear system from the delay coordinates and obtain a good linear fitting for the initial \(r-1\) variables and a bad fit for the \(r-th\) variable. Basically, eigen time-delay with the Koopman operator performs a linear regression where a linear model built on the first \(r-1\) variables in \(V\) and \(v_{r}\) as an intermittent forcing term which is given as: \[\frac{d}{dt}v(t)=Av(t)+Bv_{r}(t) \tag{15}\] \[\frac{d}{dt}\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{r-1}\\ v_{r}\end{bmatrix}=\begin{bmatrix}A&B\\ 0&0\end{bmatrix}\begin{bmatrix}v_{1}\\ v_{2}\\ \vdots\\ v_{r-1}\\ v_{r}\end{bmatrix} \tag{16}\] where \(v=[v_{1},v_{2},\ldots,v_{r-1}]^{T}\) represents the first \(r-1\) eigen time-delay coordinates. \(v_{r}\) represented as a forcing in the linear dynamical system means nonlinear dynamics in the system. The statistics of \(v_{r}(t)\) are non-Gaussian in nature, its long tail corresponds to rare events that drive a lobe switching in the dynamical system. Although statistics of \(v_{r}(t)\) is not sufficient in identifying the high-frequency bursts, thus dynamics splitting concept is applied to chaotic systems, in which the Koopman operators have a continuous spectrum which is given as: \[V_{r}=\begin{bmatrix}v^{(1)}&v^{(2)}&\ldots&v^{(q-1)}\\ v^{(1)}_{r}&v^{(2)}_{r}&\ldots&v^{(q-1)}_{r}\end{bmatrix} \tag{17}\] \[V^{{}^{\prime}}_{r}=\begin{bmatrix}v^{(2)}&v^{(3)}&\ldots&v^{(q)}\\ v^{(2)}_{r}&v^{(2)}_{r}&\ldots&v^{(q)}_{r}\end{bmatrix} \tag{18}\] \[\mathbb{A}\approx=V^{{}^{\prime}}_{r}V^{{}^{\prime}}_{r} \tag{19}\] where, \(V^{{}^{\prime}}_{r}\) is the 1-step time advanced eigen time-delay coordinates of \(V_{r}\) and \(V^{{}^{\prime}}_{r}\) is the pseudo inverse of \(V_{r}\) computed using SVD. The matrices \(V_{r}\) and \(V^{{}^{\prime}}_{r}\) are linkable by a best fitting operator \(\mathbb{A}\) which minimize the Frobenius norm error \(||V^{{}^{\prime}}_{r}-\mathbb{A}V_{r}||\). Further, sparse identification of nonlinear dynamical systems (SINDy) [37] is implemented to obtain the first eigen decomposition of \(\mathbb{A}\). Finally, the thresholding is applied to the values of forcing operators using the three-sigma rule [38], which states that if values go beyond \(\mu\pm 3\sigma\), those values are considered anomalies in the measurements. Fig. 
3: Koopman operator linearizing a nonlinear system using an observable measurement function \(m\). ### _SINDY-based measurements and anomaly prediction_ From the available measurements of \(\mu\)PMU dynamic data set of the distribution system, the SINDY identifies fully nonlinear dynamical terms using a modified version of equation 1 given as: \[\dot{X}=\Theta^{T}(X) \tag{20}\] where \(X\) is a Hankel matrix given in equation 12. The term \(\Theta^{T}\) is a library of the candidate dynamics which enhances the performance of the SINDY algorithm [39]. In the electrical distribution system, the library of candidate dynamics is chosen which comprises a mixture of nonlinear trigonometric functions such as \(\sin{(\delta-\theta)}\) or \(\cos{(\delta-\theta)}\). Using a sparse regression algorithm, sparse coefficients \(\zeta_{k}\) are obtained from the initial values \(\hat{\zeta_{k}}\) given as: \[\zeta_{k}=argmin\zeta_{k}||\dot{X}_{k}-\hat{\zeta_{k}}(X)\Theta^{T}||_{2}+ \alpha||\hat{\zeta_{k}}||_{1} \tag{21}\] where, \(\dot{X}_{k}\) represents the \(k-th\) row of \(\dot{X}\). The value of \(\alpha\) is chosen such that the prediction model maintain complexity and accuracy. ### _Source of \(\mu\)PMU datasets_ For anomaly detection and prediction in \(\mu\)PMU measurements using the proposed DynamoPMU, we use an open-source dataset provided by Lawrence Berkeley National Laboratory's (LBNL) distribution grid [40]. This is the first dataset of \(\mu\)PMU installed in an electrical distribution grid which is available for research. We chose a dataset of \(\mu\)PMU installed in a \(7.2\) kV grid, named a6bus1. The used \(\mu\)PMU is PQube3 which measures the three-phase voltage and current phasor and magnitudes at 512 samples per cycle, 60 cycles per second. The \(\mu\)PMUs measure phase angle and output synchrophasor data at 120 Hz frequency. The available data is in millisecond resolution for analysis we have re-sampled at second intervals for 12 days and 13 hours (data points 1,083,600). The events that occurred are of long duration, thus we can easily capture them in resampled data. The voltage and current magnitude are considered to detect the anomaly, however, the phase angles of voltage and current are often not used directly because of leading frequency fluctuations. Thus, using voltage and current angle measurements we computed the power factor and considered it as a feature [19]. ### _Evaluation Metrics_ For validating the performance of the proposed DynamoPMU to detect and predict the anomalies in a dynamic electrical distribution system. Metrics for analyzing the performance of the proposed algorithm for detecting anomalies are given: * Precision- Evaluate the fraction of correctly classified Fig. 4: An illustration of the proposed Hankel alternative view of Koopman operator-based analytics framework for anomaly detection and prediction in \(\mu\)PMU measurements. The measurements with delay coordinates are used to create a Hankel matrix which is further modified in terms of the Koopman operator and then Singular value decomposition is applied. From the achieved components, eigen time-delay coordinates of “\(V\)” are analyzed by separating the first eigen time-delay coordinates \(V_{r-1}\) that mimic the pattern similar to the input time series by providing the best-fit linear dynamical model and the last eigen time-delay coordinate \(V_{r-1}\) is considered as stochastic input that intermittently forces the first \(r-1\) variables. 
Here, we decomposed the measurement data into linear dynamics with intermittent forcing terms. events among the positively classified events. \[Precision=\frac{TP}{(TP+FP)}\] (22) * Recall- Evaluate total relevant events classified correctly. \[Recall=\frac{TP}{(TP+FN)}\] (23) * F1 Score- It is a harmonic mean of the precision and recall. \[F1Score=\frac{2*Precision*Recall}{(Precision+Recall)}\] (24) * Accuracy- calculated in terms of positives and negatives as follows: \[Accuracy=\frac{TP+TN}{(TP+TN+FP+FN)}\] (25) * Area under the ROC Curve (AUC)- It is an average of true positive rates over all possible values of the false positive rate. * Matthews's Correlation Coefficient (MCC)- It is the most reliable statistical metric which provides a good score only if the event detection is appropriate in all of the four measures of the confusion matrix. Mathematically, it is expressed as: \[MCC=\] \[\frac{(TP\times TN-FP\times FN)}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}\] (26) where, TP = True Positives, TN = True Negatives, FP = False Positives, and FN = False Negatives. Metrics for analyzing the performance of the proposed algorithm for predicting anomalies are given: * \(R^{2}\) Score- It is known as the coefficient of determination, which measures how well a statistical model predicts an outcome. The best possible score is 1.0 and calculated as: \[R^{2}(y,\hat{y})=1-\frac{\sum_{i=1}^{n}(y_{i}-\hat{y_{i}})}{\sum_{i=1}^{n}(y_ {i}-\bar{y_{i}})}\] (27) \(\hat{y_{i}}\) is the predicted value, \(y_{i}\) is the actual value of \(i-th\) sample and \(\bar{y_{i}}\) is the mean of true values. * RMSE- Root-mean-square error is a standard deviation of prediction errors or residuals. It is computed as: \[RMSE(y,\hat{y})=\frac{1}{N_{s}}\sum_{i=0}^{N_{s}-1}(y_{i}-\hat{y_{i}})^{2}\] (28) where, \(N_{s}\) is the number of samples. * Explained variance score- It explains the dispersion of errors of a given dataset. The best possible score is 1.0 and computed as: \[Variance(y,\hat{y})=1-\frac{Var(y-\hat{y})}{Var(y)}\] (29) * MAE- The mean absolute error function computes a risk metric corresponding to the expected value of the absolute error loss or \(l\)-norm loss. \[MAE(y,\hat{y})=\frac{1}{N_{s}}\sum_{i=0}^{N_{s}-1}|y_{i}-\hat{y_{i}}|\] (30) ## III Results and Discussion The proposed DynamoPMU algorithm for anomaly detection and prediction is applied to 1 million measurements over 12 days of real-world \(\mu\)PMU data. For analysis, we considered the current, voltage and power factor measurements of phase-A. Anomaly is detected in the available dataset and metrics Fig. 5: Current, Voltage and Power Factor measurements fed to the proposed DynamoPMU, obtained the forcing operator which suggests the intermittency in the measurements, further implemented the 3-sigma rule to detect the anomalies. are obtained. Then, ten days of data are used for training the prediction model and two days of data are used to test its performance. ### _Anomaly Detection_ All three features' measurements are fed as input to the proposed method. Time-delay coordinates are applied to the time-series measurements with the Koopman operator which provides the first Eigen-time delay coordinates. Then performed singular value decomposition with delay embedding which separates out the linear pattern and forcing operator (rare events). Further, applied the threshold using the three-sigma rule and obtained the anomaly in the data. The obtained result in Fig. 
5 shows the anomaly detected in current, voltage and power factor measurements of \(\mu\)PMU. We can observe that the fluctuating values of the Forcing Operator are obtained, which are further passed through the three-sigma rule for capturing the anomalous events in the dataset. The existence of anomaly is shown with magnitude 1, otherwise, its value is 0. The proposed method has detected the various events as shown in Fig. 6 which include capacitor bank switching, inrush current conditions, change in tappings of the transformer, and many other events a few milliseconds before their occurrence. Using the proposed method on Day-0 at time 18:32:54, inrush current condition is detected; on Day-3 at time 13:22:20, change in transformer tappings is detected; and on Day-5 at time 10:18:27, switching capacitor bank condition is detected. Thus, we can use the proposed method for real-time streaming data and detect the events prior to their occurrence such that an alarm or awareness can be raised to deal with them. Also, the lobe switching in the embedded attractor for current and voltage is shown in Fig. 7, which represents the active and inactive status of the forcing operator in a 3D view.
Fig. 6: Types of anomalies detected in voltage and current measurements; (a) zoom out view of the sudden voltage drop and current increment due to detected inrush current occurred on Day-0, time 18:32:54, (b) zoom out view of capacitor bank switching condition detection with an increase in voltages and sudden increase in currents for all three-phases observed on Day-5, time 10:18:27, and (c) zoom out view of change in the tapping of the transformer event detection with the drop in voltages, no changes in current for all three-phases as observed on Day-3, time 13:22:20.
Fig. 7: Lobe switching in embedded attractor for current and voltage measurements.
The performance of the proposed method is compared with the existing method using metrics which help in analyzing the performance of the anomaly detection algorithm. 
8, we can see the predicted voltage for two days using the proposed algorithm and we can say that actual and predicted voltage measurement follows the same pattern. To visualize it more effectively, we have zoomed out and shown the data measurements from 6001-6020 data points in Fig. 8. Further, we evaluated the performance of the prediction model using metrics, from Table II we can observe that the obtained values shows the effectiveness of the proposed algorithm. The \(R^{2}\) and expected variance scores values are near 1 which shows that the prediction of the measurement data is done effectively and we can rely on the model. For real-time implementation for the detection of anomalies, the proposed method is suitable to be integrated into the microcontroller of a \(\mu\)PMU device or an AI Accelerator attached to it that brings situational awareness and gives an alarm signal to avoid unnecessary trippings. Also, the algorithm is flexible to be integrated into the edge or cloud computing devices for anomaly prediction in the long term. ## IV Conclusion A novel unsupervised physics-informed dynamical approach is proposed called DynamoPMU, to detect the anomaly in real-time and simultaneously predict the events in power distribution systems. The proposed framework, based on Koopman theory and Hankel matrix, reconstructs a complex system in the linear intrinsic space where the nonlinear dynamics are expressed in a linear model and intermittent forcing operator. This intermittency represented anomalous events, which were scrutinized and detected in real-time without prior knowledge. We also obtained the governing equations which were utilized to predict future occurring events that help us in handling the events to avoid any damage to the assets and human lives. Sparse identification of the nonlinear dynamical regression model was used for simultaneous event detection. The DynamoPMU is implemented on the online available \(\mu\)PMU dataset and in terms of performance matrices it was observed that we achieve better scores in comparison to the existing algorithms for anomaly detection as well as prediction of the events. In future work, anomaly detection could be utilized to monitor the asset's health diagnostics and distribution-level oscillation analysis. The proposed method would serve as a motivation for research to implement physics inspired dynamical approach on electrical distribution grids for other real-time applications. Fig. 8: Prediction of the voltage measurements with anomaly detection using DynamoPMU for two days and zoom out the voltage signal from 6001-6020 to visualize the variation in actual and predicted voltage.
With the advancement of technology and the growing availability of sensors, massive volumes of real-time streaming data are being generated. In electrical distribution systems, real-time data are collected through distribution-level phasor measurement units, called $\mu$PMUs, which provide high-resolution phasor measurements comprising diverse event signatures and enable situational awareness and visibility into the distribution system. Because these events are rare, unscheduled, and uncertain, it is challenging to investigate, detect, and predict their occurrence. In electrical distribution systems it is difficult to find an explicit evolution function that describes the complex, nonlinear, and non-stationary signature patterns of the events. To tackle this problem, this paper develops a physics-informed dynamical approach, based on the governing equations, for anomaly detection and prediction of events in $\mu$PMU streaming data.
2309.11163
Composition-dependent absorption of radiation in semiconducting MSi2Z4 Monolayers
The recent synthesis of MoSi2N4 material, along with theoretical predictions encompassing the entire family of chemical analogs, has opened up a new array of low-dimensional materials for a diverse range of optoelectronic and photovoltaic applications. In this study, we conducted state-of-the-art many-body first-principles calculations to analyze the quasi-particle electronic structure of the material class MSi2Z4 (where M = Mo, W, and Z = N, P, As, Sb). All monolayers display a direct band gap at the K point, with the exception of MoSi2N4. In tungsten-based compounds, the fundamental gap can be adjusted over a significantly broader energy range compared to their molybdenum-based counterparts. Additionally, with increasing atomic weight of Z, both the band gap and the exciton binding energy decrease. A noteworthy feature is the absence of a lateral valley ({\Lambda} or Q) near the conduction band minimum, indicating potentially higher photoluminescence efficiencies compared to conventional transition-metal dichalcogenide monolayers. The optical spectra of these materials are predominantly characterized by tightly bound excitons, leading to an absorption onset in the visible range (for N-based) and in the infrared region (for others). This diversity offers promising opportunities to incorporate these materials and their heterostructures into optoelectronic devices, with tandem solar cells being particularly promising.
Muhammad Sufyan Ramzan, Tomasz Woźniak, Agnieszka Kuc, Caterina Cocchi
2023-09-20T09:20:11
http://arxiv.org/abs/2309.11163v1
# Composition-dependent absorption of radiation in semiconducting MSi\({}_{2}\)Z\({}_{4}\) monolayers ###### Abstract The recent synthesis of MoSi\({}_{2}\)N\({}_{4}\) material, along with theoretical predictions encompassing the entire family of chemical analogs, has opened up a new array of low-dimensional materials for a diverse range of optoelectronic and photovoltaic applications. In this study, we conducted state-of-the-art many-body first-principles calculations to analyze the quasi-particle electronic structure of the material class MSi\({}_{2}\)Z\({}_{4}\) (where M = Mo, W, and Z = N, P, As, Sb). All monolayers display a direct band gap at the K point, with the exception of MoSi\({}_{2}\)N\({}_{4}\). In tungsten-based compounds, the fundamental gap can be adjusted over a significantly broader energy range compared to their molybdenum-based counterparts. Additionally, with increasing atomic weight of Z, both the band gap and the exciton binding energy decrease. A noteworthy feature is the absence of a lateral valley (\(\Lambda\) or Q) near the conduction band minimum, indicating potentially higher photoluminescence efficiencies compared to conventional transition-metal dichalcogenide monolayers. The optical spectra of these materials are predominantly characterized by tightly bound excitons, leading to an absorption onset in the visible range (for N-based) and in the infrared region (for others). This diversity offers promising opportunities to incorporate these materials and their heterostructures into optoelectronic devices, with tandem solar cells being particularly promising. ## Introduction Graphene was successfully exfoliated from its bulk form in 2004,[1] sparking a surge of interest in other two-dimensional (2D) materials, which offer a variation of electronic properties.[2, 3, 4, 5, 6, 7, 8, 9, 10, 11] The current catalogue of 2D materials offers a diverse range of insulating, semiconducting, semi-metallic, and metallic monolayers (MLs).[12, 13, 14, 15, 16, 17] Among the reported 2D materials, transition metal dichalcogenides (TMDCs) have been extensively studied due to their direct band gaps, high carrier mobilities, and stability in ambient conditions.[18, 4, 19] However, their optoelectronic properties are limited by the presence of a lateral valley, so-called \(\Lambda\) or Q, near the conduction band minimum (CBM), which provides non-radiative recombination sites for the excited carriers, thus, reducing the photoluminescence efficiency.[20, 21, 22, 23] Several methods have been proposed to suppress this non-radiative recombination channel and, thus, enhance the quantum yield of TMDC MLs,[24, 25, 22] but despite these efforts, additional research is needed to make them ready for commercial optoelectronic applications. One possible strategy to overcome the current limitations of these materials is to interface them with other organic and inorganic semiconductors with a smaller band gap to form so-called "tandem stacks" that lead to improved solar cell efficiency by means of photon upconversion.[26, 27] However, despite the efforts devoted to improving the performance of TMDCs, the search for other 2D semiconductors is still an active area of research. Recently, a new 2D material, MoSi\({}_{2}\)N\({}_{4}\), with a crystal structure analogous to TMDCs, was synthesized using the chemical vapor deposition method [28].
It belongs to \(P\overline{6}m2\) space group and has a thickness of seven atomic planes with MoN\({}_{2}\) layer sandwiched between two layers of SiN, see **Figure 1**. MoSi\({}_{2}\)N\({}_{4}\) has an indirect bandgap of 1.94 eV and exhibits excitonic transitions at 2.21 eV (the so-called A resonance) and 2.35 eV (B resonance) originating from the spin-splitting of the valence band maximum (VBM), similar to TMDCs.[29, 30, 28] Moreover, MoSi\({}_{2}\)N\({}_{4}\) has high electron (270 cm\({}^{2}\) V-\({}^{1}\)s-\({}^{1}\)) and hole (1200 cm\({}^{2}\) V-\({}^{1}\)s-\({}^{1}\)) mobilities resulting in a high on-off ratio of 4000 at 77 K in a field-effect transistor.[28] A recent theoretical study has demonstrated that its band edges are well protected by the local environment.[31] Moreover, density functional theory calculations predicted an entire class of 2D analogs, with a general formula of MA\({}_{2}\)Z\({}_{4}\), where M indicates Group-2 or transition metals, A stands for elements belonging to Group-13 or -14, and Z species of Group-15 or -16. Similar to TMDCs, different structural phases of \(\mathrm{MSi_{2}Z_{4}}\) have also been proposed and investigated.[32] So far, numerous ground-state calculations for several members of this new material class have been reported.[33, 34, 35, 28, 36] Most of these studies showed that \(\mathrm{MA_{2}Z_{4}}\) MLs have direct band gaps at the high-symmetry point K, with the exception of \(\mathrm{MoSi_{2}N_{4}}\) and \(\mathrm{WSi_{2}N_{4}}\) that both exhibit indirect band gaps with VBM at \(\Gamma\) point. It is worth highlighting that, unlike the TMDCs, \(\mathrm{MSi_{2}Z_{4}}\) do not have the \(\Lambda\) valley near the CBM, which is responsible for detrimental electron-hole recombination in TMDCs, as discussed above. The absence of the \(\Lambda\) valley between \(\Gamma\) and K suggests these materials as potential candidates for optoelectronics and photovoltaics. However, a detailed investigation of the electronic and optical characteristics of these systems based on state-of-the-art _ab initio_ methods is necessary to substantiate this intuitive claim. To date only a few reliable studies of this kind are available in the literature.[37, 38, 39, 31] In this work, we present a systematic study of the electronic and excitonic properties of \(\mathrm{MSi_{2}Z_{4}}\) (M = Mo, W and Z = N, P, As, and Sb) family of MLs carried out in the framework of density functional theory (DFT) and many-body perturbation theory (GW approximation and Bethe-Salpeter equation). We analyze the trends obtained at varying composition for the electronic and optical band gaps as well as for the excitons and their binding energies, identifying trends that enable engineering of these materials and their properties to maximize their optical performance. We find that the optical onset of the N-based MLs is in the visible region, while for the others, it is shifted to the infra-red (IR), suggesting intriguing perspectives for efficient tandem solar cells based on vertically stacked heterostructures of these systems. ### 2.1 Results and Discussion ### 1. Structural properties \(\mathrm{MSi_{2}Z_{4}}\) ML systems have a hexagonal crystal structure belonging to the \(\mathrm{D_{2h}}\) point group. Its unit cell includes seven atomic layers, with \(\mathrm{MZ_{2}}\) layer sandwiched between two Si-Z layers (see **Figure 1a**, b). 
The optimized in-plane lattice constants \(a=b\) (see **Table 1**) are sensitive to the Z elements and increase monotonically as their atomic mass increases. Similar to the TMDCs in the 2H phase, MLs with the same Z element, but different metal atom M, have nearly identical lattice constants, such that, e.g., MoSi\({}_{2}\)P\({}_{4}\) and WSi\({}_{2}\)P\({}_{4}\) have the same lattice constant equal to 3.454 A. This interesting feature promises the creation of Mo- and W- based heterostructures without lattice mismatch. Our calculated lattice constants agree well with earlier experimental and theoretical reports[31, 36, 38]. All the MLs are predicted to be mechanically stable[31, 36, 28]. The first Brillouin zone is hexagonal (see **Figure 1c**) with the usual high symmetry points \(\pm\)K indicating valleys with opposite polarization (**Figure 1d**). ### Electronic properties The band structures of the considered MSi\({}_{2}\)Z\({}_{4}\) monolayers are shown in **Figure 2**. These results, obtained at the PBE level including SOC, exhibit an underestimated band gap, due to the known shortcomings of this semi-local functional. Yet, the qualitative picture provided by these results is consistent with the one delivered by more sophisticated methods such as GW, see Figure S7 Figure 1: a - b Top and side views of the MSi\({}_{2}\)Z\({}_{4}\) ML atomic structure with the unit cell boundaries marked by dashed lines. (c) Bruillion zone with the path connecting high symmtery points indicated in light blue. (d) Schematic depiction of \(\pm\)K valley polarization. and S8. The VBM and CBM of all monolayers are situated at the K point, thus giving rise to direct band gaps, except for MoSi\({}_{2}\)N\({}_{4}\) and WSi\({}_{2}\)N\({}_{4}\) which have an indirect band gap with the VBM at the \(\Gamma\) point. In these materials, the highest valence band has a maximum at the K point 234 meV (for MoSi\({}_{2}\)N\({}_{4}\)) and 50 meV (for WSi\({}_{2}\)N\({}_{4}\)) below the VBM. In all other monolayers, the K valley is always higher than the \(\Gamma\) valley and their difference increases for heavier Z elements. We also notice a dependence of the SOC splitting of the valence and conduction bands on the composition of the MLs. In particular, heavier Z elements increase the VBM splitting, leading to an overall decrease of the band gap. The conduction-band splitting is only noticeable for MSi\({}_{2}\)As\({}_{4}\) and MSi\({}_{2}\)Sb\({}_{4}\), but it is still below 35 meV even for the latter system including the heavier element Sb. The corresponding values of band gaps and spin splitting are given in **Table 1**. The geometrical and electronic parameters and their trends agree with previous studies[31, 32, 36]. 
\begin{table} \begin{tabular}{c c c c c c c c} & & & SOC- & \multicolumn{3}{c}{Band gap (eV)} & B.E. \\ System & \(d\) (\AA) & \(a=b\) (\AA) & splitting of & & & & \\ & & & \(v/c\) (meV) & PBE & GW & Optical & (eV) \\ MoSi\({}_{2}\)N\({}_{4}\) & 7.0 & 2.900 & 130/3 & 1.781 (2.015) & 2.720 & 2.470 & 0.389 \\ MoSi\({}_{2}\)P\({}_{4}\) & 9.4 & 3.454 & 138/4 & 0.620 & 1.067 & 0.859 & 0.208 \\ MoSi\({}_{2}\)As\({}_{4}\) & 9.9 & 3.597 & 181/16 & 0.528 & 0.881 & 0.693 & 0.188 \\ MoSi\({}_{2}\)Sb\({}_{4}\) & 10.9 & 3.879 & 226/25 & 0.263 & 0.495 & 0.380 & 0.115 \\ WSi\({}_{2}\)N\({}_{4}\) & 7.0 & 2.889 & 400/10 & 2.110 (2.160) & 3.047 & 2.624 & 0.423 \\ WSi\({}_{2}\)P\({}_{4}\) & 9.4 & 3.454 & 439/6 & 0.300 & 0.652 & 0.452 & 0.200 \\ WSi\({}_{2}\)As\({}_{4}\) & 9.9 & 3.599 & 503/25 & 0.211 & 0.467 & 0.291 & 0.176 \\ WSi\({}_{2}\)Sb\({}_{4}\) & 10.9 & 3.884 & 510/19 & 0.031 & 0.178 & 0.019 & 0.159 \\ \end{tabular} \end{table} Table 1: Optimized lattice constants \(a=b\) of the hexagonal unit cells of the MSi\({}_{2}\)Z\({}_{4}\) MLs, layer thickness \(d\), SOC-induced splitting of the highest valence band (\(v\)) and lowest conduction band (\(c\)) at the K point, PBE and GW gap (direct gap in parenthesis if the fundamental gap is indirect), as well as the optical band gap corresponding to the lowest bright excitation predicted by the BSE, and its binding energy (B.E.). The most striking feature in the electronic structures of MSi\({}_{2}\)Z\({}_{4}\) is the absence of the \(\Lambda\) valley between the high-symmetry points \(\Gamma\) and K near the CBM, in contrast to conventional TMDCs, [18, 19, 40, 41] see Figure 2 and S1. It is worth noting that a feature analogous to the \(\Lambda\) valley is still present in the band structure of the MSi\({}_{2}\)N\({}_{4}\) (see Figure 2), but it is energetically much higher than the CBM at K compared to the \(\Lambda\)-valley in the TMDCs. The residual presence of this band minimum in the conduction region of the MSi\({}_{2}\)N\({}_{4}\) can be explained by the largely different electronegativities of Si (1.8) and N (3.0) compared to Si and the heavier Z elements. For the same reason, an additional valley also appears at the M point, which is dominated by the Si and N p orbitals (see Fig. S5 in ref. [36]). For W-based MLs, these two valleys are closer to the CBM than in the Mo-based ones. In all other MLs, with heavier Z elements than N, the \(\Lambda\) and M valleys disappear and a new valley appears (CBM+1) above the CBM, composed of \(p\) and \(d\) orbitals of the Z and M atoms, respectively.[36] Furthermore, with heavier Z elements, the spin-splitting of CBM+1 valley increases and the energy difference between CBM and CBM+1 valley at K decreases. The \(\Lambda\) valley, when energetically close to the K valley, provides nonradiative recombination channels, and hence, suppresses the photoluminescence quantum yield.[42, 43, 41, 25] Due to the absence of this feature in MSi\({}_{2}\)Z\({}_{4}\) monolayers, excellent optical performance of these systems can be anticipated. It is worth noting that for the Mo-based monolayers, the dispersion of valence valleys at the K point changes insignificantly when going towards heavier Z atoms, but the dispersion of conduction valleys at K drastically increases from N to Sb. As mentioned above, for all Z except N, the CBM+1 valley at K moves downward towards CBM for heavier Z atoms.
This shift is accommodated by increasing the dispersion of CBM valley. As a result, the larger is the shift of CBM+1 valley, the larger is the dispersion of CBM at K. However, for W-based MLs, the change in the dispersion of CBM valley is only prominent in case of ML WSi\({}_{2}\)Sb\({}_{4}\). This discrepancy can be understood by observing the splitting of CBM valley between K-\(\Gamma\) and K-M. This splitting increases with heavier Z elements and protects the dispersion CBM valley by deforming dispersion of CBM+1 valley. For ML WSi\({}_{2}\)Sb\({}_{4}\), due to large splitting of both CBM and CBM+1 valleys, the dispersion of both valleys changes significantly. Next, the elemental composition of frontier bands was analyzed for an exemplary system (ML WSi\({}_{2}\)P\({}_{4}\)) which revealed that VBM and CBM are mainly composed of different orbitals of W (M-) atoms, see **Figure S4**. The isosurface of second highest valley (VBM -1) at \(\Gamma\) point is mainly composed by dz\({}^{2}\) orbitals of W atoms with a considerable portion of isosurface on the A-Z bond positioned along the layer thickness. Changing Z atom increases the layer thickness and results in different overlap of these two isosurfaces. This different extent of overlap moves \(\Gamma\) valley on the energy scale[44, 45], which controls the size or nature of the band gap. Surprisingly, the outer layers have no contribution in forming the frontier states, which may be the cause that its band edges are well protected by the local environment.[31] Figure 2: Band structures of MSi\({}_{2}\)Z\({}_{4}\) MLs calculated at the PBE level of theory including spin-orbit coupling for Mo (top panel) and W (bottom panel) based systems. The Fermi level, set in the mid-gap to zero, is marked by a dashed line. The fundamental gap is indicated by a blue arrow. In the upper left most panel, \(\Lambda\) valley is marked. The trend of band gap size, the frontier band dispersion, and the spin-splitting remain unchanged within GW approximation, and thus, WSi\({}_{2}\)N\({}_{4}\) (WSi\({}_{2}\)Sb\({}_{4}\)) has the largest (smallest) QP gap, see Table 1. The only qualitative change occurs in the electronic structure of WSi\({}_{2}\)N\({}_{4}\), where VBM shifts from \(\Gamma\) to K point, hence, turning it into direct gap semiconductor with the band gap of 3.05 eV. A similar indirect-direct transition for ML WSi\({}_{2}\)N\({}_{4}\) was also reported when band structure was corrected with hybrid (HSE06) functionals [36]. However, ML MoS\({}_{2}\)N\({}_{4}\) remains indirect within GW approximation, but the energy difference between valence band maxima at K and \(\Gamma\) points reduces to 139 meV. Since these values are sensitive to the convergence parameters of the GW calculations, we checked them carefully, as reported in the SI. ## 3 Optical absorption spectrum and exciton wave functions To assess the optical performance of MSi\({}_{2}\)Z\({}_{4}\) MLs, the optical absorption spectra were computed by solving the BSE, see results in **Figure 3** and **Table 1**. The N-containing MLs are characterized by the first excitation in the visible range, see **Figure 3**a and **e**, while other ones have their onset in the IR region, see **Figure 3**. The lowest energy excitons in all materials stem from the transition between the VBM and the CBM, see detailed analysis in the SI, **Figure57** and **Figure58**. 
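As a quick numerical cross-check of the values discussed here, the script below verifies that, for the direct-gap monolayers in Table 1, the exciton binding energy equals the difference between the GW gap and the BSE optical gap, and it converts the optical gaps to onset wavelengths to confirm the visible-versus-infrared classification. MoSi\({}_{2}\)N\({}_{4}\) is omitted because its fundamental GW gap is indirect, so the simple difference does not apply; the script and its variable names are illustrative only.

```python
# Cross-check of Table 1: for the direct-gap monolayers the exciton binding
# energy should equal the GW gap minus the BSE optical gap, and the optical
# onset wavelength follows from lambda(nm) = 1239.84 / E(eV).
gw_gap  = {"MoSi2P4": 1.067, "MoSi2As4": 0.881, "MoSi2Sb4": 0.495,
           "WSi2N4": 3.047, "WSi2P4": 0.652, "WSi2As4": 0.467, "WSi2Sb4": 0.178}
opt_gap = {"MoSi2P4": 0.859, "MoSi2As4": 0.693, "MoSi2Sb4": 0.380,
           "WSi2N4": 2.624, "WSi2P4": 0.452, "WSi2As4": 0.291, "WSi2Sb4": 0.019}
be      = {"MoSi2P4": 0.208, "MoSi2As4": 0.188, "MoSi2Sb4": 0.115,
           "WSi2N4": 0.423, "WSi2P4": 0.200, "WSi2As4": 0.176, "WSi2Sb4": 0.159}

for name in gw_gap:
    diff = gw_gap[name] - opt_gap[name]            # binding energy from the gaps
    onset_nm = 1239.84 / opt_gap[name]             # absorption onset wavelength
    region = "visible" if 380 <= onset_nm <= 750 else "infrared"
    print(f"{name}: E_GW - E_opt = {diff:.3f} eV (Table 1: {be[name]:.3f}), "
          f"onset ~ {onset_nm:.0f} nm ({region})")
```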
Above the absorption onset, all materials exhibit large absorption with pronounced maxima at the upper boundary of the visible region (see Figure 3). Interestingly, these features overlap well with the lowest-energy peaks in the spectra of the N-containing MLs, suggesting the possibility of stacking them with other, heavier members of this material family (e.g., containing P or As) to form tandem solar cells. To check the accuracy of our calculations, we compared the calculated energies of the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) excitations with the available experimental data [28]. Hong et al. [28] reported the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) peaks of MoSi\({}_{2}\)N\({}_{4}\) at 2.210 and 2.350 eV, respectively, corresponding to the two direct excitations originating from the VBM splitting of 140 meV. In our calculations, the 1\({}^{\text{st}}\) and 2\({}^{\text{nd}}\) excitation peaks of MoSi\({}_{2}\)N\({}_{4}\) are at 2.470 and 2.618 eV, which are slightly blue-shifted and exhibit a splitting similar to the experimental one. For reference, the VBM splitting of MoSi\({}_{2}\)N\({}_{4}\) at the PBE level is 130 meV. The energies of the first excitation and the binding energies decrease with increasing mass of Z, while the oscillator strength follows the opposite trend. In general, the W-based MLs exhibit excitations with larger intensities than their Mo-based counterparts. The lowest-energy excitons of WSi\({}_{2}\)N\({}_{4}\) and MoSi\({}_{2}\)N\({}_{4}\) (WSi\({}_{2}\)Sb\({}_{4}\) and MoSi\({}_{2}\)Sb\({}_{4}\)) have the largest (smallest) binding energies, equal to 450 and 390 meV (160 and 120 meV), respectively. Regardless of the larger thickness of MA\({}_{2}\)Z\({}_{4}\), their binding energies and band gaps follow the linear scaling law, as previously reported for conventional 2D materials, see **Figure 4a**[46]. It should be noted that the values of the binding energies are highly sensitive to the convergence parameters employed in the GW and BSE calculations. For a thorough discussion in this regard, see **Figure S2-3** and **Figure S5-6** for the convergence of the GW band gap and binding energy. In addition to the usual analysis of the exciton binding energies _versus_ the QP gap, we inspected the trends for E\({}_{\text{b}}\) with respect to the static dielectric screening calculated from the random-phase approximation entering the solution of the BSE.[47, 48] From the results plotted in **Figure 4b**, we notice that, contrary to the intuition based on the number of electrons in the metal atoms, W-containing MLs feature a lower screening than their Mo-based counterparts. We assign this behavior to the structural characteristics of the MSi\({}_{2}\)Z\({}_{4}\). As shown in **Table 1**, the thickness of the MLs ranges from 7.0 to 10.9 A when going towards heavier Z elements, but it does not depend on the M atoms, which are buried in the inner layer. Due to the same thickness of MLs with the same non-metallic element, MoSi\({}_{2}\)Z\({}_{4}\) are, thus, more compact than WSi\({}_{2}\)Z\({}_{4}\) and, as such, feature larger polarizability. Figure 3: Optical absorption spectra of the considered MSi\({}_{2}\)Z\({}_{4}\) MLs. The vertical dashed lines indicate the GW band gaps and the solid blue bars the first excitation. For reference, the visible range between IR and UV is marked in the middle.
Examining now the results shown in **Figure 4**b, we find that the biding energies decrease with increasing size of the Z element, upon which its layer thickness becomes larger and the QP gap smaller. The approximately linear behavior identified with respect to the gap in **Figure 4**a is not reproduced for the static screening. However, the trend shown in **Figure 4**b suggests closer similarities among the same non-metallic compositions (MoSi\({}_{2}\)Z\({}_{4}\) and WSi\({}_{2}\)Z\({}_{4}\)) at varying Z than among materials with the same metal element. Again, this finding is promising in view of constructing heterostructures based on these systems: by choosing equal non-metallic compositions, which also exhibit negligible lattice mismatch as discussed above, it can lead to a bilayer in which approximately the same amount of energy is required to dissociate the excitons in either side of the interface. This prediction holds particularly for P- and As-containing materials. Figure 4: Binding energies of the lowest-energy exciton plotted with respect to a) the QP gap and b) the static screening. The linear fitting is indicated by the red dashed line. Summary and Conclusions In summary, we have investigated with first-principles many-body calculations the quasi-particle electronic structure and the optical properties of MSi\({}_{2}\)Z\({}_{4}\) monolayers with M = Mo, W and Z = N, P, As, Sb. All systems have a fundamental direct band gap with the valence band maximum (VBM) and conduction band minimum (CBM) at the K point ranging from about 3 eV in WSi\({}_{2}\)N\({}_{4}\) down to about 0.15 eV in WSi\({}_{2}\)Sb\({}_{4}\). MoSi\({}_{2}\)N\({}_{4}\) features an indirect QP gap of approximately 2.7 eV with the VBM at \(\Gamma\): it differs by about 200 meV from the optical gap. Upon incrementing the mass of Z, the spin-orbit splitting increases leading to a decrease of the band gap. WSi\({}_{2}\)N\({}_{4}\) has the largest band gap and WSi\({}_{2}\)Sb\({}_{4}\) offers the smallest band gap. Unlike conventional TMDCs, MSi\({}_{2}\)Z\({}_{4}\) do not exhibit the \(\Lambda\) valley in the conduction region, which is also known as channel for nonradiative recombination, quenching photoluminescence efficiency. The considered materials have an intriguing composition-dependent absorption. The spectra of MoSi\({}_{2}\)N\({}_{4}\) and WSi\({}_{2}\)N\({}_{4}\) have their absorption onset in the visible region, while all the other materials absorb IR radiation. All monolayers exhibit intense excitations at the onset indicating they are good light absorbers. The first exciton stems from the transition between the VBM and CBM meaning that electron and hole dissociated from the exciton will be localized. Exciton binding energies range from 0.42 eV in WSi\({}_{2}\)N\({}_{4}\) to 0.12 eV in MoSi\({}_{2}\)Sb\({}_{4}\), much lower than these of corresponding TMDC MLs. In general, N- (Sb-) based monolayers have the largest (smallest) binding energy, which decreases linearly when changing the Z element from N to Sb, following a linear scaling law between the band gaps and binding energies in 2D materials[46]. By contrasting binding energies with the calculated value of the static screening, one does not notice an equally clear trend. Nonetheless, important findings include the larger dielectric screening of the Mo-based MLs compared to the W-based ones, due to the more compact structure of the latter, as well as the larger similarity exhibited by materials differing only by the metal atom. 
The results of this study indicate MSi\({}_{2}\)Z\({}_{4}\) as a family of favorable light absorbers at energies that vary with their composition. The systems with Z = P, As, Sb, absorbing IR radiation, can be favorably combined with other low-dimensional semiconductors, such as conventional TDMCs or even MSi\({}_{2}\)N\({}_{4}\), that absorb in the visible range to form tandem solar cells. The combination of several MSi\({}_{2}\)Z\({}_{4}\) layers, also with different compositions, is particularly attractive due to the lattice parameter independent of the metal atom for a given combination or A and Z atoms. Dedicated studies exploring these heterostructures are needed to assess their behavior, but our results provide a suitable basis to conduct this analysis. ### 4.1 Computational details All calculations presented in this work were performed within the framework of DFT [49] and many-body perturbation theory[47] as implemented in the Vienna ab-initio simulation (VASP) code [50]. The interactions between electrons and nuclei in the Kohn-Sham (KS) equations [51] were treated with the projector augmented wave (PAW) method [52]. In all calculations, the generalized gradient approximation for the exchange correlational potential as proposed by Perdew, Burke, and Ernzerhof (PBE) [53] was employed, along with Grimme's DFT-D3 dispersion correction [54] to account for the contributions of van der Waals (vdW) interactions. Spin-orbit coupling (SOC) was included in all calculations. To minimize spurious interactions between periodic images, a vacuum of ~15 A was inserted along the non-periodic lattice vector. The unit-cell parameters and the atomic positions were optimized using a \(\Gamma\)-centered 12x12x1 **k**-point mesh with a plane wave energy cutoff of 500 eV. The structures were optimized until the residual interatomic forces were less than 10 meV A-1, with an electronic self-consistency convergence threshold of 10\({}^{-8}\) eV. Crystal structures were visualized using VESTA [55]. The optical properties were calculated from the GW approximation and the solution of the Bethe-Salpeter equation (BSE). The quasiparticle eigenvalues were obtained starting from the PBE energies and wave functions by solving the quasiparticle (QP) equation[29]: \[[T+V_{ext}+V_{H}+\sum(E_{m}^{QP})]\psi_{m}^{QP}=E_{m}^{QP}\psi_{m}^{QP}\,, \tag{1}\] where the self-energy operator \(\sum\) is calculated in the single-shot flavor (GoW\({}_{0}\)) of the GW approximation[56]. The other terms in Eq. (1) are the kinetic energy (\(T\)), the external potential accounting for the electron-nuclear attraction (\(V_{ext}\)), and the Hartree potential (\(V_{h}\)); \(E_{m}^{QP}\) and \(\psi_{m}^{QP}\) are the single-particle energies and the wave functions corrected with the self-energy contribution. Optical absorption spectra were obtained by solving the BSE, the equation of motion of the two-particle correlation function [57], on top of the QP electronic structure. In practice, the problem is cast into a Schrodinger-like equation with the form: \[\big{(}E_{ck}^{QP}-E_{vk}^{QP}\big{)}A_{vck}^{S}+\sum_{c^{\prime}v^{\prime}}K_{( e-h)vck,v^{\prime}c^{\prime}k}(\Omega^{s})A_{c^{\prime}v^{\prime}k}^{S}= \Omega^{s}A_{vck}^{s}\, \tag{2}\] where \(A_{vck}^{S}\) are exciton amplitudes, \(K_{(e-h)}\) is the kernel describing the electron-hole (e-h) interactions, and \(\Omega^{s}\) are the excitation energies. The coefficients, \(A_{vck}^{S}\), provide information about the single-particle transitions contributing to the exciton. 
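To make the structure of Eq. (2) concrete, the toy script below diagonalizes a small BSE-like Hamiltonian, i.e., a diagonal matrix of quasi-particle transition energies plus an attractive electron-hole kernel. The transition energies and the constant kernel are invented numbers chosen only to show how an excitation energy \(\Omega\) below the lowest QP transition (a bound exciton) emerges; this is not a calculation for any MSi\({}_{2}\)Z\({}_{4}\) monolayer.

```python
import numpy as np

# Toy BSE-like eigenproblem: H = diag(E_c - E_v) + K_eh (all energies in eV).
# The QP transition energies and the attractive e-h kernel below are invented
# purely for illustration.
qp_transitions = np.array([2.00, 2.10, 2.25, 2.40])   # E_ck - E_vk for a few transitions
coupling = -0.15                                       # schematic kernel strength
K = np.full((4, 4), coupling)                          # constant attractive kernel
H = np.diag(qp_transitions) + K

omega, A = np.linalg.eigh(H)          # excitation energies and exciton amplitudes
gap = qp_transitions.min()            # lowest QP transition (the "electronic gap")
print(f"lowest excitation = {omega[0]:.3f} eV")
print(f"exciton binding   = {gap - omega[0]:.3f} eV")
print("amplitudes |A_vck| =", np.round(np.abs(A[:, 0]), 3))
```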
They can be visualized in the reciprocal space using the so-called exciton weights defined as: [58] \[w_{vk}^{S}=\sum_{c}\big{|}A_{vck}^{S}\big{|}\,\qquad w_{ck}^{S}=\sum_{v}\big{|}A_{ vck}^{S}\big{|}\] where \(S\) is the index of the exciton. In the GW calculations, we chose a total of 240 bands, a cutoff energy of 100 eV, 100 frequency points, and a Gaussian smearing of 50 meV. The **k**-point mesh to sample the Brillouin zone was doubled with respect to the choice adopted for the DFT calculations (24\(\times\)24\(\times\)1), except for WSi\({}_{2}\)Sb\({}_{4}\) where an 18\(\times\)18\(\times\)1 mesh was used. A plane wave energy cutoff of 400 eV was adopted in these calculations. A set of 4 valence bands and 8 conduction bands was used to construct and solve the BSE. ### 5.1 Data Availability The data to reproduce the plots and findings within this paper are available from the corresponding author(s) upon reasonable request. ### 6.1 Conflict Of Interests The authors declare no competing financial or non-financial interests Acknowledgements M.S.R and C.C. thank the funding by the Lower Saxony Ministry of Science and Culture (programs Professorinnen fur Niedersachsen and "Digitalization in the natural sciences", project SMART), by the Quanter ERA II European Union's Horizon 2020 research and innovation programme under the EQUAISE project, Grant Agreement No. 101017733, and by the Federal Ministry for Education and Research (Professorinnenprogramm III). The computational resources were provided by the high-performance computing center of ZIH Dresden and by the by the North German Computer Alliance (project nip00063). M.S.R., T.W., and A.K. thank the Deutsche Forschungsgemeinschaft (project GRK 2247/1 (QM3) and project CRC1415, number 417590517) for financial support and the high-performance computing center of ZIH Dresden for computational resources. A.K. also acknowledges association with priority program (project SPP2244 (2DMP)). T.W. also acknowledges the financial support of National Science Centre, Poland within Project No. 2021/41/N/ST3/04516
The recent synthesis of MoSi2N4 material, along with theoretical predictions encompassing the entire family of chemical analogs, has opened up a new array of low-dimensional materials for a diverse range of optoelectronic and photovoltaic applications. In this study, we conducted state-of-the-art many-body first-principles calculations to analyze the quasi-particle electronic structure of the material class MSi2Z4 (where M = Mo, W and Z = N, P, As, Sb). All monolayers display a direct band gap at the K point, with the exception of MoSi2N4. In tungsten-based compounds, the fundamental gap can be adjusted over a significantly broader energy range compared to their molybdenum-based counterparts. Additionally, with increasing atomic weight of Z, both the band gap and the exciton binding energy decrease. A noteworthy feature is the absence of a lateral valley ({\Lambda} or Q) near the conduction band minimum.
2302.14369
Quantum Programming of the Satisfiability Problem with Rydberg Atom Graphs
Finding a quantum computing method to solve nondeterministic polynomial time (NP)-complete problems is currently of paramount importance in quantum information science. Here an experiment is presented to demonstrate the use of Rydberg atoms to solve (i.e., to program and obtain the solution of) the satisfiability (3-SAT) problem, which is the prototypical NP-complete problem allowing general programming of all NP problems. Boolean expressions of the 3-SAT problem are programmed with the blockade interactions of Rydberg atom graphs and their many-body ground states are experimentally obtained, to determine the satisfiabilities of the given 3-SAT problem instances quantum mechanically.
Seokho Jeong, Minhyuk Kim, Minki Hhan, Jaewook Ahn
2023-02-28T07:49:10
http://arxiv.org/abs/2302.14369v1
# Quantum Programming of the Satisfiability Problem with Rydberg Atom Graphs ###### Abstract Finding a quantum computing method to solve nondeterministic polynomial time (NP)-complete problems is currently of paramount importance in quantum information science. Here an experiment is presented to demonstrate the use of Rydberg atoms to solve (i.e., to program and obtain the solution of) the satisfiability (3-SAT) problem, which is the prototypical NP-complete problem allowing general programming of all NP problems. Boolean expressions of the 3-SAT problem are programmed with the blockade interactions of Rydberg atom graphs and their many-body ground states are experimentally obtained, to determine the satisfiabilities of the given 3-SAT problem instances quantum mechanically. Currently there are considerable efforts being devoted to making a quantum computer [1; 2; 3]. One prominent goal is to engineer a quantum system that can formulate quantum algorithms of classically difficult computational problems [4; 5]. According to the Cook-Levin theorem [6], an efficient algorithm which can solve a problem in the computational complexity class of non-deterministic polynomial (NP)-complete can be used as a subroutine for the efficient algorithm for all other problems in NP. So, if a quantum computer can solve an NP-complete problem efficiently, all other NP problems can also be efficiently solvable by the polynomial time reduction to the NP-complete problem [7; 8]. Boolean satisfiability problem (SAT or B-SAT), and the 3-SAT problem that has clauses of at most three literals, are a prototypical NP-complete problem that belongs to the class of NP-complete, i.e., no classical algorithms can efficiently (i.e., in a polynomial time) solve the 3-SAT problem, unless P=NP [9; 10]. There are limited physical implementations of the 3-SAT problem, which include an algorithmic conversion to a network-based biocomputation format [11], a quantum circuit approach using Grover's quantum search algorithm in conjunction with David-Putnam-Logemann-Loveland algorithm [12], and an IBM-Q operation of the Grover's quantum algorithm for the 3-SAT problem [13]. However, these approaches are nonimmune to errors, so it may be worthy for the current noisy-intermediate scale (NISQ) quantum computers to consider the robustness of quantum adiabatic computing. Of particular relevance in the context of the present paper, the 3-SAT problem is reducible to the maximum independent set (MIS) problem, which is also the NP-complete problem [14; 15], and the MIS problem is physically implementable with Rydberg atom graphs [16; 17]. So, in this paper, we introduce a quantum algorithm to formulate the 3-SAT problem with Rydberg atoms; we formulate a quantum experiment to obtain the MIS solution of Rydberg-atom graphs programmed to algorithmically determine a given 3-SAT problem instance, i.e., to evaluate the satisfiability of the 3-SAT instance experimentally. The 3-SAT problem is to determine whether a given propositional logic formula (Boolean expression), \(\Psi(x_{1},x_{2},\cdots)\), of Boolean variables, \(x_{1},x_{2},\cdots\), is _satisfiable_ (i.e., there exists a set of Boolean values for the variables satisfying the formula) or _unsatisfiable_. 
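As a concrete (classical) reference point for the problem statement above, the sketch below checks the satisfiability of a small 3-SAT instance by exhaustive enumeration; the clause encoding is a common DIMACS-style convention, and the instance corresponds to the formula \(\Psi_{1}\) introduced in the next paragraph. The exponential cost of this enumeration, up to \(2^{n}\) assignments, is exactly what motivates the quantum approach.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force satisfiability check. Clauses are lists of nonzero integers:
    k stands for the variable x_k and -k for its negation (DIMACS-style)."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True, bits
    return False, None

# Instance corresponding to Psi_1 below: (x1 v x2 v x3) ^ (~x1 v x4) ^ (x1 v x5 v x6)
clauses = [[1, 2, 3], [-1, 4], [1, 5, 6]]
sat, assignment = satisfiable(clauses, n_vars=6)
print(sat, assignment)   # True and one satisfying assignment of (x1, ..., x6)
```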
The 3-SAT formula is given in the conjunctive normal form [18], i.e, a conjunction of \(N_{C}\) clauses, \(\Psi(x_{1},x_{2},\cdots,x_{n})=\bigwedge_{j=1}^{N_{C}}C_{j}\), where each clause, \(C_{j}=\ell_{j,1}\vee\ell_{j,2}\) or \(\ell_{j,1}\vee\ell_{j,2}\vee\ell_{j,3}\), is a disjunction of at most three literals, \(\ell_{j,1},\ell_{j,2},\ell_{j,3}\in\{x_{k},\bar{x}_{k}|k=1,\cdots,n\}\)[19; 20]. The given 3-SAT problem can be reduced to the MIS problem for an MIS graph \(G(V,E)\), given by \[V =\{(j,k)|\ell_{j,k}\in C_{j}\}\] \[E =E_{1}\cup E_{2}\] \[E_{1} =\{[(j,k_{1}),(j,k_{2})]|k_{1}\neq k_{2}\}\] \[E_{2} =\{[(j_{1},k_{1}),(j_{2},k_{2})]|k_{1}\ \neq k_{2},\ell_{j_{1},k_{1}}=\bar{\ell}_{k_{2},k_{2}}\}\] where \(V\) is the set of vertices of which an element \((j,k)\) corresponds to the literal \(\ell_{j,k}\) in the \(j\)-th clause \(C_{j}\); \(E\) is the set of all edges, the union of two edge sets \(E_{1}\) and \(E_{2}\); \(E_{1}\) is the set of all intra-clause edges connecting two vertices corresponding to the literals \(\ell_{j,k_{1}}\) and \(\ell_{j,k_{2}}\) in the same clause \(C_{j}\); and \(E_{2}\) is the set of all inter-clause edges which connect two vertices in different clauses, whose corresponding literals are negation to each other [14; 15]. Figure 1 shows examples of MIS graphs obtained with the above reduction algorithm. The first graph, \(G_{1}\) in Fig. 1(a) is for a 3-SAT instance, given by \[\Psi_{1}(x_{1},x_{2},x_{3},x_{4},x_{5},\pi_{6}) =C_{0}\wedge C_{1}\wedge C_{2} \tag{1a}\] \[C_{0} =x_{1}\lor x_{2}\lor x_{3}\] (1b) \[C_{1} =\bar{x}_{1}\lor x_{4}\] (1c) \[C_{2} =x_{1}\lor x_{5}\lor x_{6} \tag{1d}\] where the clauses having two or three literals are respectively mapped to the two- or three-vertex subgraphs (of solid edges) and the literal-negation pairs are mapped to the inter-clause edges (of dashed lines). Similarly, we define two more MIS graphs \(G_{2}\) and \(G_{3}\) for
Finding a quantum computing method to solve nondeterministic polynomial time (NP)-complete problems is currently a task of paramount importance in quantum information science. Here, an experiment is presented that demonstrates the use of Rydberg atoms to solve (i.e., to program and obtain the solution of) the satisfiability (3-SAT) problem, the prototypical NP-complete problem that allows general programming of all NP problems. The Boolean expressions of the 3-SAT problem are programmed with the blockade interactions of Rydberg atom graphs, and their many-body ground states are obtained experimentally to determine the satisfiability of the given 3-SAT problem instances quantum mechanically.
2308.16525
Method for calculation of the beta exponent from the Heitler-Matthews model of hadronic air showers
The number of muons in an air shower is a strong indicator of the mass of the primary particle and increases with a small power of the cosmic ray mass by the $\beta$-exponent, $N_{\mu} \sim A^{(1-\beta)}$. This behaviour can be explained in terms of the Heitler-Matthews model of hadronic air showers. In this paper, we present a method for calculating $\beta$ from the Heitler-Matthews model. The method has been successfully verified with a series of simulated events observed by the Pierre Auger Observatory at $10^{19}$ eV. To follow real measurements of the mass composition at this energy, the generated sample consists of a certain fraction of events produced with p, He, N and Fe primary energies. Since hadronic interactions at the highest energies can differ from those observed at energies reached by terrestrial accelerators, we generate a mock data set with $\beta =0.92$ (the canonical value) and $\beta =0.96$ (a more exotic scenario). The method can be applied to measured events to determine the muon signal for each primary particle as well as the muon scaling factor and the $\beta$-exponent. Determining the $\beta$-exponent can effectively constrain the parameters that govern hadronic interactions and help solve the so-called muon problem, where hadronic interaction models predict too few muons relative to observed events. In this paper, we lay the foundation for the future analysis of measured data from the Pierre Auger Observatory with a simulation study.
Kevin Almeida Cheminant, Dariusz Gora, Nataliia Borodai, Ralph Engel, Tanguy Pierog, Jan Pekala, Markus Roth, Jarosław Stasielak, Michael Unger, Darko Veberic, Henryk Wilczynski
2023-08-31T08:16:19
http://arxiv.org/abs/2308.16525v1
# Method for calculation of the beta exponent from the Heitler-Matthews model of hadronic air showers ###### Abstract: The number of muons in an air shower is a strong indicator of the mass of the primary particle and increases with a small power of the cosmic ray mass by the \(\beta\)-exponent, \(N_{\mu}\sim A^{(1-\beta)}\). This behaviour can be explained in terms of the Heitler-Matthews model of hadronic air showers. In this paper, we present a method for calculating \(\beta\) from the Heitler-Matthews model. The method has been successfully verified with a series of simulated events observed by the Pierre Auger Observatory at \(10^{19}\) eV. To follow real measurements of the mass composition at this energy, the generated sample consists of a certain fraction of events produced with p, He, N and Fe primary energies. Since hadronic interactions at the highest energies can differ from those observed at energies reached by terrestrial accelerators, we generate a mock data set with \(\beta=0.92\) (the canonical value) and \(\beta=0.96\) (a more exotic scenario). The method can be applied to measured events to determine the muon signal for each primary particle as well as the muon scaling factor and the \(\beta\)-exponent. Determining the \(\beta\)-exponent can effectively constrain the parameters that govern hadronic interactions and help solve the so-called muon problem, where hadronic interaction models predict too few muons relative to observed events. In this paper, we lay the foundation for the future analysis of measured data from the Pierre Auger Observatory with a simulation study. Introduction Simulations of extensive air showers using current hadronic interaction models predict too small numbers of muons compared to events observed in the air-shower experiments, which is known as the muon-deficit problem. To study the muon-deficit we use the top-down (TD) method [1, 2, 3, 4] - This chain of simulations and reconstructions enable us to calculate signals in the fluorescence (FD) and surface detectors (SD) of cosmic ray experiments like for example the Pierre Auger Observatory or Telescope Array. For each observed hybrid shower 1, starting with a large number of simulated air showers with varying initial conditions, we select the one which has a longitudinal profile most similar to the profile of the observed shower (the reference profile). As a result of the simulation-reconstruction chain we get an event, with complete information about the distributions of the signals in the detectors (including information on the specific components that contribute to these signals) - these signals can then be compared with their reference counterparts. Since the results of the simulations depend on the properties of the hadronic interaction models that are included in the simulation software, by comparing the simulations with corresponding observational results we should be able to verify these models at energies exceeding those available in any man-made accelerators. We expect that we will gain new information, which will enable improvement of the interaction models, and in this way we are also able to reduce the discrepancy between the observations and simulations [1, 4]. Footnote 1: Hybrid event is seen simultaneously by the SD and FD detector. In this note the method is proposed to calculate the \(\beta\)-exponent from the Heitler-Matthews model [5] by including also the muon deficit-problem. 
The idea of the method is to find the set of muon rescaling parameters \(\epsilon_{k}\) for different primaries \(k\), which is a function of only two parameters: \(\epsilon_{\rm p}\) and \(\Delta\beta\). These two parameters indicate how much we need to scale the proton signal (\(\epsilon_{\rm p}\) term) and by how much to modify the \(\beta\)-exponent (\(\Delta\beta\)) in the Heitler-Matthews formula in order to match the observed numbers of muons in data and in simulations. The method requires that the first two moments of the individual so-called \(z_{k}\)-distributions (our model) and overall \(z\)-distribution (the measured observable) are matched. In addition we require that the \(\epsilon_{k}\) parameters should follow the Heitler-Mathews progression. The \(z_{k}\)-distribution is essentially the difference between the total signal at 1000 m of a real hybrid event and of the total signal at 1000 m of the Monte Carlo (MC) dataset. In other words, the method tells us by how much each individual \(z_{k}\)-distribution must be shifted, rescaled and then, weighted and summed, in order to retrieve the \(z\)-distribution. In TD-analysis we have the _input dataset_, which are real or mock hybrid events, and the _matched dataset_, which is produced via Convex/Corsika Monte-Carlo simulations [9]. The input dataset contains \(N\) events and the events will be indexed as \(n=1,\ldots,N\). The multiple profile-matched MC events simulated with primary \(k\) corresponding to an input event \(n\) are indexed with \(i=1,\ldots,M\) and are thus denoted with the triplet subscript \(nki\)2. Footnote 2: All \(S^{n}\) symbols will be referring to the signal at 1000 m from the shower core so that the 1000 subscript can be dropped entirely. The signals at 1000 m for the input dataset will have no decorations, i.e. just \(S\), and the signals from the matched dataset will be denoted with \(\tilde{S}\). ## 2 Two-parameter nonlinear scaling model Observations of air showers, and also simulations, demonstrated that the number of muons \(N_{\rm\mu}\) grows almost linearly with the shower energy \(E\), and it also increases with a small power of the primary mass \(A_{k}\). These relations can be reproduced in the framework of the Heitler-Matthews model of hadronic air showers [5]. This model predicts \[N_{\mu}^{A_{k}}=A_{k}\,\,\left(\frac{E/A_{k}}{\epsilon_{\rm c}^{\pi}}\right)^{ \beta}\,, \tag{1}\] where \(\beta\approx 0.9\). More precisely, MC simulations yield \(\beta^{\rm mc}=0.927\pm 0.002\) for Epos-LHC and \(\beta^{\rm mc}=0.925\pm 0.002\) for QGSJerII-04 [10]. For any fixed energy Eq. (1) describes how the muon number depends on the primary mass: \(N_{\mu}^{A_{k}}=N_{\mu}^{\rm p}\,A_{k}^{1-\beta}\).3 Simulations have shown that muon number depends on various properties of hadronic interactions (e.g. multiplicity, charge ratio, baryon anti-baryon pair production) [12]. Therefore, estimating the \(\beta\)-exponent from data would be helpful in constraining the parameters of hadronic interactions and improving the accuracy of models. On the other hand results obtained from the Pierre Auger Observatory and other leading cosmic ray experiments indicate that simulations using LHC-tuned hadronic interaction models underestimate the number of muons in extensive air showers compared to experimental data. To account for this effect, we can formulate a scaling ansatz in Eq. 
(1) by: Footnote 3: the \(N_{\mu}^{\rm p}\) is the number of muons in proton shower; \(\epsilon_{\rm c}^{\pi}\) is the critical energy at which pions decay into muons \[N_{\mu}^{A_{k}}=\bar{N}_{\mu}^{A_{k}}\,A_{k}^{1-\beta}\,e^{\rm\,e_{\rm p}}\,A_ {k}^{-\Delta\beta}. \tag{2}\] where the scaling factor can be defined as: \(r_{\mu,k}:=1+\varepsilon_{k}:=e^{\varepsilon_{\rm p}}\,A_{k}^{-\Delta\beta}= \exp(\varepsilon_{\rm p}-\Delta\beta\ln\,A_{k})\). Thus, having MC values of the \(\beta^{\rm mc}\) for the hadron interaction model and the value of the parameter \(\Delta\beta\), we can calculate the \(\beta\) exponent from \(\beta=\beta^{\rm mc}+\Delta\beta\). In the context of this work, this then corresponds to saying that the number of muons \(N_{\mu}^{A_{k}}\) in the input dataset is proportional to the muon number \(\bar{N}_{\mu}^{A_{k}}\) in the matched dataset, with the usual Matthews-Heitler progression with mass \(A_{k}\), but with a slight scaling \(1+\varepsilon_{\rm p}\) and modification \(\Delta\beta\). In this work, the input dataset is constructed from Epos-LHC simulations (mock dataset) and is built by taking MC simulations from the TD simulation chain obtained with Epos-LHC around \(10^{18}\) eV. The matched dataset is a dataset from QGSJerII-04 simulations. Details regarding these two datasets can be also found in Ref. [4]. Since these simulations were performed for p, He, N, and Fe primaries for both Epos-LHC and QGSJetII-04, we can plot the evolution of the average muon signal as a function of the primary mass for both hadronic models, as shown in Fig. 1. Since QGSJetII-04 has, on average, fewer muons than Epos-LHC, one can imagine that the muon problem can be recreated by comparing the two hadronic models. Therefore, we can try in this work and figure what is the best way to rescale QGSJerII-04 in order to match the muon signal of the mock dataset built with Epos-LHC. From Fig. 1 we can also see, that the average muon signal increases as a function of the primary mass. As expected, both considered hadronic models display a similar ratio with the average about \(r_{\rm true}^{\rm mc}=\bar{S}_{\rm epos}^{\mu}/\bar{S}_{\rm qgsjet}^{\mu}=1.10 \pm 0.04\); upon closer examination we also see that larger rescaling is needed for protons (\(1.12\pm 0.03\)) than for iron (\(1.08\pm 0.03\)). In Fig. 1 we show also the linear fit to the MC muon signal from Epos-LHC and QGSJerII-04, motivated by the Heitler-Matthews model. The calculated value of \(\beta^{\rm mc}\) from the fit is about 0.92, so it is pretty close to the values from Ref. [10]. This cross-check of \(\beta\)-calculation is a validation of our TD simulations. ## 3 Fitting the \(z\)-histogram The mean signal \(\langle S\rangle\) of the input dataset is the sum of the mean electromagnetic (em) and muonic components \[\langle S\rangle=\sum_{k}f_{k}\ \langle S\rangle_{k}=\sum_{k}f_{k}\ (\langle S ^{\rm em}\rangle_{k}+\langle S^{\rm H}\rangle_{k})=\langle S^{\rm em} \rangle+\langle S^{\rm H}\rangle, \tag{3}\] where \(\langle\cdot\rangle_{k}\) denotes a mean within a given primary class \(k\). Note that for the input dataset the averages for given \(k\) are not really observable, but it is clear that a sum over the composition fractions \(f_{k}\) gives then the average in the whole input dataset, a quantity which is fully available. 
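A minimal numerical sketch of Eqs. (1) and (2) is given below: it recovers \(\beta\) from a straight-line fit of \(\ln\langle S^{\mu}\rangle\) versus \(\ln A\), using for illustration the QGSJetII-04 mean muon signals quoted later in Table 1, and then evaluates the two-parameter scaling factors \(r_{\mu,k}\). The \(\varepsilon_{\rm p}\) and \(\Delta\beta\) values in the second part are placeholders, not fit results.

```python
import numpy as np

# Mean muon signals at 1000 m for p, He, N, Fe (QGSJetII-04 values quoted in
# Table 1 below), fitted with ln<S_mu> = const + (1 - beta) * ln A, cf. Eq. (1).
A    = np.array([1.0, 4.0, 14.0, 56.0])          # primary mass numbers
S_mu = np.array([15.57, 17.25, 19.37, 21.61])    # VEM

slope, intercept = np.polyfit(np.log(A), np.log(S_mu), 1)
beta_mc = 1.0 - slope
print(f"beta_mc = {beta_mc:.3f}")                # ~0.92, as quoted in the text

# Two-parameter scaling of Eq. (2): r_mu,k = exp(eps_p - dbeta * ln A_k).
eps_p, dbeta = 0.12, 0.01                        # illustrative values only
r_mu = np.exp(eps_p - dbeta * np.log(A))
print(dict(zip(["p", "He", "N", "Fe"], np.round(r_mu, 3))))
```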
Equivalently, for the mean signal \(\langle\bar{S}\rangle\) in the matched dataset, where the quantities are known for various primary groups \(k\), we can explicitly write \[\langle\bar{S}\rangle=\sum_{k}f_{k}\ \langle\bar{S}\rangle_{k}=\sum_{k}f_{k} \ \bigl{(}\langle\bar{S}^{\rm em}\rangle_{k}+\langle\bar{S}^{\rm H}\rangle_{k} \bigr{)}=\langle\bar{S}^{\rm em}\rangle+\langle\bar{S}^{\rm H}\rangle, \tag{4}\] where \(\langle\bar{S}\rangle_{k}:=(\sum_{n}^{N}\sum_{i}^{M_{nk}}\bar{S}_{nki})/\sum_{ n}^{N}M_{nk}\) is the signal \(\bar{S}_{nki}\) of the matched dataset averaged over all \(n\) and \(i\) for a given \(k\). Figure 2: The \(z_{k}\)-distributions for stations at 1000 m from the shower core, from TD simulations at energy \(10^{19}\) eV for proton (left) and iron (right) induced air showers simulated with Epos-LHC and QGSJerII-04 for the so-called mock dataset, see Ref. [4] for more details. Since we assume a perfect matching of the longitudinal profile and thus the EM component of the signal, all the \(\bar{S}^{\rm em}_{nki}\) are very close or identical to the corresponding input events with signals \(S^{\rm em}_{n}\). The mean difference \(\Delta S\) of the signals in the two datasets thus only depends on the muonic part \[\Delta S:=\langle S\rangle-\langle\bar{S}\rangle=\sum_{k}f_{k}\left(\langle S \rangle_{k}-\langle\bar{S}\rangle_{k}\right)=\sum_{k}f_{k}\left(\langle S^{ \rm H}\rangle_{k}-\langle\bar{S}^{\rm H}\rangle_{k}\right)=\langle S^{\rm H} \rangle-\langle\bar{S}^{\rm H}\rangle=\Delta S^{\rm H}. \tag{5}\] The mean muonic signals \(\langle S^{\rm H}\rangle_{k}\) of the primary \(k\) in the input data can be obtained by rescaling the muonic signals \(\langle\bar{S}^{\rm H}\rangle_{k}\) in the matched dataset with corresponding scaling factors \(1+\varepsilon_{k}\), \[\langle S^{\rm H}\rangle_{k}=\left(1+\varepsilon_{k}\right)\langle\bar{S}^{ \rm H}\rangle_{k}. \tag{6}\] With this scaling we can simplify the difference \(\Delta S\) from Eq. (5) into \[\Delta S^{\rm H}=\sum_{k}f_{k}\,\varepsilon_{k}\,\langle\bar{S}^{\rm H} \rangle_{k}. \tag{7}\] On the other hand, as it is clear from Eq. (5), \(\Delta S\equiv\Delta S_{\rm H}\). The third term of Eq. (5) can be rewritten as \[\sum_{k}f_{k}\,\left(\langle S\rangle_{k}-\langle\bar{S}\rangle_{k}\right)= \langle S\rangle-\sum_{k}f_{k}\,\langle\bar{S}\rangle_{k}, \tag{8}\] so that we can define for each event \(n\) and match \(i\) an observable \[z_{ni}=S_{n}-\sum_{k}f_{k}\,\bar{S}_{nki}. \tag{9}\] Equivalently, based on Eq. (7) we can define a scaling-dependent quantity \[\bar{z}_{ni}=\sum_{k}f_{k}\,\varepsilon_{k}\,\bar{S}^{\rm H}_{nki}=\sum_{k}f_ {k}\,\varepsilon_{k}\,g_{k}(\theta_{n})\,\bar{S}_{nki}, \tag{10}\] where \(\bar{S}^{\rm H}_{nki}\) is obtained either directly from the MC events or, like here, by using a factor \(g\) from Universality, \(\bar{S}^{\rm H}_{nki}=g_{k}(\theta_{n})\,\bar{S}_{nki}\).The average muon signal as a fraction of the total signal at the ground, \(g_{k}(\theta_{n})\) has been calculated in our previous analyses, see for example [4]4. Footnote 4: It is worth mentioning that this fraction depends on the shower zenith angle and the type of the primary cosmic ray, and only slightly on different hadronic interaction models [11]. For each event \(n\) and \(i\) we can also define a variable \(z_{nki}=S_{nki}-\bar{S}_{nki}\), which is a simple difference between the total signal for data and MC, for given primary \(k\). In Fig. 
2 we show corresponding distributions of this variable for the considered primaries obtained from TD simulations with Epos-LHC and QGSJetII-04 (for simplicity we use notation \(z_{k}\) for each individual histogram). As we can see from Fig. 2 for the considered number of events, the corresponding \(z_{k}\)-distribution can be quite well described by a Gaussian function, the fit to histograms gives \(\chi^{2}/{\rm ndf}\approx 1.5\). From the fit for individual distributions we can get the mean value of signal difference \(\langle z_{k}\rangle\) and the corresponding standard deviation \(\sigma(z_{k})\). These variables can be used to define the probability density function (PDFs) for each primary \(k\), which is given by \[P_{k}(z_{k},\sigma(z_{k}))=\frac{1}{\sqrt{2\pi}\sigma(z_{k})}\exp\left[-\frac {(z_{nki}-\langle z_{k}\rangle)^{2}}{2\sigma^{2}(z_{k})}\right], \tag{11}\] where again index \(k\) spans over different primaries. Note, that according to Eq. (10) the mean position of \(z_{k}\)-distribution should be connected with an average ground muon signal expected for given primary. However, such conversion is possible, if we already know proportionality constants i.e. scaling factors \(\varepsilon_{k}\). In other words, if we plot rescaled distribution shown in Fig. 2 in \(\langle S^{\mu}\rangle\) phase-space, the means of such distributions should give us average muon signals on the ground for considered masses. Moreover, we should expect from physics of extensive air showers that position of mean for lighter element should be smaller that for heavier element i.e. \(\langle S^{\mu}\rangle_{\rm p}<\langle S^{\mu}\rangle_{\rm He}<\langle S^{\mu} \rangle_{\rm N}<\langle S^{\mu}\rangle_{\rm Fe}\). Based on the Heitler-Matthews model it is also expected, that logarithm of the muon signal should increase linearly with logarithm of the primary mass, therefore corresponding linearity conditions were introduced by using two-parameter scaling model \(\varepsilon_{k}\). In order to find \(\varepsilon_{k}\) and thus to convert the mean of \(z_{k}\)-distribution to \(S^{\mu}\) phase-space, we can use the Minuit minimization, where the fitted function is a combination of four Gaussian PDFs, which have the form \[F(\vec{\varepsilon},A_{\rm mpl})=A_{\rm mpl}\sum_{k}f_{k}\,\frac{1}{\sqrt{2\pi }\sigma(z_{k})}\exp\left[-\frac{(z_{nik}-\varepsilon_{k}\langle\bar{S}^{\mu} \rangle_{k})^{2}}{2\sigma^{2}(z_{k})}\right]\,, \tag{12}\] where \(\varepsilon_{k}=e^{\varepsilon_{p}-\Delta\beta\ln A_{k}}-1\) and \(\rm const=A_{\rm mpl}\). The \(f_{k}\) is fraction of \(N=68\) pure mass samples and \(const\) gives possibility to rescale the normalized individual \(z_{k}\)-distribution to overall \(z_{ni}\)-histogram. In this way from the Gaussian fit given by Eq. (12) to overall \(z_{ni}\)-histogram, correction factors \(\varepsilon_{k}\) and \(\Delta\beta\) for hadronic models can be calculated. In other words, Eq. (12) tells us by how much each \(z_{k}\)-distribution must be shifted, rescaled, and then weighted and summed, in order to retrieve the \(z_{ni}\)-distribution and also its first and second moments, see also Fig. 3. Figure 3: (Left): The \(z_{ni}\)-distribution as described by Eq. (9) with \(f_{\rm p}=0.15\), \(f_{\rm He}=0.38\), \(f_{\rm N}=0.46\), and \(f_{\rm Fe}=0.01\) for mock dataset. Since we have 68 mock events (Eros-LHC) and 10 QGSJerII-04 events associated to each of the mock events we have 680 events contained in this histogram. 
Figure 3: (Left): The \(z_{ni}\)-distribution as described by Eq. (9) with \(f_{\rm p}=0.15\), \(f_{\rm He}=0.38\), \(f_{\rm N}=0.46\), and \(f_{\rm Fe}=0.01\) for the mock dataset. Since we have 68 mock events (Epos-LHC) and 10 QGSJetII-04 events associated to each of the mock events, 680 events are contained in this histogram. The distribution is fitted (red line) with a Gaussian function in order to get its mean \(\langle z_{ni}\rangle=2.825\pm 0.16\) and the standard deviation \(\sigma(z_{ni})=3.80\pm 0.14\). (Right): Sketch showing the idea of the method, i.e. each \(z_{k}\)-distribution must be shifted, rescaled, and then weighted and summed in order to retrieve the \(z_{ni}\)-histogram.

## 4 Results of the fit of four individual \(z_{k}\)-distributions to the \(z_{ni}\)-histogram

The results are shown in Fig. 4 and Table 1. We see that the fit can reproduce the ratio of the muon signals of simulations using Epos-LHC (mock data) and QGSJetII-04 within \(\sim\)5%: as already mentioned, the ratio for MC-true is \(r_{\rm true}^{\rm MC}=1.10\pm 0.04\) and the average reconstructed ratio (from Table 1) is \(1.15\pm 0.06\). The difference is caused by the fact that the signal for the mock dataset is not exactly equal to the one for Epos-LHC (Table 1). We also recover the \(\beta\) parameter (average \(\simeq\)0.92) for the studied set, because the parameter \(\Delta\beta\) is zero within its error, i.e. \(\Delta\beta=0.003\pm 0.035\). Finally, we can check our solution by comparing the mean given by Eq. (10) with the one from a Gaussian fit to the \(z\)-histogram shown in Fig. 3. We get \(2.74\pm 0.49\,\)VEM vs. \(\langle z_{ni}\rangle=2.83\pm 0.16\,\)VEM, which agree very well within the uncertainties. The standard deviations match by definition, because \(\sigma^{2}(z_{ni})=\int\,\sum_{k}\,f_{k}z_{nki}^{2}\,P_{k}(z_{k},\sigma(z_{k}))\,{\rm d}z_{nki}=\sum_{k}\,f_{k}\,\sigma^{2}(z_{k})\). Since the true \(\beta\) value of a hybrid dataset may differ from that of the hadronic interaction models used in this analysis, it is interesting to repeat the same analysis for a sample built from the Epos-LHC events but with a modified evolution of the mean muon signal. For this purpose we constructed a mock dataset with an evolution of the mean muon signal \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \(k\) & \(r_{\rm\mu,k}\) & \(\langle\bar{S}_{k}^{\rm\mu}\rangle/\rm VEM\) & \(\langle S_{\rm\mu,k}^{\rm rec}\rangle/\rm VEM\) & \(\delta\) & \(r_{\rm\mu,k}\) & \(\langle S_{\rm\mu,k}^{\rm rec}\rangle/\rm VEM\) & \(\delta\) \\ \hline p & \(1.16\pm 0.06\) & \(15.57\pm 0.17\) & \(18.03\pm 0.18\) & \(4.2\%\) & \(1.13\pm 0.04\) & \(17.52\pm 0.17\) & \(1.0\%\) \\ He & \(1.15\pm 0.06\) & \(17.25\pm 0.19\) & \(19.90\pm 0.20\) & \(4.3\%\) & \(1.07\pm 0.07\) & \(18.11\pm 0.30\) & \(1.4\%\) \\ N & \(1.15\pm 0.06\) & \(19.37\pm 0.20\) & \(22.26\pm 0.21\) & \(5.3\%\) & \(1.01\pm 0.09\) & \(19.24\pm 0.38\) & \(1.9\%\) \\ Fe & \(1.14\pm 0.06\) & \(21.61\pm 0.23\) & \(24.73\pm 0.24\) & \(5.6\%\) & \(0.96\pm 0.10\) & \(20.23\pm 0.25\) & \(2.4\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values of the muon rescaling factors obtained with the fitting procedure, of the MC muon signal, and of the reconstructed muon signals, for all primaries considered and with \(f_{\rm p}=0.15\), \(f_{\rm He}=0.38\), \(f_{\rm N}=0.46\), and \(f_{\rm Fe}=0.01\). The overestimation \(\delta=(\langle S_{\rm\mu,i}^{\rm rec}\rangle-\langle S_{\rm\mu,i}^{\rm mock}\rangle)/\langle S_{\rm\mu,i}^{\rm mock}\rangle\) of the reconstructed muon signal compared to the one from the mock dataset is also provided. The errors shown in the fourth column are the square root of the sum of the squares of the errors \(\delta r_{\rm\mu,k}\) and \(\delta\langle\bar{S}_{\rm\mu,k}^{\rm rec}\rangle\), i.e.
those listed in the second and third columns, respectively. The last three columns show the results for the mock dataset with \(\beta=0.96\). as a function of primary mass, leading to a significantly different exponent value \(\beta\simeq 0.96\). This allows us to investigate whether the fitting procedure is able to recover this value as well. The average muon signal of the new sample set as a function of primary mass is shown in Table 1. Two features of this mock dataset can be noticed: for nitrogen, a slight rescaling from QGSJetII-04 to the mock dataset is needed (\(r_{\mu,\mathrm{N}}=1.01\)), and for iron the rescaling ratio is lower than 1 for the mock dataset (\(r_{\mu,\mathrm{Fe}}=0.96\)). The results of the fit are shown in Fig. 4 (right) and in Table 1. We can see that the negative scaling of the primary iron is slightly underestimated, while the signal is well recovered for all other elements. The muon signal from the mocked-up dataset is recovered within 2.4%. Moreover, the fit of the reconstructed muon signal gives a value of \(\Delta\beta=0.04\), which agrees quite well with the expectation \(\beta=0.955\pm 0.005\), although the error on \(\Delta\beta\) is rather large, about 0.02.

## 5 Summary and Conclusion

The method presented in this work recovers the mean muon signal and provides the ability to calculate muon signals for each element in the considered sample of real-like events. In this work we have performed calculations of the muon scaling factors and of the \(\beta\) exponent by fitting a four-element Gaussian distribution to the overall \(z\)-histogram, with a two-parameter scaling model that follows the Heitler-Matthews progression. This work shows that the \(z\)-method can be applied to hybrid events to determine the muon signal, the scaling factor (total and for each element), and the \(\beta\) exponent. **Acknowledgments:** The authors are very grateful to the Pierre Auger Collaboration for providing the tools necessary for the simulations for this contribution. The authors would like to thank the colleagues from the Pierre Auger Collaboration for all the fruitful discussions 5. Footnote 5: We want to acknowledge the support in Poland from the National Science Centre grant No. 2016/23/B/ST9/01635, grant No. 2020/39/B/ST9/01398, grant No. 2022/45/B/ST9/02163 and from the Ministry of Education and Science grant No. DIR/WK/2018/11 and grant No. 2022/WK/12.
A larger number of muons indicates a higher mass of the particles initiating the air shower. This can be explained by the increase of the muon number with a small power of the cosmic-ray mass, governed by the \(\beta\)-exponent, \(N_{\mu} \sim A^{(1-\beta)}\), a relation that follows from the Heitler-Matthews model. In this paper we propose a method to compute \(\beta\) based on the Heitler-Matthews model. The method is verified with a series of simulated events observed by the Pierre Auger Observatory. To follow the mass composition measured at an energy of \(10^{19}\) eV, the generated sample contains fractions of events with p, He, N and Fe primary masses. Since hadronic interactions at the highest energies may differ from those at energies reached by terrestrial accelerators,
2305.19832
Problems of search and pursuit of unmanned aerial vehicles using the game-theoretic approach
Unmanned aerial vehicles (UAVs) have become increasingly prevalent in various domains, ranging from military operations to civilian applications. However, the proliferation of UAVs has also given rise to concerns regarding their potential misuse and security threats. As a result, the search and pursuit of UAVs have become crucial tasks for law enforcement agencies and security organizations. In this paper, we use a game theoretic approach to explore the problem of searching for and pursuing submarines and translate the problem into a UAV search and pursuit problem. Game theory provides a mathematical framework for modeling and analyzing strategic interactions among multiple decision makers. By applying game theoretic principles to the search and pursuit problem, we aim to improve the effectiveness of UAV detection and capture strategies. We begin by formulating the problem as a game, where the UAV represents the evader, and the search and pursuit team represents the pursuers. Each player's objective is to optimize their own utility while considering the actions and strategies of the other players. By leveraging game theory, we can gain insights into the optimal decision-making strategies for both the UAV and the pursuers, leading to improved search and pursuit outcomes and enhanced security in the face of UAV threats.
Oleg Malafeyev, Kun Zhang
2023-05-31T13:18:59
http://arxiv.org/abs/2305.19832v1
# Problems of search and pursuit of unmanned aerial vehicles using the game-theoretic approach ###### Abstract Unmanned aerial vehicles (UAVs) have become increasingly prevalent in various domains, ranging from military operations to civilian applications. However, the proliferation of UAVs has also given rise to concerns regarding their potential misuse and security threats. As a result, the search and pursuit of UAVs have become crucial tasks for law enforcement agencies and security organizations. In this paper, we use a game theoretic approach to explore the problem of searching for and pursuing submarines and translate the problem into a UAV search and pursuit problem. Game theory provides a mathematical framework for modeling and analyzing strategic interactions among multiple decision makers. By applying game theoretic principles to the search and pursuit problem, we aim to improve the effectiveness of UAV detection and capture strategies. We begin by formulating the problem as a game, where the UAV represents the evader, and the search and pursuit team represents the pursuers. Each player's objective is to optimize their own utility while considering the actions and strategies of the other players. By leveraging game theory, we can gain insights into the optimal decision-making strategies for both the UAV and the pursuers, leading to improved search and pursuit outcomes and enhanced security in the face of UAV threats. Dynamic Models of Inspections Currently, the method of mathematical modeling has gained wide popularity, with the benefit of constructing and studying mathematical models for the purpose of analyzing and forecasting various processes in natural, technical, economic, and other sciences [1-7]. The application of mathematical theory can be used to prevent illegal actions. Terrorists and members of drug cartels use modern means of communication and transportation in their activities. There is a need to conduct inspection measures to prevent the spread of their actions. To organize successful countermeasures, it is also necessary to use modern technical means and an apparatus for optimizing the use of resources, and to design dynamic models of inspections. Let us consider the following situation. An interceptor ship equipped with sonar detected the periscope of a submarine, which immediately disappeared in an unknown direction. It is necessary to intercept the submarine in the shortest possible time. Let us assume that the interceptor ship does not know the exact speed of the submarine. However, a discrete set of speeds is known, one of which is the actual speed of the submarine. Next, we will refer to the interceptor ship as P and the submarine as E, respectively. First, let us present an algorithm for finding the search time under conditions where the speed of the escaping submarine is unknown to the interceptor. Suppose that the speed of the interceptor is much greater than the speed of the escaping submarine. At the initial moment of time of detection, the ship accurately determines the location of the submarine. Thus, the distance between him and the escaping submarine, denoted by \(\textbf{{D}}_{0}\), is known. To find the interception time, it is necessary to determine the trajectory along which the interceptor ship should move. 
We introduce the polar coordinate system \(\rho\), \(\varphi\) in such a way that the pole, point O, is located at the point of detection of the submarine, and the polar axis passes through the point where the interceptor ship is located. Then the dynamics of the escaping submarine are described by the equations \[\dot{\rho}^{E}=v,\qquad\dot{\varphi}^{E}=0.\] The pursuer does not know the speed \(v\) with certainty, but it is known that it is chosen from a discrete set \(V^{E}\). The maximum possible speed of the pursuer ship is denoted by \(V^{P}\). The pursuer can guarantee the capture by trying all elements of the set \(V^{E}\). Initially, the ship assumes that the runaway has a speed \(v_{1}\in V^{E}\). To capture the submarine, at time \(t_{0}\) the pursuer begins moving at a speed of \(V^{P}\) towards point O and continues until time \(t_{1}\), at which point the players are at the same distance from point O, meaning that the equations \[\rho_{1}^{P}=\rho_{1}^{E}\] and \[\int_{t_{0}}^{t_{1}}v_{1}\,dt+V^{P}(t_{1}-t_{0})=D_{0}\] are satisfied. From time \(t_{1}\), the pursuer must move, selecting a speed such that it constantly remains at the same distance from point O as the fleeing ship. To achieve this, the speed of the intercepting ship is divided into two components: radial \(V_{\rho}\) and tangential \(V_{\varphi}\). The radial component is the speed at which the ship moves away from the pole, i.e. \[V_{\rho}=\dot{\rho},\] and the tangential component is the linear rotational velocity with respect to the pole, i.e. \[V_{\varphi}=\rho\dot{\varphi}.\] To make the encounter happen, the radial component of the pursuer's velocity is taken equal to the velocity of the fugitive. Then, to find the trajectory of the pursuer, the system of differential equations \[\dot{\rho}=v_{1},\qquad\dot{\varphi}^{2}\rho^{2}=(V^{P})^{2}-(v_{1})^{2}\] must be solved, with the initial conditions \[\varphi(t^{*})=0,\qquad\rho(t_{1})=v_{1}t_{1}.\] Solving it, we find \[\varphi(t)=\frac{\sqrt{(V^{P})^{2}-(v_{1})^{2}}}{v_{1}}\ln\frac{v_{1}t}{v_{1}t_{1}},\qquad\rho(t)=v_{1}t.\] Let us express time as a function of the polar angle: \[t(\varphi)=t_{1}\exp\left(\frac{v_{1}\varphi}{\sqrt{(V^{P})^{2}-(v_{1})^{2}}}\right).\] Thus, the trajectory consists of linear segments and logarithmic spiral segments. In [2] it is proven that during movement along the spiral the encounter will occur in a time not exceeding the time of passing one turn. Therefore, if the ship, having traversed one turn of the spiral, does not find the submarine, the initial assumption about the speed of the fleeing vessel was incorrect. Then the next speed \(v_{2}\in V^{E}\) is chosen.
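The speed-checking procedure described here (and continued below for the second assumed speed) can be summarized in a short numerical sketch. This is not the MAPLE 17 implementation used in the paper; it is a minimal Python illustration that assumes \(V^{P}\) exceeds every candidate speed, alternates a radial closing segment with one full spiral turn, and returns the guaranteed search time for a given checking order.

```python
import math

def guaranteed_capture_time(D0, VP, speeds):
    """Guaranteed search time for the strategy described above: for each
    candidate evader speed v the pursuer first equalises its distance from
    the pole O with the evader's (radial segment), then follows one full
    turn of the logarithmic spiral rho(t) = v*t.  Assumes VP > max(speeds)."""
    t, rho_p = 0.0, D0          # elapsed time; pursuer's distance from O
    for v in speeds:
        rho_e = v * t           # evader's radius under the current assumption
        if rho_p >= rho_e:      # pursuer farther out: move towards O
            t += (rho_p - rho_e) / (VP + v)
        else:                   # pursuer closer in: move away from O
            t += (rho_e - rho_p) / (VP - v)
        # one full spiral turn: t(phi = 2*pi) from the expression derived above
        t *= math.exp(2.0 * math.pi * v / math.sqrt(VP ** 2 - v ** 2))
        rho_p = v * t           # pursuer ends the turn at the evader's assumed radius
    return t

# Data of Example 1 below: D0 = 200 km, VP = 100 km/h, speeds checked in descending order.
print(guaranteed_capture_time(200.0, 100.0, [78, 56, 8]))
```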
Thus, during time \(t_{2}\) the fleeing vessel has covered the distance \(\rho_{E}(t_{2})=v_{2}t_{2}\), and the pursuer \(\rho_{P}(t_{2})=v_{1}t_{2}\). If \(\rho_{P}(t_{2})>\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{P}(t_{2})-\rho_{E}(t_{2})\), and to find the moment of time \(t_{3}\) it is necessary to solve the equation \[\int_{t_{2}}^{t_{3}}v_{2}\,dt+V^{P}(t_{3}-t_{2})=D_{2}.\] If \(\rho_{P}(t_{2})<\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{E}(t_{2})-\rho_{P}(t_{2})\), and to find the time \(t_{3}\) it is necessary to solve the equation \[V^{P}(t_{3}-t_{2})-\int_{t_{2}}^{t_{3}}v_{2}\,dt=D_{2}.\] After moving along a straight section, the pursuer again moves along a spiral. To reduce the time, it is expedient for the pursuer to order the speed search in descending order. However, if this becomes known to the evader, he can move at the minimum speed, which will maximize the search time. Thus, the following game is obtained. The set of strategies for the submarine is the set of combinations of possible speeds \(v\) of its movement and directions of movement \(\alpha\). The set of strategies for the intercepting ship is the set of all possible permutations of the elements of \(V^{E}\). The matrix of the resulting game consists of elements T, which represent the capture time. Now suppose that the intercepting ship needs to detect \(n\) submarines, each of which requires \(\tau_{ij}\) hours to capture. To carry out the interception there are \(m\) boats, each of which is directed to a submarine. The matrix \(A=\{\tau_{ij}\}\) is known, which represents the efficiency matrix of the search operation for the \(i\)-th boat and the \(j\)-th submarine. The task is to construct an assignment plan \(X=\{x_{ij}\},\ i=1..m,\ j=1..n\), which minimizes the search time, while assigning each boat to search for no more than one submarine, and each submarine can be searched by no more than one boat. The values \(x_{ij}\) can take only two values: \[x_{ij}=\begin{cases}1,&\text{if boat }i\text{ is assigned to submarine }j,\\ 0,&\text{otherwise.}\end{cases}\] Mathematical formulation of the optimal assignment problem: \[\min z=\min\sum_{i=1}^{m}\sum_{j=1}^{n}\tau_{ij}\,x_{ij}\] \[\sum_{i=1}^{m}x_{ij}\leq 1,\ j=1..n\] \[\sum_{j=1}^{n}x_{ij}\leq 1,\ i=1..m\] \[x_{ij}\geq 0\] In order for the problem of optimal assignments to have an optimal solution, it is necessary and sufficient that the number of boats equals the number of submarines, i.e., \(n=m\). Under this condition the inequality constraints become equalities: \[\min z=\min\sum_{i=1}^{n}\sum_{j=1}^{n}\tau_{ij}\,x_{ij}\] \[\sum_{i=1}^{n}x_{ij}=1,\ j=1..n\] \[\sum_{j=1}^{n}x_{ij}=1,\ i=1..n\] \[x_{ij}\geq 0\] If \(n\neq m\), the assignment problem is unbalanced. Any assignment problem can be balanced by introducing the necessary number of dummy boats or submarines.
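For the balanced assignment problem formulated above, a standard solver can be used. The sketch below relies on scipy.optimize.linear_sum_assignment (a Hungarian-type algorithm, in the spirit of the method described next); the 4x4 time matrix is a placeholder, not the data of the examples below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Placeholder efficiency matrix tau[i][j]: time (hours) for boat i to find submarine j.
tau = np.array([[4.0, 2.5, 7.1, 3.3],
                [5.2, 1.9, 6.4, 2.8],
                [3.1, 4.4, 2.2, 5.0],
                [6.0, 3.7, 4.9, 1.5]])

rows, cols = linear_sum_assignment(tau)   # minimises the total time over perfect matchings
X = np.zeros_like(tau, dtype=int)
X[rows, cols] = 1                         # the 0/1 assignment plan X
print("assignment plan:\n", X)
print("total search time:", tau[rows, cols].sum())
```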
The dual problem of the optimal assignments is \[\max\omega=\sum_{i=1}^{n}u_{i}+\sum_{j=1}^{n}w_{j}\] \[u_{i}+w_{j}\leq\tau_{ij},\ i=1..n,\ j=1..n.\] The Hungarian method proceeds as follows. 1. In the original performance matrix A, determine the minimum element in each row and subtract it from all other elements of the row. 2. In the matrix obtained in the first step, find the minimum element in each column and subtract it from all other elements of the column. If a feasible solution is not obtained after steps 1 and 2, the following should be performed: a. In the last matrix, draw the minimum number of horizontal and vertical lines across rows and columns to cross out all zero elements. b. Find the minimum non-crossed-out element, subtract it from all other non-crossed-out elements, and add it to all elements at the intersections of the lines drawn in the previous step. c. If the new distribution of zero elements does not allow a feasible solution, repeat step 2a; otherwise proceed to step 3. 3. The optimal assignments correspond to the zero elements obtained in step 2. Let us consider another case. Suppose that an interceptor ship sends \(n\) boats after a single submarine. The escaping submarine has a discrete set of speeds and directions of movement, and it needs to choose how to act in order to maximize the time of capture. In other words, the escaping submarine must choose the best course of action, or the best behavioral strategy. Let us use decision theory. Each boat tries to intercept the submarine one at a time in a random order, so there are \(n\) steps. Suppose we are at step \(t\). It is necessary to determine the probability of winning if strategy \(t\) is chosen, given that it is better than all the previous ones, i.e. the probability that it is the best one overall; denote this probability by \(g_{t}\). In addition, define the probability of winning if the first \(t\) strategies are skipped and the escaping submarine then acts optimally; denote this probability by \(h_{t}\). According to the principle of dynamic programming, the escaping submarine knows how to act optimally starting from step \(t+1\). The optimal behavioral strategy is: if the strategy at step \(t\) is not better than all the previous ones, it should be rejected; if it is indeed the best among the first \(t\), then \(g_{t}\) and \(h_{t}\) must be compared. If \(g_{t}\geq h_{t}\), the escaping submarine accepts the current strategy, otherwise it skips it; here \(g_{t}=\frac{t}{n}\) grows with \(t\), while \(h_{t}\) is a monotonically non-increasing function. If E skips all strategies up to the last one, the chance of winning is \(h_{n-1}=\frac{1}{n}\), and hence \[h_{n-2}=\frac{1}{n-1}\cdot\frac{n-1}{n}+\frac{n-2}{n-1}\cdot\frac{1}{n}=\frac{(n-2)+(n-1)}{n(n-1)},\] \[h_{t}=\frac{t}{n}\left(\frac{1}{t}+\frac{1}{t+1}+\cdots+\frac{1}{n-1}\right),\] so that \[\frac{h_{t}}{g_{t}}=\frac{1}{t}+\frac{1}{t+1}+\cdots+\frac{1}{n-1}.\] The number corresponding to the intersection point of the two functions was found in [3] and is equal to \(t=\frac{n}{e}\). In this case \(h_{t}=g_{t}=\frac{t}{n}=\frac{1}{e}\), i.e. the probability of success for \(n\to\infty\) is \(\frac{1}{e}\approx\) 0,368.
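The 1/e threshold can be checked with a quick Monte-Carlo experiment. The following sketch is only an illustration of the stopping rule discussed above (skip roughly n/e candidates, then accept the first one better than all previously seen); it is not taken from the paper.

```python
import math
import random

def success_probability(n, trials=100_000):
    """Monte-Carlo estimate of the winning probability of the rule:
    skip the first n/e candidates, then accept the first one that is
    better than everything seen so far (fall back to the last one)."""
    skip = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # rank n-1 is the best candidate
        random.shuffle(ranks)
        best_seen = max(ranks[:skip], default=-1)
        chosen = ranks[-1]
        for r in ranks[skip:]:
            if r > best_seen:
                chosen = r
                break
        wins += (chosen == n - 1)
    return wins / trials

print(success_probability(50))          # approaches 1/e ~ 0.368 for large n
```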
Using the Maple software package, several examples were solved. **Example 1**. Let the initial distance between the pursuer and the fugitive be 200 kilometers. The fugitive chooses a speed from the set \(V^{E}=\{8,56,78\}\) and a direction from the set \(\alpha=\{23,137,182\}\). The maximum speed of the pursuer is \(V^{P}=100\) km/h. Then the set of fugitive strategies is \[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2}),(\alpha_{3},v_{3}),\] and the set of pursuer strategies is \[(v_{1},v_{2},v_{3}),(v_{1},v_{3},v_{2}),(v_{2},v_{1},v_{3}),(v_{2},v_{3},v_{1}),(v_{3},v_{1},v_{2}),(v_{3},v_{2},v_{1}).\] The resulting game matrix of capture times is then computed for each pair of strategies. We restate this example for UAVs. Let the distance between the fugitive UAV and the ground be 100 meters; the fugitive UAV selects its X-axis speed from the set \(V^{E}=\{8,56,78\}\) and a value from the set \(\alpha=\{23,37,82\}\) as the Y-axis direction. The maximum speed of the chaser is \(V^{P}=120\) m/min. Then the fugitive policy set is \[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2}),(\alpha_{3},v_{3}).\] **Example 2**. Suppose an interceptor ship has detected 4 submarines. The initial distance to each of them is 100 kilometers, 200 kilometers, 50 kilometers, and 163 kilometers, respectively. The pursuer has 4 boats to catch the submarines. The maximum speed of each boat is 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first submarine moves along the straight line \(\alpha_{1}=23\) at a speed of \(v_{1}=23\) km/h, the second one along \(\alpha_{2}=137\) at \(v_{2}=50\) km/h, the third one along \(\alpha_{3}=187\) at \(v_{3}=67\) km/h, and the fourth one along \(\alpha_{4}=50\) at \(v_{4}=70\) km/h. Then the matrix for the assignment problem looks as follows: \begin{tabular}{c c c c} 1903 & 386 & 9,96 & 52 \\ 1,15\(\cdot\)10\({}^{71}\) & 6,4\(\cdot\)10\({}^{51}\) & 1,3\(\cdot\)10\({}^{34}\) & 1,89\(\cdot\)10\({}^{26}\) \\ 5,6\(\cdot\)10\({}^{172}\) & 1,13\(\cdot\)10\({}^{90}\) & 2\(\cdot\)10\({}^{32}\) & 3,7\(\cdot\)10\({}^{51}\) \\ 2,4\(\cdot\)10\({}^{63}\) & 7,56\(\cdot\)10\({}^{26}\) & 1,28\(\cdot\)10\({}^{9}\) & 5,96\(\cdot\)10\({}^{14}\) \\ \end{tabular}

Figure 1: Nine strategies for simulating the runaway drone.

The game can be solved using the Hungarian method. We transform this example into a search and pursuit between quadrotor UAVs. Suppose an intercepting quadcopter detects 4 intruding quadcopters; the chaser has 4 interceptor drones to pursue them. The maximum speed of each of the four interceptors along the X, Y and Z axes is 74 km/h, 90 km/h, 178 km/h and 124 km/h, respectively.
The first intruding quadrotor UAV has a maximum X-axis speed of \(v_{1}=23\) m/min, a maximum Y-axis speed of \(a_{1}=23\) m/min, and a height of 100 meters. The second intruding quadrotor UAV has a maximum X-axis speed of \(v_{2}=50\) m/min, a maximum Y-axis speed of \(a_{2}=137\) m/min, and a height of 200 meters. The third intruding quadrotor UAV has a maximum X-axis speed of \(v_{3}=67\) m/min, a maximum Y-axis speed of \(a_{3}=7\) m/min, and a height of 50 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed of \(v_{4}=70\) m/min, a maximum Y-axis speed of \(a_{4}=50\) m/min, and a height of 163 meters. The matching matrix is: \begin{tabular}{c c c c} 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ \end{tabular}

Figure 2.2: Simulated X, Y, and Z axis value changes of the chaser and escaper drones.

**Example 3**. Suppose the initial distance between the pursuer and the evader was 50 kilometers. The evader chooses a velocity and a direction of motion from given discrete sets, and the game matrix of capture times is constructed in the same way as in Example 1.
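The capture-time matrices of Examples 1 and 3 define zero-sum games in which the evader maximizes and the pursuer minimizes the capture time. As an illustration of how such a game can be solved in mixed strategies, the sketch below formulates the standard linear program with scipy.optimize.linprog; the 3x3 payoff matrix is a placeholder, not one of the matrices computed above.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(T):
    """Mixed-strategy value and optimal row strategy of the zero-sum game with
    payoff matrix T (row player maximises, column player minimises)."""
    m, n = T.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # maximise the value v  <=>  minimise -v
    A_ub = np.hstack([-T.T, np.ones((n, 1))])      # v - sum_i p_i*T[i,j] <= 0 for every column j
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                              # probabilities sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1], res.x[:m]

# Placeholder capture-time matrix (hours): rows = evader strategies, columns = pursuer orderings.
T = np.array([[2.0, 3.5, 4.0],
              [5.0, 2.5, 3.0],
              [4.5, 4.0, 2.0]])
value, p = solve_matrix_game(T)
print("game value:", round(value, 3), " evader's mixed strategy:", np.round(p, 3))
```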
**Example 4**. Suppose a ship-interceptor has detected 4 submarines. The initial distance to each of them is 30 kilometers, 11 kilometers, 62 kilometers, and 8 kilometers, respectively. The pursuer has 4 boats to catch the submarines. The maximum speed of each boat is 60 km/h, 65 km/h, 95 km/h, and 105 km/h, respectively. The first submarine moves along the line \(\alpha_{1}=7\); the resulting assignment problem is solved in the same way as in Example 2.
**Example 5**. Suppose a ship-interceptor has detected 5 submarines. The initial distance between him and the escaping submarines is the same and equals 20 km. After detection, each submarine goes in an unknown direction and at a different speed. The interceptor must intercept all escaping submarines; to do this, he intercepts them sequentially. Each submarine moves along its own straight line at its own speed: for instance, the fourth one moves along \(\alpha_{4}=45\) at \(v_{4}=45\) km/h and the fifth one along \(\alpha_{5}=50\) at \(v_{5}=50\) km/h. The speed of the ship-interceptor is 250 km/h. We restate this example for UAVs: suppose a drone interceptor detects 5 intruding drones. The five intruders are quadcopter drones with heights of 20 m, 40 m, 60 m, 80 m and 100 m. The first intruder has an X-axis speed of \(v_{1}=120\) m/min and a Y-axis speed of \(a_{1}=20\) m/min; the second, \(v_{2}=90\) m/min and \(a_{2}=50\) m/min; the third, \(v_{3}=70\) m/min and \(a_{3}=70\) m/min; the fourth, \(v_{4}=50\) m/min and \(a_{4}=90\) m/min; the fifth, \(v_{5}=20\) m/min and \(a_{5}=120\) m/min. The drone interceptor has a speed of 250 m/min.

## 1 Introduction

Research on the problem of search began intensively during World War II. Later, search theory methods found application in solving various other problems [8], [9]. Two monographs on the theory of search for moving objects can be distinguished, devoted to various aspects of search theory [10], [11]. All developments in the theory of object search can be conditionally divided into three groups: discrete search, continuous search, and search game problems. The first group includes works devoted to the search for an object with a finite or countable set of states. The second group includes works on the search for an object whose set of positions forms a certain (finite or infinite) area in the plane or in the search space. The search tasks of the first and second groups are those in which the object being sought does not counteract the search system. Studies that consider search problems in conditions of counteraction form the third group, namely search game problems. The following articles can be distinguished from the works of the first group. In [12], Staroverova O.V.
presents a search algorithm that minimizes the average search time under the condition that the probability \(\alpha_{t}\) does not depend on the cell number and the search ends only when the pursuer detects the fleeing object. Kelin investigated the possibility of solving the problem of optimal distribution of search efforts using a Markov model, when the fleeing object E moves randomly. He considered two different cases: (1) the movement of the evader does not depend on its location, and the pursuer does not receive any information about the location of the evader during the search process; (2) the movement of the evader depends on its last location, and at each moment the search system knows the location of the evader at the previous moment. In both cases, it is assumed that the search system knows the probabilistic law of the evader's movement [13]. Koopman considered the problem of search on a surface, where the sought objects are uniformly distributed and their course angles are unknown (with equal probability in \([0,\,2\pi]\)). Assuming that the searcher P moves at a constant speed, he determined the average number of targets that enter a given circle centered on P per unit time at an arbitrary angle \(\gamma\in(0,2\pi)\) and at a specified angle \(\gamma\in[\gamma^{\prime},\gamma^{\prime}+d\gamma^{\prime}]\), where \(\gamma\) is the angle between the searcher's velocity vector and the line connecting P and E [14].

Figure 5: Quadrotor drone tracking path.

In [15], Koopman solved the problem of optimal distribution of given search efforts, maximizing the probability of detecting a stationary target with a known a priori distribution of its location and an exponential conditional detection function. This was the first general result obtained in this field. Lapshin presented Koopman's theory and provided examples showing how to use it to solve specific problems [16]. Charnes and Cooper [17] formalized the problem of optimal distribution of search efforts as a convex programming problem. Mac-Kuen and Miller [18] investigated the search problem in which it is necessary to decide whether to start the search or not, and, after starting the search, whether to continue or stop it. They obtained a general functional equation. Pozner [19] solved the two-stage (preliminary and final) search problem for a lost satellite. Dubrovin and Sirotin [20] solved the problem of determining the average time of finding an escaping object in a rectangular search area, if the initial locations of the pursuer and the escaping object in the specified area follow a uniform probability distribution and their courses are arbitrary. The sought object tries to leave the search area after being detected by the pursuer.
One of the first solved search game problems is as follows: a plane is searching for a submarine passing through a strait of varying width and of considerable length; the submarine cannot remain submerged throughout the entire crossing time; the task is to determine the optimal distribution of search efforts for the plane and the probability distribution of the submarine's submergence or surfacing location [21]. Discrete search game problems can be formulated as follows: the sought object is hidden in one of the cells, and the searcher sequentially examines them. When examining cell \(i\) for a duration of \(t_{i}\), the probability of detection, given that the target is in that cell, is equal to \(p_{i}\). The payoff is \(m+\sum_{k=1}^{m}t_{k}\), where \(m\) is the number of cells examined before the desired object is detected. Bram [22] solved this problem for \(p_{i}=1\). In such search games the cost for the pursuer P is the probability of not catching the evader E. In [30], Forman considers the problem of searching for "Princess and Monster" on a circle (with arbitrary initial positions and end time T) and in a certain area on a plane, assuming that the players are far from the boundary and cannot reach it in time T. The cost (for E) is the probability of being caught. Halpern considers a minimax problem that differs from the "Princess and Monster" problem in that E knows the trajectory of P [31]. For the case of a rectangular area, the "Princess and Monster" problem was solved by Gal [32]. Using this result, Fitzgerald obtained a solution for an arbitrary convex area, as well as for an area that is a finite union of convex areas [33]. Wilson [34] considers a wide class of differential search games of given duration, in which the players receive information about the initial position. He showed that such games have a solution in the class of mixed strategies and that a mixed strategy can be implemented by using a pure strategy whose choice depends on a randomly selected number from the unit interval. This work is dedicated to the study of search problems where the target is mobile. Depending on the information available to the search participants, different approaches are proposed for finding a solution, i.e., determining optimal behavior.

## 2 Fundamentals of Object Search Theory (Background Information)

The subject of object search theory is the search for real objects in various environments. Search can be defined as the process of purposeful exploration of a specific area of space to detect an object located there. Detection refers to obtaining information about the location of an object by establishing direct energetic contact with it. Detection is carried out using detection means such as optical, radar, hydroacoustic and other devices. One way to study the search process is to build and analyze mathematical models that reflect the objective laws of search and allow us to establish causal relationships between the conditions of search performance and its results. The search process involves two sides: the search object and the observer, each of which can be either individual or group. The search objects are various objects located in different environments, such as aircraft, various objects on the surface of the Earth, ships and vessels, etc.
The search object has two characteristic features: Its properties differ from the properties of the environment in which the search is carried out. Information about the location of the object before the start of the search and during its execution is usually uncertain. It is this uncertainty that causes search actions, the essence of which is to obtain information about the location of the object. The contrast of the search object against the background of the environment creates the possibility of its detection. Search objects can be characterized by the presence or absence of radiation. Therefore, the operation of detection tools is based either on the detection of a signal reflected from the search object or on the reception of the object's own radiation. The search process largely depends on the properties of the detection object, as well as on the parameters of the detection equipment and the characteristics of the surrounding environment. All these issues form the physical basis of the theory of search. During the search process, the use of detection tools is combined with the active maneuvering of the observer who carries these tools. Therefore, the study of the patterns of mutual movement of the observer and the search object becomes especially important. These patterns constitute an integral part of the theory of object search - the kinematics of search. An important place in the theory of object search is occupied by the justification and methods of calculating the indicators of the success of the search - criteria of its effectiveness. The ultimate goal of the theory of search is to choose the optimal methods for performing search actions in a specific situation and under the conditions of the search - the so-called search situation. The choice of the optimal search method is based on an analysis of the mathematical model of the corresponding search situation and reduces to establishing control parameters of the search that ensure the solution of the search task in the shortest or specified time with minimal search efforts. Mathematical Models for Object Search Constructing a mathematical model requires identifying all the essential factors and conditions that determine both the state and the development of the search process, as well as the possible control of this process. These factors and conditions are variables and are called elements of the model. The variables that can be changed are called controllable, while those that cannot be changed are called uncontrollable. Depending on the nature of the search process, its mathematical model may contain only uncontrollable variables, or both controllable and uncontrollable ones. Mathematical models of the first type are called descriptive, while models of the second type are normative. In a descriptive model, there is no observer deciding about the search, nor is there a search object deciding to evade. A normative model is characterized by the presence of at least one of the parties making a decision. Depending on the amount of information about the search situation and the regularities underlying it, such models can be classified into one of the following levels: deterministic, stochastic, and uncertain. At the deterministic level, a normative model is constructed when the outcome of the search situation is subject to regularities and the factors influencing this outcome can be accurately measured or estimated, while random factors are either absent or can be neglected. 
In this case, it is quite difficult to collect and process data, except for simple situations that fully characterize the search conditions. In addition, purely mathematical difficulties arise in constructing and analyzing such a model. The task of choosing the optimal search method in the conditions of a normative model of the deterministic level is reduced to maximizing or minimizing the efficiency criterion. At the stochastic level, the normative model, in accordance with probabilistic regularities, is represented as a random process, the course and outcome of which are described by certain characteristics of random variables. Construction of a model at this level is possible if there is sufficient factual material to estimate the necessary probability distributions. In constructing a model at the stochastic level, the method of statistical trials is widely used in addition to the classical apparatus of probability theory, and the principle of optimality is based on the maximization of the mathematical expectation of the efficiency criterion. Thus, the task is practically transferred to the deterministic level. The indeterminate level is characterized by such a volume of information at which only a set of possible search situations is known, but without any a priori information about the probability of each of them. Usually, such a volume of information is characteristic of a conflict situation in which the object of the search and the observer pursue directly opposing goals, choosing a certain way of action to achieve them. Building a model and choosing the optimal way at the indeterminate level encounters some difficulties, since the principles of optimality in this case may not be entirely clear. Establishing such principles of optimality and finding solutions to such problems constitute the content of game theory. Game theory deals with the study of mathematical models of decision-making in conditions of conflict. To construct a formal mathematical model of decision-making in conditions of conflict, it is necessary to mathematically describe all possible actions of the participants in the conflict and the results of these actions. The results of the players' actions are evaluated using a numerical function called the payoff function. ## 2 Task formulation Let's move on to the description of the search process, which is the focus of the main part of this work. A ship-interceptor equipped with a hydro locator has detected the periscope of a submarine, which immediately disappeared in an unknown direction. A hydro locator is a means of sound detection of underwater objects using acoustic radiation. It consists of a transceiver that sends sound pulses in the required direction and receives reflected pulses if the transmission, encountering any object on its path, is reflected from it. After the initial detection of the submarine, the task of the ship-interceptor is to catch the submarine in the shortest possible time. It is assumed that although the ship does not know the exact speed of the submarine, it knows a discrete set of speeds, one of which is the actual speed of the submarine. The formulated problem is a problem of secondary search for a moving object. The ship-interceptor will be referred to as the pursuer and the submarine as the evader, denoted as P and E, respectively. The work considers cases of continuous search, when the resistance of the boat and the ship is not considered, game problems of search, and the case of searching for n submarines. 
Thus, the goals of this research are the mathematical formalization of the process of search and interception of moving objects under various information conditions, the development of a procedure for finding the optimal solution, and the implementation of the algorithm using the MAPLE 17 software package.

## 3 One Pursuer and One Evader Scenario

Algorithm for finding the guaranteed capture time. Let us present an algorithm for finding the completion time of the search under conditions when the pursuer does not know the speed of the evader with certainty. To do this, let us first describe the strategy of the pursuer's behavior. Assume that the speed of the pursuer is so much greater than the speed of the evading submarine that the completion of the search is guaranteed. At the initial moment of detection the pursuer knows the exact position of the evader and hence the initial distance \(D_{0}\) between them. As before, we introduce polar coordinates \((\rho,\varphi)\) with the pole O at the detection point; the evader moves with \(\dot{\rho}^{E}=v\), \(\dot{\varphi}^{E}=0\), where the speed \(v\) is chosen from the discrete set \(V^{E}\), while the motion of the pursuer is described by
\[\dot{\rho}^{P}=\alpha,\ |\alpha|\leq\nu_{\rho},\qquad\dot{\varphi}^{P}=\beta,\ |\beta|\leq\nu_{\varphi},\qquad\nu^{P}=\sqrt{\left(\nu_{\rho}\right)^{2}+\left(\nu_{\varphi}\right)^{2}}.\] Since the speed of the fleeing vessel is not known with certainty, the pursuer assumes that E has a speed \(\nu_{1}\in V^{E}\). To capture the submarine, at time \(t_{0}\) the pursuer begins moving towards point O with a speed of \(\nu^{P}\) and continues until time \(t_{1}\), at which point both players are at the same distance from point O, i.e. \[\rho_{1}^{P}=\rho_{1}^{E}\] and \[\int_{t_{0}}^{t_{1}}\nu_{1}\,dt+\nu^{P}(t_{1}-t_{0})=D_{0}.\] If the encounter has not occurred by time \(t_{1}\), the pursuer, choosing a direction of circumnavigation, continues to move around point O in such a way as to constantly remain at the same distance from point O as the fleeing ship. Let us find the trajectory of motion corresponding to this behavior strategy. We consider the direction of circumnavigation coinciding with the positive direction of the polar angle.
The speed of the interceptor ship can be decomposed into two components: radial \(\nu_{\rho}\) and tangential \(\nu_{\varphi}\). The radial component is the speed at which the ship moves away from the pole, i.e.
\[\nu_{\rho}=\dot{\rho}\]
The tangential component is the linear speed of rotation relative to the pole, i.e.
\[\nu_{\varphi}=\rho\dot{\varphi}\]
In order for the encounter to occur, the pursuer moves at maximum speed, keeping the radial component of the velocity equal to the speed of the fleeing vessel. Then, to find the trajectory of the pursuer, it is necessary to solve the system of differential equations:
\[\dot{\rho}=\nu_{1}\]
\[\dot{\varphi}^{2}\rho^{2}=(\nu^{p})^{2}-(\nu_{1})^{2}\]
The initial conditions for this system are
\[\varphi(t_{1})=0\]
\[\rho(t_{1})=\nu_{1}t_{1}\]
Solving it, we find:
\[\varphi(t)=\frac{\sqrt{(\nu^{p})^{2}-(\nu_{1})^{2}}}{\nu_{1}}\ln\frac{\nu_{1}t}{\nu_{1}t_{1}}\]
\[\rho(t)=v_{1}t\]
Then the search time can be expressed as a function of the polar angle:
\[t(\varphi)=t_{1}\exp\left(\frac{v_{1}\varphi}{\sqrt{(v^{P})^{2}-(v_{1})^{2}}}\right)\]
Thus, the trajectory consists of straight-line segments and logarithmic spiral segments. By adhering to this behavior strategy, the pursuer will detect the submarine within a time not exceeding one full turn of the spiral. If the ship, having completed the spiral turn, does not find the submarine, it means that the initial assumption about the speed of the evader was incorrect. Therefore, it is necessary to choose the next speed \(v_{2}\in V^{E}\) and assume that it is the actual speed. During the time \(t_{2}\) the evader has covered a distance of \(\rho_{E}(t_{2})=v_{2}t_{2}\), while the pursuer has covered \(\rho_{P}(t_{2})=v_{1}t_{2}\). There are two cases. If \(\rho_{P}(t_{2})>\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{P}(t_{2})-\rho_{E}(t_{2})\), and to find the time \(t_{3}\) the equation
\[\int_{t_{2}}^{t_{3}}v_{2}dt+v^{P}(t_{3}-t_{2})=D_{2}\]
must be solved. If \(\rho_{P}(t_{2})<\rho_{E}(t_{2})\), then the distance between the players equals \(D_{2}=\rho_{E}(t_{2})-\rho_{P}(t_{2})\), and to find the moment in time \(t_{3}\) we need to solve the equation:
\[v^{P}(t_{3}-t_{2})-\int_{t_{2}}^{t_{3}}v_{2}dt=D_{2}\]
This algorithm for computing the guaranteed capture time is implemented using the software package MAPLE 17. In our task, the number of speeds n is finite and known in advance, and each speed needs to be checked for validity. It is assumed that, when creating the schedule, the checking can start with any of the speeds, and the duration of checking each speed depends on the created schedule. Using the algorithm for finding the guaranteed search time outlined above, we construct a matrix of times \(T=(t_{ij})\), where \(t_{ij}\) is the duration of checking a speed when speed \(v_{i}\) precedes speed \(v_{j}\). Then the time taken by the pursuer to check all speeds, i.e., the guaranteed search time, depends on the order of checking. Denote by \(F_{max}=F_{[n]}=\sum_{i=1}^{n}t_{[i-1],[i]}\) the maximum total check duration. The task is to check each of the n speeds once and only once, in an order that minimizes this maximum duration. It is necessary to find a matrix \(X\) of order \(n\) with elements
\[x_{ik}=\begin{cases}1,&\text{if speed }v_{k}\text{ is checked immediately after speed }v_{i},\\ 0,&\text{otherwise},\end{cases}\]
that minimizes the maximum total check duration \(F_{max}\). This is a travelling-salesman-type problem on the matrix \(T\). We solve it by the branch and bound method, in which the original matrix is successively split into submatrices, each supplied with its own lower bound on the achievable search time.
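The entries of \(T\) come from the guaranteed-time algorithm above. As an illustration of how they can be computed, the following is a minimal sketch in Python (the paper's own implementation is in MAPLE 17); it assumes each candidate evader speed is constant, so that a check consists of a straight approach leg followed by one full turn of the logarithmic spiral, and all function names, variable names, and the example values of \(D_{0}\) and \(v^{P}\) are ours.

```python
import math

def spiral_exit_time(t_start, v_e, v_p):
    # End of one full turn (phi = 2*pi) of the spiral
    # t(phi) = t_start * exp(v_e * phi / sqrt(v_p**2 - v_e**2)); requires v_p > v_e.
    return t_start * math.exp(2.0 * math.pi * v_e / math.sqrt(v_p**2 - v_e**2))

def guaranteed_search_time(order, D0, v_p):
    """Guaranteed capture time when the candidate speeds are checked in `order`.

    Returns the total time and the duration spent checking each speed;
    the latter are the building blocks of the matrix T = (t_ij).
    """
    t = 0.0           # current time (t_0 = 0)
    rho_p = D0        # pursuer's current distance from the pole O
    durations = []
    for v_e in order:
        start = t
        rho_e = v_e * t                    # evader's radius if its speed were v_e
        gap = abs(rho_p - rho_e)
        if rho_p >= rho_e:
            t += gap / (v_p + v_e)         # pursuer closes inward, evader moves outward
        else:
            t += gap / (v_p - v_e)         # pursuer must chase outward
        t = spiral_exit_time(t, v_e, v_p)  # one full spiral turn at radial speed v_e
        durations.append(t - start)
        rho_p = v_e * t                    # on the spiral, rho(t) = v_e * t
    return t, durations

# Example: six candidate speeds checked in increasing order.
# D0 = 100 and v_p = 150 are arbitrary illustrative values, not taken from the paper.
total, checks = guaranteed_search_time([10, 20, 30, 40, 50, 60], 100.0, 150.0)
```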
Thus, the resulting matrices are characterized by an increasing lower bound and (or) a larger number of fixed steps. In addition, for each subsequent matrix the number of remaining checks is smaller than for the previous one, and eventually a state is reached where the permutation is fully defined. The situations where the solution is obtained immediately, or the matrix is excluded, are obvious. The essence of branching lies in the concepts of reduction and selection. The reduction aims to obtain at least one zero in each row and column of the original matrix T. Since each solution of the problem includes one and only one element from each row and each column of matrix T, subtracting or adding a constant to every element of a column or row changes all solutions by the same amount and does not shift the optimum. Subtract a constant h from each element of a row or column of matrix T and let the resulting matrix be \(T^{\prime}\). Then the optimal solution found from \(T^{\prime}\) is also optimal for T, i.e., both matrices have the same permutation that minimizes time. We can choose \(v^{\prime}=h\) as the lower bound for solutions obtained from \(T^{\prime}\).
Subtraction can continue until each column and row contains at least one zero (i.e., the minimum element in each row and column is zero). The sum of all reduction constants determines the lower bound for the original problem. The matrix T is called reduced if it cannot be reduced further. In this case, finding route options is associated with studying a particular transition, say from i to j. As a result, instead of the original matrix we consider two matrices: 1. The matrix \(T_{ij}\), which is associated with finding the best of all solutions given by matrix T that include the order (i, j). 2. The matrix \(T_{n(ij)}\), which is associated with choosing the best of all solutions that do not include the order (i, j). After fixing the transition from i to j, we need to exclude transitions from i to speeds other than j, and transitions to j from speeds other than i, by setting all elements of row i and column j, except \(t_{ij}\), to infinity. We also need to prohibit the order (j, i) in the future by setting \(t_{ji}=\infty\), because a single pass through all speeds cannot include both (i, j) and (j, i) simultaneously. Since these prohibitions may eliminate some zeros of matrix T, a further reduction of the matrix, and hence a new, larger lower bound for the solutions associated with \(T_{ij}\), is not excluded. In the matrix \(T_{n(ij)}\) the transition from i to j is prohibited, i.e., \(t_{ij}\) is set to infinity. In this case too, the possibility of further reducing the matrix, and the resulting increase of the lower bound for solutions obtained from \(T_{n(ij)}\), is not excluded. The choice of (i, j) should be made so as to maximize the lower bound for \(T_{n(ij)}\), which may allow a number of trajectories to be eliminated without further branching. To achieve this, all possible pairs (i, j) are examined, and the choice is made in such a way that the sum of the two consecutive reducing constants is maximal. Obviously, transitions (i, j) corresponding to zero elements of matrix T should be prohibited first, since the choice of nonzero elements does not contribute to further reducing \(T_{n(ij)}\).
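As an illustration of the reduction step and of the choice of the branching transition, a minimal sketch (our own Python, with `float('inf')` marking prohibited transitions; names are ours) might look as follows:

```python
import math

INF = math.inf

def reduce_matrix(T):
    """Subtract row and column minima; return the reduced matrix and the
    sum of the reducing constants, which is a lower bound on the search time."""
    T = [row[:] for row in T]
    n = len(T)
    bound = 0.0
    for i in range(n):                                             # rows
        m = min((x for x in T[i] if x != INF), default=0.0)
        if m > 0:
            bound += m
            T[i] = [x - m if x != INF else INF for x in T[i]]
    for j in range(n):                                             # columns
        m = min((T[i][j] for i in range(n) if T[i][j] != INF), default=0.0)
        if m > 0:
            bound += m
            for i in range(n):
                if T[i][j] != INF:
                    T[i][j] -= m
    return T, bound

def branching_transition(T):
    """Among the zero entries of a reduced matrix, pick the transition (i, j)
    whose prohibition raises the bound most: the smallest remaining element
    of row i plus the smallest remaining element of column j."""
    n = len(T)
    best, best_penalty = None, -1.0
    for i in range(n):
        for j in range(n):
            if T[i][j] == 0:
                row_min = min((T[i][k] for k in range(n) if k != j), default=INF)
                col_min = min((T[k][j] for k in range(n) if k != i), default=INF)
                if row_min + col_min > best_penalty:
                    best, best_penalty = (i, j), row_min + col_min
    return best, best_penalty
```

The returned penalty is the quantity referred to above as the sum of two consecutive reducing constants.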
The second method for ordering the enumeration of speeds is the dynamic programming approach. Without loss of generality, we choose some speed \(v_{0}\) as the initial speed. After that, we divide the set of all speeds into four disjoint subsets:
\(\{v_{0}\}\) - the set consisting only of the initial speed;
\(\{v_{i}\}\) - the set consisting of a single non-initial speed;
\(\{V_{k}\}\) - a set consisting of k speeds other than \(v_{0}\) and \(v_{i}\);
\(\{V_{n-k-2}\}\) - the set consisting of the remaining n-k-2 speeds.
Let us assume that the optimal order of checking the speeds, starting with speed \(v_{0}\), is known. Then we can choose the speed \(v_{i}\) and a subset \(\{V_{k}\}\) of k speeds in such a way that this optimal permutation begins with \(\{v_{0}\}\), passes through the set \(\{V_{n-k-2}\}\), then \(\{v_{i}\}\), after which it checks the set \(\{V_{k}\}\). Now consider only the part of the permutation that lies between \(\{v_{i}\}\) and \(\{v_{0}\}\) with an intermediate check of \(\{V_{k}\}\). It can be noted that the minimum time for this segment is known. If this were not the case, then, without changing the part of the permutation up to speed \(v_{i}\), we could find a better guaranteed time for completing its check and, therefore, a smaller time for the whole permutation. But this is impossible, since it contradicts the initial assumption that the permutation is optimal. Let \(f(v_{i};\{V_{k}\})\) be the time of the best permutation from \(v_{i}\) to \(v_{0}\) that passes through the set \(\{V_{k}\}\). Note that for k=0, \(f(v_{i};\{\emptyset\})=t_{i0}\), where \(t_{i0}\) is an element of the matrix T, and for k=n-1, when \(v_{i}\) coincides with the start of the movement, \(f(v_{0};\{V_{n-1}\})\) is the time of the optimal permutation of the original problem. The idea of dynamic programming is to increase k step by step, starting from k=0: starting from \(v_{0}\), the permutation is traversed in reverse order to find the optimal solution. For the problem under consideration, the main functional equation of dynamic programming is
\[f(v_{i};\{V_{k}\})=\min_{v_{j}\in\{V_{k}\}}\left[t_{ij}+f(v_{j};\{V_{k}\}-\{v_{j}\})\right]\]
This equation shows that, to find the best permutation starting from \(v_{i}\) and ending with \(v_{0}\) with k intermediate speeds, one needs to choose the best among k options: first the transition from \(v_{i}\) to one of the k speeds, and then the fastest way to \(v_{0}\) with intermediate visits to the k-1 remaining ones. Each of these k options, in turn, is the fastest of k-1 permutations according to the same equation. Eventually, a point is reached where the right-hand side of the equation simply contains an element of T.
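This recursion is the standard Held-Karp subset dynamic programme. A minimal sketch (ours; bitmask encoding over the speed indices, with index 0 playing the role of \(v_{0}\) and `T[i][j]` the check duration when speed j follows speed i) is:

```python
from functools import lru_cache

def optimal_guaranteed_time(T):
    """Evaluate f(v_0; {v_1, ..., v_{n-1}}) for the recursion
    f(v_i; S) = min_{v_j in S} [ T[i][j] + f(v_j; S - {v_j}) ],  f(v_i; {}) = T[i][0]."""
    n = len(T)
    all_but_start = (1 << n) - 2          # bitmask of {1, ..., n-1}

    @lru_cache(maxsize=None)
    def f(i, mask):
        if mask == 0:
            return T[i][0]
        return min(T[i][j] + f(j, mask & ~(1 << j))
                   for j in range(1, n) if mask & (1 << j))

    return f(0, all_but_start)
```

Recording the minimizing index j at each step recovers the optimal checking order itself; applied to the 6 x 6 matrix T of the example below, the routine returns the corresponding guaranteed search time. The number of stored states grows roughly as \(n2^{n}\), consistent with the counting bound quoted below.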
The solution of the problem for five velocities will be considered as an example, with the fifth velocity taken as the starting point. Then \(f(v_{5};\{v_{1},\,v_{2},\,v_{3},\,v_{4}\})\) is the shortest time of the best permutation, and any sequence of checking the velocities that attains this time is optimal. At step 0, the solution is computed for the four options with k=0:
\[f(v_{1};\{\emptyset\})=t_{15}\]
\[f(v_{2};\{\emptyset\})=t_{25}\]
\[f(v_{3};\{\emptyset\})=t_{35}\]
\[f(v_{4};\{\emptyset\})=t_{45}\]
At the first step, the solutions for k=1 are expressed in terms of the known solutions for k=0:
\[f(v_{1};\{v_{2}\})=t_{12}+f(v_{2};\{\emptyset\})\]
\[f(v_{1};\{v_{3}\})=t_{13}+f(v_{3};\{\emptyset\})\]
\[f(v_{1};\{v_{4}\})=t_{14}+f(v_{4};\{\emptyset\})\]
\[f(v_{2};\{v_{1}\})=t_{21}+f(v_{1};\{\emptyset\})\]
\[f(v_{2};\{v_{3}\})=t_{23}+f(v_{3};\{\emptyset\})\]
\[\dots\]
\[f(v_{4};\{v_{3}\})=t_{43}+f(v_{3};\{\emptyset\})\]
At the second step, the solutions for k=2 are expressed in terms of the known solutions for k=1:
\[f(v_{1};\{v_{2},v_{3}\})=\min[t_{12}+f(v_{2};\{v_{3}\}),\,t_{13}+f(v_{3};\{v_{2}\})]\]
\[f(v_{1};\{v_{2},v_{4}\})=\min[t_{12}+f(v_{2};\{v_{4}\}),\,t_{14}+f(v_{4};\{v_{2}\})]\]
\[f(v_{1};\{v_{3},v_{4}\})=\min[t_{13}+f(v_{3};\{v_{4}\}),\,t_{14}+f(v_{4};\{v_{3}\})]\]
\[\dots\]
\[f(v_{2};\{v_{1},v_{4}\})=\min[t_{21}+f(v_{1};\{v_{4}\}),\,t_{24}+f(v_{4};\{v_{1}\})]\]
\[\dots\]
\[f(v_{4};\{v_{2},v_{3}\})=\min[t_{42}+f(v_{2};\{v_{3}\}),\,t_{43}+f(v_{3};\{v_{2}\})]\]
We proceed to the third step, using each of the solutions of the second step.
\[f(v_{1};\{v_{2},v_{3},v_{4}\})=\min[t_{12}+f(v_{2};\{v_{3},v_{4}\}),\,t_{13}+f(v_{3};\{v_{2},v_{4}\}),\,t_{14}+f(v_{4};\{v_{2},v_{3}\})]\]
\[f(v_{2};\{v_{1},v_{3},v_{4}\})=\min[t_{21}+f(v_{1};\{v_{3},v_{4}\}),\,t_{23}+f(v_{3};\{v_{1},v_{4}\}),\,t_{24}+f(v_{4};\{v_{1},v_{3}\})]\]
\[f(v_{3};\{v_{1},v_{2},v_{4}\})=\min[t_{31}+f(v_{1};\{v_{2},v_{4}\}),\,t_{32}+f(v_{2};\{v_{1},v_{4}\}),\,t_{34}+f(v_{4};\{v_{1},v_{2}\})]\]
\[f(v_{4};\{v_{1},v_{2},v_{3}\})=\min[t_{41}+f(v_{1};\{v_{2},v_{3}\}),\,t_{42}+f(v_{2};\{v_{1},v_{3}\}),\,t_{43}+f(v_{3};\{v_{1},v_{2}\})]\]
At the fourth step, the solution of the original problem is obtained:
\[f(v_{5};\{v_{1},v_{2},v_{3},v_{4}\})=\min[t_{51}+f(v_{1};\{v_{2},v_{3},v_{4}\}),\,t_{52}+f(v_{2};\{v_{1},v_{3},v_{4}\}),\,t_{53}+f(v_{3};\{v_{1},v_{2},v_{4}\}),\,t_{54}+f(v_{4};\{v_{1},v_{2},v_{3}\})]\]
For each k there exist \(\frac{(n-1)!}{k!\,(n-k-2)!}\) values \(f(v_{i};\{V_{k}\})\), and for \(k\geq 1\) each of them requires choosing among k options, i.e., k comparisons. The total number of computations at all stages is therefore
\[2\sum_{k=1}^{n-1}\frac{k(n-1)!}{k!\left(n-k-2\right)!}+(n-1)<n^{2}2^{n}\]
As an example, let us consider solving the problem for six speeds \(V^{E}=\{10,20,30,40,50,60\}\). The initial matrix T is obtained by applying the algorithm for computing the guaranteed search time, implemented using the Maple software package:

\begin{tabular}{r r r r r r}
0.13 & 0.42 & 1.13 & 3.05 & 9.13 & 34.1 \\
0.25 & 0.24 & 1.74 & 4.73 & 14.23 & 53.27 \\
0.54 & 1.29 & 0.44 & 7.6 & 22.94 & 86.04 \\
1.23 & 2.84 & 6 & 0.89 & 39.15 & 147.2 \\
3.14 & 7.04 & 14.7 & 31.4 & 2 & 277.2 \\
9.62 & 21.16 & 43.8 & 93.13 & 217.76 & 5.6 \\
\end{tabular}

## 4 Theoretical game model of search and interception

To reduce the guaranteed interception time, it is advisable for the pursuer to order the enumeration of the escape speeds. However, if the escapee becomes aware of this order, they can move at the speed that the pursuer intends to check last, which allows the escapee to maximize the search time. Thus, the search problem can be considered as a game problem under conditions of opposition. The system G = (X, Y, K), where X and Y are non-empty sets and K: X \(\times\) Y \(\rightarrow\) R is a function, is called an antagonistic game in normal form. Elements x \(\in\) X and y \(\in\) Y are called the strategies of player 1 and player 2, respectively, in game G. The elements of the Cartesian product (i.e., the strategy pairs (x, y), where x \(\in\) X and y \(\in\) Y) are called situations, and K is the payoff function of player 1.
Player 2's payoff in an antagonistic game in the situation (x, y) is taken to be \(-K(x,y)\), so the function K is also called the payoff function of the game, and the game G is a zero-sum game. Let us define the game for the search problem under consideration. Let the escapee choose any speed from the set \(V^{E}=\{v_{1},...,v_{n}\}\) and any direction from the set \(\alpha=\{\alpha_{1},...,\alpha_{n}\}\). Then the set of pure strategies of the escapee (player 1) is the set of combinations of the possible speeds \(v_{i}\) of their movement and the movement directions \(\alpha_{i}\), and the set of pure strategies of the pursuer is the set of all possible permutations of the escapee's speeds. The payoff is the time it takes to catch the escapee, which is found using the algorithm described above. The game G is interpreted as follows: the players independently and simultaneously choose strategies x \(\in\) X and y \(\in\) Y. After that, player 1 receives a payoff equal to K(x, y), and player 2 receives a payoff equal to \(-K(x,y)\). Antagonistic games in which both players have finite sets of strategies are called matrix games. Let player 1 in a matrix game have a total of m strategies. We establish a one-to-one correspondence between the set X of strategies and the set M = {1, 2,..., m}. Similarly, if player 2 has n strategies, we can establish a one-to-one correspondence between the sets N = {1, 2,..., n} and Y. Then the game G is fully determined by the matrix A = \(\{\alpha_{ij}\}\), where \(\alpha_{ij}=K(x_{i},y_{j})\), \((i,j)\in M\times N\), \((x_{i},y_{j})\in X\times Y\), \(i\in M\), \(j\in N\). In this case, the game G is played as follows: player 1 chooses a row i \(\in\) M, and player 2 (simultaneously with player 1) chooses a column j \(\in\) N. After that, player 1 receives the payoff \(\alpha_{ij}\), and player 2 receives \(-\alpha_{ij}\). Each player aims to maximize their own payoff by the choice of strategy. However, for player 1 the payoff is determined by the function K(x,y), while for the second player it is \(-K(x,y)\), i.e., the players' goals are directly opposite. It should be noted that the payoff of player 1 (respectively, player 2) is determined by the situations (x,y) \(\in\) X\(\times\)Y that arise during the game. However, each situation, and therefore the payoff of a player, depends not only on their own choice but also on the strategy chosen by the opponent. Therefore, in seeking to obtain the maximum possible payoff, each player must take into account the behavior of the opponent. In game theory it is assumed that both players act rationally, i.e., strive to achieve the maximum payoff while assuming that the opponent acts in the way that is best for themselves. Let player 1 choose a strategy x. Then in the worst case they will win \(\min_{y}K(x,y)\). Therefore, player 1 can always guarantee themselves the payoff \(\max_{x}\min_{y}K(x,y)\). If we abandon the assumption that the extremum is attained, then player 1 can always obtain a payoff arbitrarily close to the quantity
\[\underline{v}=\sup_{x\in X}\inf_{y\in Y}K(x,y),\]
which is called the lower value of the game. If the outer extremum is attained, the value \(\underline{v}\) is also called the maximin, the principle of constructing the strategy x based on maximizing the minimum payoff is called the maximin principle, and the strategy x chosen in accordance with this principle is the maximin strategy of player 1. For player 2, similar reasoning applies. Suppose they choose strategy y.
Then in the worst case they will lose \(\max_{x}K(x,y)\). Therefore, the second player can always guarantee a loss of no more than \(\min_{y}\max_{x}K(x,y)\). The quantity
\[\overline{v}=\inf_{y\in Y}\sup_{x\in X}K(x,y)\]
is called the upper value of the game G, and in the case when the outer extremum is attained it is called the minimax. The principle of constructing the strategy y based on minimizing the maximum losses is called the minimax principle, and the strategy y chosen in accordance with this principle is the minimax strategy of player 2. It should be emphasized that the existence of a minimax (maximin) strategy is determined by the attainability of the outer extremum. In the matrix game G the extrema are attained, and the lower and upper values of the game are, respectively,
\[\underline{v}=\max_{1\leq i\leq m}\min_{1\leq j\leq n}\alpha_{ij},\qquad\overline{v}=\min_{1\leq j\leq n}\max_{1\leq i\leq m}\alpha_{ij}.\]
The maximin and minimax of the game G can thus be found directly from the payoff matrix
\[\left[\begin{array}{ccc}\alpha_{11}&...&\alpha_{1n}\\ \vdots&\ddots&\vdots\\ \alpha_{m1}&...&\alpha_{mn}\end{array}\right]\]
by taking the minimum of each row and then the maximum of these row minima, \(\max_{i}\min_{j}\alpha_{ij}=\underline{v}\), and by taking the maximum of each column and then the minimum of these column maxima, \(\min_{j}\max_{i}\alpha_{ij}=\overline{v}\). Let us consider the question of the optimal behavior of the players in an antagonistic game. It is natural to consider a situation \((x^{*},y^{*})\in X\times Y\) in the game G=(X,Y,K) optimal if neither player has an incentive to deviate from it. Such a situation \((x^{*},y^{*})\) is called an equilibrium, and the optimality principle based on constructing an equilibrium situation is called the principle of equilibrium. For antagonistic games, the principle of equilibrium is equivalent to the principles of minimax and maximin. In an antagonistic game G=(X,Y,K), a situation \((x^{*},y^{*})\) is called an equilibrium, or a saddle point, if
\[K(x,y^{*})\leq K(x^{*},y^{*})\]
\[K(x^{*},y)\geq K(x^{*},y^{*})\]
for all x \(\in\) X, y \(\in\) Y. For the matrix game G we speak of the saddle points of the payoff matrix A, i.e., points \((i^{*},j^{*})\) such that for all i \(\in\) M, j \(\in\) N the inequalities
\[\alpha_{ij^{*}}\leq\alpha_{i^{*}j^{*}}\leq\alpha_{i^{*}j}\]
hold. Theorem. Let \((x_{1}^{*},y_{1}^{*})\) and \((x_{2}^{*},y_{2}^{*})\) be two arbitrary equilibrium situations in the antagonistic game G. Then
\[K(x_{1}^{*},y_{1}^{*})=K(x_{2}^{*},y_{2}^{*});\qquad K(x_{1}^{*},y_{2}^{*})=K(x_{2}^{*},y_{1}^{*})\]
and \((x_{1}^{*},y_{2}^{*})\in Z(G)\), \((x_{2}^{*},y_{1}^{*})\in Z(G)\), where \(Z(G)\) is the set of all equilibrium situations. Let \((x^{*},y^{*})\) be an equilibrium situation in game G. The number \(v=K(x^{*},y^{*})\) is called the value of game G. Now we establish the connection between the principle of equilibrium and the minimax principles in an antagonistic game. Theorem.
In order for an equilibrium situation to exist in the game \(G=(X,Y,K)\), it is necessary and sufficient that the minimax and the maximin
\[\min_{y}\sup_{x}K(x,y),\qquad\max_{x}\inf_{y}K(x,y)\]
exist and that the equality
\[\underline{v}=\max_{x}\inf_{y}K(x,y)=\min_{y}\sup_{x}K(x,y)=\overline{v}\]
holds. If an equilibrium situation exists in a matrix game, then the minimax is equal to the maximin, and, by the definition of an equilibrium situation, each player can communicate their optimal (maximin) strategy to the opponent, and neither player can gain any additional advantage from this. Now suppose that there is no equilibrium situation in game G. Then
\[\min_{j}\max_{i}\alpha_{ij}-\max_{i}\min_{j}\alpha_{ij}>0.\]
In this case, the maximin and minimax strategies are not optimal. Moreover, it may not be beneficial for the players to adhere to them, as they may obtain a greater payoff by deviating. However, informing the opponent about the chosen strategy can lead to even greater losses than in the case of the maximin or minimax strategy. In this situation it is reasonable for the players to act randomly, which provides the greatest secrecy in the choice of strategy: the result of the choice cannot become known to the opponent, since the player does not know it themselves until the random mechanism is realized. A random variable whose values are the player's strategies is called their mixed strategy. Since a random variable is characterized by its distribution, we will identify a mixed strategy x of player 1 with an m-dimensional vector
\[x=(\xi_{1},...,\xi_{m})\in R^{m},\quad\sum_{i=1}^{m}\xi_{i}=1,\quad\xi_{i}\geq 0,\quad i=1,...,m.\]
Similarly, a mixed strategy y of player 2 is an n-dimensional vector
\[y=(\eta_{1},...,\eta_{n})\in R^{n},\quad\sum_{j=1}^{n}\eta_{j}=1,\quad\eta_{j}\geq 0,\quad j=1,...,n.\]
Here \(\xi_{i}\geq 0\) and \(\eta_{j}\geq 0\) are the probabilities of choosing the pure strategies i \(\in\) M and j \(\in\) N, respectively, when the players use the mixed strategies x and y. Let X and Y now denote the sets of mixed strategies of the first and second players, respectively, and let x=(\(\xi_{1}\),..., \(\xi_{m}\)) \(\in\) X be a mixed strategy. The set of mixed strategies of a player is an extension of their pure strategy space. A pair (x, y) of mixed strategies of the players in the matrix game G is called a situation in mixed strategies. Let us define the payoff of player 1 in the situation (x, y) in mixed strategies for the matrix game G as the mathematical expectation of their payoff given that the players use the mixed strategies x and y, respectively.
The players choose their strategies independently of each other; therefore the expected payoff K(x, y) in the situation (x, y) in mixed strategies x=(\(\xi_{1}\),..., \(\xi_{m}\)) and y=(\(\eta_{1}\),..., \(\eta_{n}\)) equals
\[K(x,y)=\sum_{i=1}^{m}\sum_{j=1}^{n}\alpha_{ij}\xi_{i}\eta_{j}\]
The situation \((x^{*},y^{*})\) is called an equilibrium situation if
\[K(x,y^{*})\leq K(x^{*},y^{*})\]
\[K(x^{*},y)\geq K(x^{*},y^{*})\]
for all x \(\in\) X, y \(\in\) Y. Theorem. Every matrix game has an equilibrium situation in mixed strategies. A common way to solve a matrix game is to reduce it to a linear programming problem. However, difficulties arise when solving matrix games of large dimensions; therefore the iterative Brown-Robinson method is often used to find a solution. The idea of the method is to repeatedly play a fictitious game with the given payoff matrix. One repetition of the game is called a round. Let A = \(\{\alpha_{ij}\}\) be an (m x n)-matrix game. In the first round, both players choose their pure strategies completely arbitrarily. In the k-th round, each player chooses the pure strategy that maximizes their expected payoff against the observed empirical distribution of the opponent's moves in the previous (k-1) rounds. So, suppose that in the first k rounds player 1 used the i-th strategy \(\xi_{i}^{k}\) times, and player 2 used the j-th strategy \(\eta_{j}^{k}\) times. Then in the (k+1)-th round player 1 will use the strategy \(i_{k+1}\) and player 2 the strategy \(j_{k+1}\), where
\[\overline{v}^{k}=\max_{i}\sum_{j}\alpha_{ij}\eta_{j}^{k}=\sum_{j}\alpha_{i_{k+1}j}\eta_{j}^{k}\]
\[\underline{v}^{k}=\min_{j}\sum_{i}\alpha_{ij}\xi_{i}^{k}=\sum_{i}\alpha_{ij_{k+1}}\xi_{i}^{k}\]
Let v be the value of the matrix game G. Consider the relations
\[\overline{v}^{k}/k=\max_{i}\sum_{j}\alpha_{ij}\eta_{j}^{k}/k=\sum_{j}\alpha_{i_{k+1}j}\eta_{j}^{k}/k\]
\[\underline{v}^{k}/k=\min_{j}\sum_{i}\alpha_{ij}\xi_{i}^{k}/k=\sum_{i}\alpha_{ij_{k+1}}\xi_{i}^{k}/k\]
The vectors \(x^{k}=(\frac{\xi_{1}^{k}}{k},...,\frac{\xi_{m}^{k}}{k})\) and \(y^{k}=(\frac{\eta_{1}^{k}}{k},...,\frac{\eta_{n}^{k}}{k})\) are the empirical mixed strategies of the players after k rounds; as k grows they approximate optimal mixed strategies, and \(\overline{v}^{k}/k\) and \(\underline{v}^{k}/k\) approach the value v of the game.

We now restate the problem as search and pursuit between quadrotor UAVs, modifying the setting slightly. Let the distance between the fugitive UAV and the ground be 100 meters; the fugitive UAV selects a speed from the set \(V^{E}\)={8,56,78} as its X-axis speed and a value from the set \(\alpha\)={23,37,82} as its Y-axis direction. The maximum speed of the chaser is \(V^{P}\)=100 m/min. Then the fugitive's strategy set is:
\[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\]
**Example 2.** Let the initial distance between the pursuer and the evader be 50 kilometers. The evader chooses a speed from the set \(V^{E}\)= {4,10,16} and a direction from the set \(\alpha\)={8,10,16}. The maximum speed of the pursuer is \(V^{P}\)=80 km/h.
Then the set of strategies for the evader is \((\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\), and the set of strategies for the pursuer is
\[(v_{1},v_{2},v_{3}),(v_{1},v_{3},v_{2}),(v_{2},v_{1},v_{3}),(v_{2},v_{3},v_{1}),(v_{3},v_{1},v_{2}),(v_{3},v_{2},v_{1})\]
The resulting game matrix is:

\begin{tabular}{r r r r r r}
0,6 & 0,6 & 1,32 & 5,56 & 2,161 & 4,77 \\
0,9 & 3,79 & 0,57 & 0,57 & 3,249 & 2,039 \\
2,2 & 0,9 & 2,19 & 1,38 & 0,536 & 0,536 \\
0,6 & 0,6 & 1,33 & 5,57 & 2,165 & 4,778 \\
0,9 & 3,8 & 0,57 & 0,568 & 3,263 & 2,048 \\
2,21 & 1 & 2,21 & 1,39 & 0,54 & 0,54 \\
\end{tabular}

Figure 6: All optional paths and pursuit results of the quadrotor UAV

We now restate the example as search and pursuit between quadrotor UAVs, modifying the setting slightly. Let the distance between the fugitive UAV and the ground be 100 meters; the fugitive UAV selects a speed from the set \(V^{E}\)={4,10,16} as its X-axis speed and a value from the set \(\alpha\)={8,10,16} as its Y-axis direction. The maximum speed of the chaser is \(V^{P}\)=100 m/min. Then the fugitive's strategy set is:
\[(\alpha_{1},v_{1}),(\alpha_{1},v_{2}),(\alpha_{1},v_{3}),(\alpha_{2},v_{1}),(\alpha_{2},v_{2}),(\alpha_{2},v_{3}),(\alpha_{3},v_{1}),(\alpha_{3},v_{2})\]
The game was solved using the Brown-Robinson method; the value of the game is 1.57. The strategy of the evader is (1/20, 0, 0, 0, 0, 0, 1/10, 1/4, 3/5), and the strategy of the pursuer is (9/20, 1/20, 3/20, 1/20, 1/4, 1/20). The solutions of the examples show that the most probable speed for the evader is the maximum of the possible speeds. Therefore, the pursuer should start checking the speeds from the maximum possible speed.

## 5 Task of one pursuer and a group of fugitives

Given a set of n fugitive submarines; for each fugitive J the time \(|J|\) required for its capture by the pursuer is known. For each fugitive a directive time \(\overline{D}(J)\) is also known (possibly the time when it will reach a certain place where pursuit cannot continue), together with a weight coefficient w(J), which enters the objective function to be optimized.

Figure 7: All optional paths and pursuit results of the quadrotor UAV

The completion time of the search for the i-th fugitive is denoted by \(t^{i}\); thus \(t^{i}=t_{i}+T_{i}\), where \(t_{i}\) is the moment at which its pursuit begins and \(T_{i}\) is its duration. Examples of criteria functions:

1. Minimize the total penalty for delays:
\[f_{1}=\sum_{k=1}^{n-1}w(J_{k})(t^{k}-\overline{D}_{k})^{+}\]
2. Minimize the maximum penalty for delays:
\[f_{2}=\max_{k}w(J_{k})(t^{k}-\overline{D}_{k})^{+}\]
3.
Minimize the total amount of fines:
\[f_{3}=\sum_{k=1}^{n-1}w(J_{k})(t^{k}-\overline{D}_{k})\]
Here \(x^{+}\) denotes the positive part of x, defined by \(x^{+}=\frac{1}{2}(x+|x|)\), so the delay in catching the i-th evader equals \((t^{i}-\overline{D}_{i})^{+}\).
4. Minimize the amount of tied-up funds:
\[f_{4}=\sum_{k=1}^{n-1}w(J_{k})t^{k}\]

Solution by the criterion \(f_{4}\). Let us consider the solution for the function \(f_{4}\) only in the case where \(\overline{D}(J)=0\) for every J. Take the optimal sequence and swap two adjacent elements \(J_{k}\) and \(J_{k+1}\) in it. The capture time of the later of the two may then increase (otherwise the considered solution would not be optimal), and the difference of the criterion function between searching for the first k+1 fugitives in the modified order and in the optimal order is nonnegative:
\[\big{(}w(J_{k+1})|J_{k+1}|+w(J_{k})(|J_{k}|+|J_{k+1}|)\big{)}-\big{(}w(J_{k})|J_{k}|+w(J_{k+1})(|J_{k}|+|J_{k+1}|)\big{)}\geq 0\]
Hence, after reductions, we get
\[w(J_{k})|J_{k+1}|\geq w(J_{k+1})|J_{k}|\]
Therefore, for the optimal schedule and for any k we obtain the inequality of ratios
\[\frac{|J_{k}|}{w(J_{k})}\leq\frac{|J_{k+1}|}{w(J_{k+1})}\]
Note that if the ratios are equal, swapping k+1 and k does not change the value of the criterion. Therefore, any sequence in which the fugitives are arranged in nondecreasing order of the ratios \(|J_{k}|/w(J_{k})\) is optimal.

## 6 Consider the following situation

Suppose an intercepting ship, having n boats with depth bombs on board, at time t detects on the surface of the sea the periscopes of n submarines at various distances from it; at that same moment the submarines dive underwater and begin to move in different directions at fixed speeds. It is required to send the boats to intercept the submarines in an optimal way, that is, so that the sum of the guaranteed interception times of the submarines is minimal. To solve the problem we create an efficiency matrix \(A=(a_{ij})\), where each element is the guaranteed time of interception of submarine j by boat i, consisting of the time for the boat to reach the periscope detection point and the total time of its passage along the logarithmic interception spiral. Let \(x_{ij}\) be variables that can take only the two values 0 or 1, as follows:
\[x_{ij}=\begin{cases}1,&\text{if boat }i\text{ is assigned to search for submarine }j,\\ 0,&\text{otherwise.}\end{cases}\]
It is necessary to find an assignment plan, a matrix X= \(\{x_{ij}\}\), i=1...m, j=1...n, which minimizes the search time while ensuring that each boat is assigned to search for no more than one submarine and each submarine is searched for by no more than one boat. The mathematical formulation of the optimal assignment problem is:
\[\min z=\sum_{i=1}^{m}\sum_{j=1}^{n}a_{ij}x_{ij}\]
\[\sum_{i=1}^{m}x_{ij}\leq 1,\quad j=1..n\]
\[\sum_{j=1}^{n}x_{ij}\leq 1,\quad i=1..m\]
\[x_{ij}\geq 0\]
In order for the optimal assignment problem to have an optimal solution, it is necessary and sufficient that the number of boats equal the number of submarines, i.e., n=m. Under this condition, the inequality constraints become equality constraints.
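Once balanced in this way, the problem is a standard linear assignment problem, restated formally just below. As a quick numerical cross-check of the results reported in Example 3 further below, a minimal sketch is given here; the choice of SciPy's `linear_sum_assignment` solver and the variable names are ours (the paper's own computations use Maple and the Hungarian method described below), while the cost matrix is the one of Example 3.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Guaranteed interception times a_ij (boat i -> submarine j), taken from Example 3 below.
A = np.array([
    [1.18,   0.98,  0.52, 0.73],
    [14.43,  7.06,  1.77, 3.30],
    [373.78, 12.12, 0.77, 2.13],
    [14.43,  3.00,  0.96, 1.53],
])

rows, cols = linear_sum_assignment(A)          # solves the linear sum assignment problem
print(list(zip(rows, cols)), A[rows, cols].sum())  # optimal total 8.08, as in Example 3
```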
\[\min z=\sum_{i=1}^{n}\sum_{j=1}^{n}a_{ij}x_{ij}\]
\[\sum_{i=1}^{n}x_{ij}=1,\quad j=1..n\]
\[\sum_{j=1}^{n}x_{ij}=1,\quad i=1..n\]
\[x_{ij}\geq 0\]
If n\(\neq\)m, the assignment problem is unbalanced. Any assignment problem can be balanced by introducing the necessary number of dummy boats or submarines. The dual of the optimal assignment problem is
\[\max\omega=\sum_{i=1}^{n}u_{i}+\sum_{j=1}^{n}v_{j}\]
\[u_{i}+v_{j}\leq a_{ij},\quad i=1..n,\ j=1..n,\]
where \(u_{i}\) and \(v_{j}\) are the dual variables associated with the row and column constraints. The Hungarian method can be used to solve the assignment problem. The essence of the method is as follows:

1. In the original matrix A, determine the minimum element of each row and subtract it from all elements of that row.
2. In the matrix obtained at step 1, determine the minimum element of each column and subtract it from all elements of that column. If a feasible assignment on zero elements is not obtained after steps 1 and 2, perform:
2a. In the last matrix, draw the minimum number of horizontal and vertical lines through rows and columns needed to cross out all zero elements.
2b. Find the minimum element that is not crossed out, subtract it from all elements that are not crossed out, and add it to all elements at the intersections of the lines drawn at the previous step. If the new arrangement of zero elements still does not allow a feasible assignment to be constructed, repeat step 2a; otherwise proceed to step 3.
3. The optimal assignments correspond to the zero elements obtained at step 2.

Let us consider some numerical examples of the problem of distributing boats to catch several submarines.

**Example 3**. Let an interceptor ship detect 4 submarines. The initial distances to them are 100 km, 200 km, 50 km, and 163 km, respectively. The pursuer has 4 boats for catching the submarines. The maximum speed of each boat is 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first submarine moves along a straight line in the direction \(\alpha_{1}\)=23 with the speed \(\nu_{1}\)=23 km/h, the second one with \(\alpha_{2}\)=137, \(\nu_{2}\)=50 km/h, the third one with \(\alpha_{3}\)=187, \(\nu_{3}\)=67 km/h, and the fourth one with \(\alpha_{4}\)=50, \(\nu_{4}\)=70 km/h. Then the matrix for the assignment problem is:

\begin{tabular}{r r r r}
1,18 & 0,98 & 0,52 & 0,73 \\
14,43 & 7,06 & 1,77 & 3,3 \\
373,78 & 12,12 & 0,77 & 2,13 \\
14,43 & 3 & 0,96 & 1,53 \\
\end{tabular}

We solve the problem using the Hungarian method. The value of the objective function is 8.08, and the final table is:

\begin{tabular}{r r r r}
0 & 0 & 2,37 & 1,22 \\
9,63 & 2,46 & 0 & 0,17 \\
369,98 & 8,52 & 0 & 0 \\
11,22 & 0 & 0,79 & 0 \\
\end{tabular}

We now restate the example in terms of quadrotor UAVs, modifying the setting slightly. Suppose an intercepting side detects 4 intruding quadcopters and has 4 interceptor UAVs to pursue them. The maximum speeds of the four interceptors are 74 km/h, 90 km/h, 178 km/h, and 124 km/h, respectively. The first intruding quadrotor UAV has a maximum X-axis speed \(v_{1}\)=23 m/min, a maximum Y-axis speed \(\alpha_{1}\)=23 m/min, and a height of 100 meters. The second intruding quadrotor UAV has a maximum X-axis speed \(v_{2}\)=50 m/min, a maximum Y-axis speed \(\alpha_{2}\)=137 m/min, and a height of 200 meters.
The third intruding quadrotor UAV has a maximum X-axis speed \(v_{3}\)=67 m/min, a maximum Y-axis speed \(\alpha_{3}\)=187 m/min, and a height of 50 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed \(v_{4}\)=70 m/min, a maximum Y-axis speed \(\alpha_{4}\)=50 m/min, and a height of 163 meters. The resulting matching matrix is
\[\begin{array}{cccc}0&0&1&0\\ 1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\end{array}\]
and the value of the objective function is **3.0888**.

**Example 4**. Let an interceptor ship detect 4 submarines. The initial distances to them are 30 km, 11 km, 62 km, and 8 km, respectively. The pursuer has 4 boats for catching the submarines. The maximum speed of each boat is 60 km/h, 65 km/h, 95 km/h, and 105 km/h, respectively. The first submarine moves along a straight line in the direction \(\alpha_{1}\)=7 with the speed \(v_{1}\)=7 km/h, the second one with \(\alpha_{2}\)=11, \(v_{2}\)=11 km/h, the third one with \(\alpha_{3}\)=30, \(v_{3}\)=30 km/h, and the fourth one with \(\alpha_{4}\)=44, \(v_{4}\)=44 km/h. Then the matrix for the assignment problem is:

\begin{tabular}{r r r r}
0,46 & 0,42 & 0,297 & 0,27 \\
0,16 & 0,15 & 0,11 & 0,097 \\
0,93 & 0,86 & 0,59 & 0,54 \\
0,18 & 0,15 & 0,09 & 0,08 \\
\end{tabular}

We solve the problem using the Hungarian method. The value of the objective function is 1.147, and the final table is:

\begin{tabular}{r r r r}
0,093 & 0,063 & 0 & 0 \\
0 & 0 & 0,02 & 0,034 \\
0,29 & 0,23 & 0,023 & 0 \\
0,02 & 0 & 0 & 0,017 \\
\end{tabular}

We now restate the example in terms of quadrotor UAVs, modifying the setting slightly. Suppose an intercepting side detects 4 intruding quadcopters and has 4 interceptor UAVs to pursue them. The maximum speeds of the four interceptors are 60 m/min, 65 m/min, 95 m/min, and 105 m/min, respectively. The first intruding quadrotor UAV has a maximum X-axis speed \(v_{1}\)=7 m/min, a maximum Y-axis speed \(\alpha_{1}\)=7 m/min, and a height of 30 meters. The second intruding quadrotor UAV has a maximum X-axis speed \(v_{2}\)=11 m/min, a maximum Y-axis speed \(\alpha_{2}\)=11 m/min, and a height of 11 meters. The third intruding quadrotor UAV has a maximum X-axis speed \(v_{3}\)=30 m/min, a maximum Y-axis speed \(\alpha_{3}\)=30 m/min, and a height of 62 meters. The fourth intruding quadrotor UAV has a maximum X-axis speed \(v_{4}\)=44 m/min, a maximum Y-axis speed \(\alpha_{4}\)=44 m/min, and a height of 44 meters. The resulting matching matrix is
\[\begin{array}{cccc}0&1&0&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&0&1&0\end{array}\]
and the value of the objective function is 0.8390.

## 7 Conclusion

With reasonable parameters of the chasing UAVs, and taking the success rate and interception efficiency into account, the assignment computed with the Hungarian algorithm from the motion parameters of all UAVs allows every escaping quadrotor UAV to be pursued and successfully intercepted, given performance parameters such as the speed of each interceptor and the matching between the quadrotor UAVs of the interceptor camp and those of the escaping camp.
Unmanned aerial vehicles (UAVs) are becoming increasingly widespread in a variety of domains, from military operations to civilian applications. However, the proliferation of UAVs has raised concerns about their misuse and the security threats they pose. As a result, the search for and pursuit of UAVs has become an important task for law-enforcement agencies and security organizations. In this paper, we analyze the problem of searching for and pursuing submarines using a game-theoretic approach and transform that problem into one of UAV search and pursuit. Game theory provides a mathematical framework for modeling and analyzing strategic interactions among multiple decision makers. By applying game theory to the analysis of the search and pursuit problem, we aim to increase the effectiveness of UAV detection and capture strategies. First, we represent the problem as a game in which the UAV is the evader and the search and pursuit team is the pursuer. The objective of each player is
2306.17397
Variational approach to the two-dimensional Bose polaron
An impurity particle interacting with a Bose-Einstein condensate (BEC) leads to the formation of a quasiparticle known as the Bose polaron. We investigate the properties of the two-dimensional Bose polaron, applying a variational ansatz that contains up to three Bogoliubov excitations of the BEC. Similar to its three-dimensional counterpart, we observe the existence of two quasiparticle branches, namely the attractive and the repulsive polarons, at different coupling strengths. We find that their energies agree well with recent quantum Monte Carlo calculations. In particular, we observe that the inclusion of three excitations is crucial to capture the attractive polaron energy towards the regime of strong attraction, where the quasiparticle properties are dominated by few-body correlations. We also calculate the attractive polaron effective mass and residue, where we find significant differences between considering a weakly interacting Bose medium and taking the non-interacting limit, signalling enhanced impurity dressing by excitations in the latter case. By contrast, the spectral weight of the metastable repulsive polaron is largely insensitive to the interactions in the BEC and the number of Bogoliubov excitations. Our model may be experimentally realized in dilute atomic vapors and atomically thin semiconductors.
Yasufumi Nakano, Meera M. Parish, Jesper Levinsen
2023-06-30T04:34:13
http://arxiv.org/abs/2306.17397v2
# Variational approach to the two-dimensional Bose polaron ###### Abstract An impurity particle interacting with a Bose-Einstein condensate (BEC) leads to the formation of a quasiparticle known as the Bose polaron. We investigate the properties of the two-dimensional Bose polaron, applying a variational ansatz that contains up to two Bogoliubov excitations of the BEC. Similar to its three-dimensional counterpart, we observe the existence of two quasiparticle branches, namely the attractive and the repulsive polarons, at different coupling strengths. We find that their energies agree well with recent quantum Monte Carlo calculations. In particular, we observe that the inclusion of two excitations is crucial to capture the attractive polaron energy towards the regime of strong attraction, where the quasiparticle properties are dominated by few-body correlations. We also calculate the attractive polaron effective mass and residue, where we find significant differences between considering a weakly interacting Bose medium and taking the non-interacting limit, signalling enhanced impurity dressing by excitations in the latter case. By contrast, the spectral weight of the metastable repulsive polaron is largely insensitive to the interactions in the BEC and the number of Bogoliubov excitations. Our model may be experimentally realized in dilute atomic vapors and atomically thin semiconductors. ## I Introduction The interaction of an impurity particle with a quantum mechanical medium, the so-called quantum impurity problem, plays a key role in understanding quantum many-body physics. One example is the polaron, originally proposed by Landau and Peker [1], that is formed by a mobile electron dressed by a cloud of virtual phonons in an ionic crystal. The concept of the polaron has since been widely extended: In the case of ultracold atomic gases, the presence of an impurity atom in a Bose-Einstein condensate (BEC) gives rise to the formation of the _Bose polaron_, where the impurity becomes dressed by Bogoliubov excitations of the BEC [2]. Here, one can take advantage of a magnetically tunable Feshbach resonance that enables the precise control of the coupling strength between the impurity and the medium, thus allowing one to investigate the polaron quasiparticle properties, as recently demonstrated in experiment [3; 4; 5]. This poses an intriguing test for theoretical modelling, and indeed the Bose polaron has been studied using a variety of theoretical methods, ranging from field-theoretical diagrammatic approaches [6; 7] and variational methods [8; 9; 10; 11; 12; 13; 14], to quantum Monte Carlo (QMC) methods [15; 16], renormalization group theory [17], and a high-temperature virial expansion [18]. A particularly interesting aspect of the Bose polaron is its relationship to few-body bound clusters involving the impurity and several bosons from the medium. Unlike the widely studied Fermi polaron [19; 20; 21]--an impurity immersed in a fermionic medium--the Bose polaron does not feature any transitions in its ground state, and thus the polaron can continuously change its character from a weakly dressed impurity to a state that has strong few-body correlations. In a three-dimensional (3D) system, where three particles can bind to form Efimov trimers [22], it was shown in Ref. 
[11] that the behavior of the Bose polaron at strong interactions is characterized by the size of the ground-state trimer and that the polaron energy is a universal function of the Efimov three-body parameter [23] for sufficiently low boson densities. The existence of Efimov trimers depends strongly on dimensionality, being absent in the ideal two-dimensional (2D) case [24; 25] and suppressed under realistic transverse confinements [26]. This means that the properties of the 2D Bose polaron may be expected to be independent of microscopic short-range details, and fully characterized by the interparticle spacing of the medium relative to the impurity-boson and boson-boson scattering lengths. On the other hand, the system supports the formation of two- and three-body bound states at any scattering length [24; 27; 28], unlike the 3D case, and one might therefore wonder how these bound states affect the polaron. The 2D Bose polaron has been previously investigated using the Frohlich model [29; 30], a variational approach for light-matter coupled polarons [31], a mean-field approach [32], a non-self consistent \(T\)-matrix approximation approach [33], QMC [34], and renormalization group theory [35]. However, the detailed connection between few- and many-body physics for the 2D Bose polaron in an ultracold atomic gas remains unexplored. In this paper, we investigate the properties of the 2D Bose polaron by applying a variational ansatz that contains up to two Bogoliubov excitations of the BEC. Similarly to the 3D Bose polaron [20], we observe two distinct quasiparticle branches: the attractive and repulsive polarons characterized by negative and positive energies, respectively. For a large range of weak to intermediate attractive interaction strengths, we observe that including two excitations of the BEC as well as interactions within the BEC is necessary to accurately reproduce QMC results [34] for the attractive polaron energy. Furthermore, this level of approximation gives reasonable agreement also for the quasiparticle residue and effective mass of the attractive polaron. Including additional excitations of the medium would likely substantially improve the agreement in the strongly interacting regime where the interparticle spacing is comparable to the size of the impurity-boson bound state. For the repulsive polaron, we find that the energy is well reproduced already with a single excitation; however the residue behaves very differently to that in the QMC, indicating that the metastability of the repulsive polaron plays an important role and requires further study. This paper is organized as follows. In Sec. II we describe our model of an impurity in a 2D BEC and outline our variational approach. In Sec. III we discuss the polaron properties including our results for the energy, residue, and effective mass, and compare these with results from QMC [34]. We conclude in Sec. IV. ## II Model and variational approach ### Model We consider an impurity particle immersed in a weakly interacting 2D BEC at zero temperature. We model the system using a two-channel description of a Feshbach resonance [36], similarly to how the Bose polaron was described in the three-dimensional case in Refs. [9; 11]. 
Measuring the energy with respect to that of the BEC, the Hamiltonian is given by (setting \(\hbar\) and the area to 1): \[\begin{split}&\hat{H}=\sum_{\mathbf{k}}\Big{[}E_{\mathbf{k}} \beta_{\mathbf{k}}^{\dagger}\beta_{\mathbf{k}}+\epsilon_{\mathbf{k}}c_{ \mathbf{k}}^{\dagger}c_{\mathbf{k}}+(\epsilon_{\mathbf{k}}^{d}+\nu_{0})d_{ \mathbf{k}}^{\dagger}d_{\mathbf{k}}\Big{]}\\ &+g\sqrt{n_{0}}\sum_{\mathbf{k}}\Big{(}d_{\mathbf{k}}^{\dagger} c_{\mathbf{k}}+h.c.\Big{)}+g\sum_{\mathbf{k},\mathbf{q}}\left(d_{\mathbf{q}}^{ \dagger}c_{\mathbf{q}-\mathbf{k}}b_{\mathbf{k}}+h.c.\right).\end{split} \tag{1}\] Here, \(b_{\mathbf{k}}^{\dagger}\) and \(c_{\mathbf{k}}^{\dagger}\) correspond to the creation operators of a boson and the impurity, respectively, with momentum \(\mathbf{k}\). \(\epsilon_{\mathbf{k}}=\frac{k^{2}}{2m}\) is the corresponding single-particle energy dispersions, where we assume that the impurity and bosons have equal masses \(m\). \(d_{\mathbf{k}}^{\dagger}\) corresponds to the creation operator of a closed-channel molecule, which is formed from the impurity and a boson when they interact, and we have \(\epsilon_{\mathbf{k}}^{d}=\frac{k^{2}}{4m}\). The bare detuning \(\nu_{0}\) corresponds to the energy of the closed channel relative to the open impurity-boson channel. In writing the Hamiltonian, we have applied the Bogoliubov theory of the weakly interacting Bose gas [37], where we have \(n_{0}a_{\mathrm{B}}^{2}\ll 1\) in terms of the (positive) 2D boson-boson scattering length \(a_{\mathrm{B}}\) and the condensate density \(n_{0}\). The Bogoliubov dispersion is given by \[E_{\mathbf{k}}=\sqrt{\epsilon_{\mathbf{k}}(\epsilon_{\mathbf{k}}+2\mu)}, \tag{2}\] with the chemical potential \(\mu=\frac{4\pi n_{0}/m}{\ln(1/n_{0}a_{\mathrm{B}}^{2})}\)[38]. The bare boson operator \(b_{\mathbf{k}}\) is related to the Bogoliubov operator via the Bogoliubov transformation: \[b_{\mathbf{k}}=u_{\mathbf{k}}\beta_{\mathbf{k}}-v_{\mathbf{k}}\beta_{- \mathbf{k}}^{\dagger}, \tag{3}\] with the coherence factors: \[\begin{split} u_{\mathbf{k}}=\sqrt{\frac{1}{2}\left(\frac{ \epsilon_{\mathbf{k}}+\mu}{E_{\mathbf{k}}}+1\right)},\quad v_{\mathbf{k}}& =\sqrt{\frac{1}{2}\left(\frac{\epsilon_{\mathbf{k}}+\mu}{E_{ \mathbf{k}}}-1\right)}.\end{split} \tag{4}\] The presence of the impurity adds the scattering length \(a_{\mathrm{2D}}\) to the system, which characterizes the interactions between the impurity and the medium. We assume that both the boson-boson and impurity-boson interactions are contact, \(s\)-wave interactions. This assumption is valid since the average interparticle distance and the thermal wavelength far exceed the range of the underlying van der Waals interactions. In applying the Bogoliubov theory of the weakly interacting Bose gas, we have already implicitly assumed that the boson-boson interactions are short-ranged. To be specific, the second line in Eq. (1) describes the interactions between the impurity and a boson with coupling constant \(g\), which proceed via the formation of a closed-channel molecule. The coupling constant \(g\) and the bare detuning \(\nu_{0}\) can be related to the most general form of the low-energy \(s\)-wave 2D scattering amplitude [39]: \[f_{\mathrm{2D}}(k)=\frac{4\pi}{-\ln(k^{2}a_{\mathrm{2D}}^{2})+R_{\mathrm{2D}} ^{2}k^{2}+i\pi}, \tag{5}\] where \(R_{\mathrm{2D}}\) is the 2D range parameter. 
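As a quick numerical illustration of Eqs. (2)–(4) above (my own sketch, not part of the paper; it assumes \(\hbar=m=1\) and picks an illustrative condensate density and boson-boson scattering length), the following Python snippet evaluates the Bogoliubov dispersion and coherence factors and checks the normalization \(u_{\mathbf{k}}^{2}-v_{\mathbf{k}}^{2}=1\) together with the phonon and free-particle limits.

```python
import numpy as np

# Units: hbar = m = 1 (illustrative choice, not from the paper).
m = 1.0
n0 = 1.0                      # condensate density
aB = 1e-2                     # boson-boson scattering length, n0*aB**2 << 1
mu = (4 * np.pi * n0 / m) / np.log(1.0 / (n0 * aB**2))   # chemical potential

k = np.linspace(1e-3, 10.0, 500)
eps_k = k**2 / (2 * m)                       # free-particle dispersion
E_k = np.sqrt(eps_k * (eps_k + 2 * mu))      # Bogoliubov dispersion, Eq. (2)

# Coherence factors, Eq. (4)
u_k = np.sqrt(0.5 * ((eps_k + mu) / E_k + 1.0))
v_k = np.sqrt(0.5 * ((eps_k + mu) / E_k - 1.0))

# The bosonic Bogoliubov transformation preserves the commutator: u^2 - v^2 = 1
assert np.allclose(u_k**2 - v_k**2, 1.0)

# Limiting behaviour: phonon-like at small k, free-particle-like at large k
print(E_k[0] / (np.sqrt(mu / m) * k[0]))   # ~1  (E_k ≈ c*k with c = sqrt(mu/m))
print(E_k[-1] / (eps_k[-1] + mu))          # ~1  (E_k ≈ eps_k + mu)
```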
Calculating the scattering amplitude within the two-channel model (1) and carrying out the renormalization procedure [28], we obtain the scattering amplitude and range parameter [40]: \[a_{\mathrm{2D}}=\frac{1}{\Lambda}e^{\frac{2\pi\nu_{0}}{mg^{2}}},\quad R_{ \mathrm{2D}}=\sqrt{\frac{4\pi}{m^{2}g^{2}}}, \tag{6}\] where \(\Lambda\) is an ultraviolet momentum cut-off on the relative impurity-boson momentum. For the results presented in this paper, we take \(R_{\mathrm{2D}}\to 0\), which corresponds to taking the limit of \(g\rightarrow\infty\) and \(\Lambda\rightarrow\infty\) while adjusting \(\nu_{0}\) to keep \(a_{\mathrm{2D}}\) finite. We note that, although the interacting part of the Hamiltonian in Eq. (1) is written in terms of the bare boson operator, it is always related to the Bogoliubov operator via the Bogoliubov transformation in Eq. (3). Importantly, the impurity-boson interaction always features a bound state in our two-dimensional setting. This so-called dimer state corresponds to the pole of the scattering amplitude, i.e., \(f_{\mathrm{2D}}^{-1}=0\). In the limit of \(R_{\mathrm{2D}}\to 0\), the dimer binding energy \(\varepsilon_{B}\) takes the form: \(\varepsilon_{B}=1/ma_{\mathrm{2D}}^{2}\). Likewise, there exists a three-body bound state consisting of the impurity and two bosons [24], with energy \(E_{\mathrm{T}}=-2.39\varepsilon_{B}\) in the limit \(R_{\mathrm{2D}}\to 0\) and \(a_{\mathrm{B}}\to 0\). Larger multi-body clusters are also predicted to exist in this system [41] but these will not be considered here. In experiment, the quasi-2D geometry may be achieved by applying a harmonic confining potential to a 3D atomic gas along the direction perpendicular to the 2D plane [42]. In this case, the scattering properties of the 3D system can be mapped to the quasi-2D system [40]. The 2D limit is achieved when the size of the two-body bound state, average interparticle spacing, and thermal wavelength are all much larger than the confinement length. ### Variational Ansatz To investigate the properties of the polaron, we consider a variational state that describes the impurity and its dressing by up to two Bogoliubov excitations of the medium. Considering for simplicity a variational state of zero total momentum, this takes the form: \[\ket{\Psi}=\ket{\psi_{0}}+\ket{\psi_{1}}+\ket{\psi_{2}}, \tag{7}\] where \(\ket{\psi_{N}}\) denotes a state with \(N\) Bogoliubov excitations: \[\ket{\psi_{0}} =\alpha_{0}c_{0}^{\dagger}\ket{\Phi},\] \[\ket{\psi_{1}} =\left(\sum_{\mathbf{k}}\alpha_{\mathbf{k}}c_{-\mathbf{k}}^{ \dagger}\beta_{\mathbf{k}}^{\dagger}+\gamma_{0}d_{0}^{\dagger}\right)\ket{\Phi },\] \[\ket{\psi_{2}} =\left(\frac{1}{2}\sum_{\mathbf{k}_{1}\mathbf{k}_{2}}\alpha_{ \mathbf{k}_{1}\mathbf{k}_{2}}c_{-\mathbf{k}_{1}-\mathbf{k}_{2}}^{\dagger} \beta_{\mathbf{k}_{1}}^{\dagger}\beta_{\mathbf{k}_{2}}^{\dagger}+\sum_{ \mathbf{k}}\gamma_{\mathbf{k}}d_{-\mathbf{k}}^{\dagger}\beta_{\mathbf{k}}^{ \dagger}\right)\ket{\Phi}. \tag{8}\] Here, \(\ket{\Phi}\) corresponds to the weakly interacting 2D BEC. The first line of Eq. (8) describes the bare impurity, while the second line is a superposition of the impurity dressed by a single Bogoliubov excitation, and a term where the impurity has bound a particle from the BEC to form a closed-channel dimer. The third line describes a superposition of the impurity dressed by two Bogoliubov excitations and a closed-channel dimer dressed by a single Bogoliubov excitation. 
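The dimer energy quoted above can be recovered directly from the pole of the scattering amplitude in Eq. (5). The sketch below is my own illustration: assuming equal masses (so the relative kinetic energy is \(k^{2}/m\)) and continuing Eq. (5) to negative energy via \(k\to i\kappa\), the pole condition \(f_{\mathrm{2D}}^{-1}=0\) reduces to \(\ln(\kappa^{2}a_{\mathrm{2D}}^{2})+R_{\mathrm{2D}}^{2}\kappa^{2}=0\), whose root gives \(\varepsilon_{B}=\kappa^{2}/m\to 1/ma_{\mathrm{2D}}^{2}\) as \(R_{\mathrm{2D}}\to 0\).

```python
import numpy as np
from scipy.optimize import brentq

# Units: hbar = m = 1 (illustrative). a2d is the impurity-boson 2D scattering length.
m, a2d = 1.0, 1.0

def dimer_kappa(R2d):
    """Root of ln(kappa^2 a^2) + R^2 kappa^2 = 0 (pole of f_2D at E = -kappa^2/m)."""
    f = lambda kappa: np.log(kappa**2 * a2d**2) + R2d**2 * kappa**2
    return brentq(f, 1e-8 / a2d, 1.0 / a2d)   # unique root below 1/a for R2d > 0

for R2d in [0.5, 0.1, 0.01, 1e-4]:
    kappa = dimer_kappa(R2d)
    eps_B = kappa**2 / m
    print(f"R2d = {R2d:7.4f}  ->  eps_B = {eps_B:.6f}  (limit 1/(m a2d^2) = {1/(m*a2d**2):.6f})")
```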
In this paper, we refer to calculations with the variational state up to \(N=1\) bosons as two-body correlations, and up to \(N=2\) as three-body correlations. \(\alpha_{0}\), \(\alpha_{\mathbf{k}}\), \(\alpha_{\mathbf{k}_{1}\mathbf{k}_{2}}\), \(\gamma_{0}\), and \(\gamma_{\mathbf{k}}\) are variational parameters that are normalized according to \(\langle\Psi|\Psi\rangle=1\). These are determined by considering the stationary condition \(\bra{\partial\Psi}(\hat{H}-E)\ket{\Psi}=0\), where the derivative is taken with respect to each variational parameter, yielding a set of coupled linear integral equations for the variational parameters. These equations are identical to those that were originally derived in Ref. [9] in the three-dimensional case, and therefore their explicit form is relegated to Appendix A. In the case of the 3D Bose polaron, the variational ansatz in Eq. (7) was shown in Ref. [9] to capture the physics associated with the exotic three-body Efimov bound states [43]. Although Efimov trimers do not exist in the 2D system, we still have a trimer bound state at any scattering length, where the trimer binding energy can be calculated by taking the few-body limit, \(n_{0}\to 0\), of the coupled equations in Eq. (11). ### Impurity Spectral Function Of particular interest in experiments is the spectral response of the impurity in the medium. To form the polaron, a radio frequency (RF) pulse is used to transfer the impurity from a non-interacting (auxiliary) state into the interacting state. The transition probability is proportional to the impurity spectral function: \[A(\omega)=\sum_{j}\big{|}\bra{\Psi_{0}}\!\phi_{j}\big{\rangle}\big{|}^{2}\delta(\omega-E_{j}), \tag{9}\] where \(\ket{\Psi_{0}}\) denotes the non-interacting polaron state: \(\ket{\Psi_{0}}=c_{0}^{\dagger}\ket{\Phi}\). Here, \(\ket{\phi_{j}}\) denotes the eigenstates of the Hamiltonian in Eq. (1) truncated to the Hilbert space described by the variational ansatz in Eq. (7), and \(E_{j}\) denotes their corresponding eigenvalues. To incorporate a broadening of the spectrum due to the finite duration of the RF pulse in experiment, we convolve the spectral function with a Gaussian of Fourier width \(\sigma_{\mathrm{rf}}\): \[I_{0}(\omega)=\int d\omega^{\prime}A(\omega-\omega^{\prime})\frac{1}{\sqrt{2\pi}\sigma_{\mathrm{rf}}}e^{-\omega^{\prime 2}/2\sigma_{\mathrm{rf}}^{2}}. \tag{10}\] Inserting Eq. (9) in Eq. (10), we obtain the broadened impurity spectral function: \[I_{0}(\omega)=\sum_{j}\big{|}\bra{\Psi_{0}}\!\phi_{j}\big{\rangle}\big{|}^{2}\frac{1}{\sqrt{2\pi}\sigma_{\mathrm{rf}}}e^{-(\omega-E_{j})^{2}/2\sigma_{\mathrm{rf}}^{2}}. \tag{11}\] As we shall see, this spectral function allows us to capture the peaks of the attractive and repulsive polarons as well as the continuum of states across a range of coupling strengths. In the present work, we use a Fourier broadening \(\sigma_{\mathrm{rf}}=0.4n_{0}/m\). ## III Polaron Properties We now discuss the quasiparticle properties of the 2D Bose polaron. In our variational approach, we obtain the energy spectrum by expressing the coupled equations in (11) as an eigenvalue equation and numerically evaluating the eigenvalues and eigenvectors on a discrete grid [44]. This also allows us to obtain the quasiparticle residue \(Z\), i.e., the squared overlap between interacting and non-interacting states. To evaluate the polaron effective mass \(m^{*}\), we extend our variational ansatz to the Bose polaron with a finite momentum (see Appendix B).
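To make the broadened spectral function of Eq. (11) above concrete, here is a minimal numerical sketch (my own, using made-up eigenvalues \(E_{j}\) and overlaps rather than actual solutions of the coupled equations): the spectrum is simply a sum of Gaussians of width \(\sigma_{\mathrm{rf}}\) weighted by the squared overlaps, and the total spectral weight is preserved by the convolution.

```python
import numpy as np

def broadened_spectrum(omega, E_j, overlap_sq, sigma_rf):
    """I_0(omega) = sum_j |<Psi_0|phi_j>|^2 * Gaussian(omega - E_j), cf. Eq. (11)."""
    gauss = np.exp(-(omega[:, None] - E_j[None, :])**2 / (2 * sigma_rf**2))
    gauss /= np.sqrt(2 * np.pi) * sigma_rf
    return gauss @ overlap_sq

# Toy input (illustrative only): an attractive pole, a repulsive pole, and a weak continuum.
E_j = np.array([-2.0, 0.8] + list(np.linspace(-1.2, 3.0, 40)))
overlap_sq = np.array([0.55, 0.30] + [0.15 / 40] * 40)     # squared overlaps sum to 1
omega = np.linspace(-4, 4, 801)
I0 = broadened_spectrum(omega, E_j, overlap_sq, sigma_rf=0.4)

# The integrated spectral weight is (approximately) conserved by the Gaussian broadening.
print(I0.sum() * (omega[1] - omega[0]))   # ~ 1
```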
Note that the variational ansatz with one Bogoliubov excitation (i.e., two-body correlations) is formally equivalent to the non-self-consistent \(T\)-matrix approximation (NSCT). Indeed, a very recent work [33] applied NSCT to the 2D Bose polaron and obtained qualitatively similar results to our lowest order ansatz [45]. However, our full variational ansatz includes three-body correlations and bound states that go beyond this diagrammatic approach. In the following, we compare our results with those from QMC [34] and perturbation theory [30]. In particular, the energy, residue, and effective mass within the weak-coupling perturbative limit \(|\ln\bigl{(}\sqrt{n_{0}}a_{2D}\bigr{)}|\gg 1\) take the forms [30]: \[\frac{E}{n_{0}/m}=-\frac{2\pi}{\ln\left(\sqrt{n_{0}}a_{\rm 2D}\right)}, \tag{12}\] \[Z=\left[1-\frac{1}{2}\frac{\ln\left(\sqrt{n_{0}}a_{\rm B}\right)}{\ln^{2}\left(\sqrt{n_{0}}a_{\rm 2D}\right)}\right]^{-1}, \tag{13}\] \[\frac{m}{m^{*}}=1+\frac{1}{4}\frac{\ln\left(\sqrt{n_{0}}a_{\rm B}\right)}{\ln^{2}\left(\sqrt{n_{0}}a_{\rm 2D}\right)}, \tag{14}\] respectively. ### Polaron Energy In Fig. 1(a), we display the polaron energy obtained from solving the coupled integral equations involving two- and three-body correlations with \(\mu=0\) (corresponding to taking the limit of a non-interacting BEC, i.e., \(a_{\rm B}\to 0^{+}\)). Similarly to the 3D polaron, we observe two branches: the ground-state attractive branch and the metastable repulsive branch. We first consider the attractive branch. In the weak-coupling regime where the size of the bound state greatly exceeds the interparticle spacing, \(\ln\bigl{(}\sqrt{n_{0}}a_{\rm 2D}\bigr{)}\gg 1\), we observe that our variational results from both two- and three-body correlations are in excellent agreement with both QMC [34] and perturbation theory [30] (which were both calculated at \(\mu=0.136n_{0}/m\)). With increasing coupling strength, our variational results obtained from two- and three-body correlations begin to deviate, as multi-body correlations become more important. However, even in the strong-coupling regime where \(a_{\rm 2D}\sim n_{0}^{-1/2}\), our variational results from three-body correlations are still in good agreement with QMC, which is reminiscent of the case for the 3D Bose polaron [13; 11; 46]. In this regime, our variational results lie slightly above QMC, and near-perfect agreement could likely be achieved by including four-body correlations in the variational ansatz as in 3D [11]. While at a qualitative level all methods agree well for weak to moderate interaction strengths, we can gain further insight by taking a closer look at the results in this regime, as shown in Fig. 1(b). First, we see that perturbation theory is well reproduced already by the variational ansatz limited to two-body correlations (which we find to be nearly completely independent of \(\mu\)). Incorporating three-body correlations, we see that the Bose gas chemical potential matters more in this case, and that the inclusion of a finite \(\mu\) allows us to almost perfectly reproduce the QMC results of Ref. [34]. Therefore, we can conclude that incorporating the chemical potential is important in this regime, while the polaron energy is dominated by few-body correlations in the strong-coupling regime. We now turn to the repulsive branch energy, as shown in Fig. 1(a), where we obtain the energy by finding the peak of the spectral function at positive energy.
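For reference, the weak-coupling expressions in Eqs. (12)–(14) are straightforward to evaluate. The sketch below is my own illustration with an arbitrary illustrative value of \(\ln(\sqrt{n_{0}}a_{\mathrm{B}})\) (not the value underlying the QMC comparison), chosen only to show how the three formulas are applied on the attractive side.

```python
import numpy as np

# Illustrative inputs (hbar = 1, equal masses). ln_aB = ln(sqrt(n0)*aB) is a made-up
# value used only to demonstrate Eqs. (12)-(14); it is negative since sqrt(n0)*aB << 1
# for a weakly interacting BEC.
m, n0 = 1.0, 1.0
ln_aB = -5.0

for ln_a2d in [8.0, 6.0, 4.0]:            # weak coupling: |ln(sqrt(n0)*a2D)| >> 1
    E = -2 * np.pi * (n0 / m) / ln_a2d                    # Eq. (12), attractive branch
    Z = 1.0 / (1.0 - 0.5 * ln_aB / ln_a2d**2)             # Eq. (13)
    m_over_mstar = 1.0 + 0.25 * ln_aB / ln_a2d**2         # Eq. (14)
    print(f"ln(sqrt(n0)a2D) = {ln_a2d:4.1f}:  E = {E:6.3f} n0/m,  Z = {Z:5.3f},  m*/m = {1.0/m_over_mstar:5.3f}")
```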
Figure 1: Energy of the Bose polaron as a function of the coupling strength \(\ln\bigl{(}\sqrt{n_{0}}a_{\rm 2D}\bigr{)}\). (a) The blue and purple lines show the results obtained from two- and three-body correlations, respectively, for the ideal BEC with \(\mu=0\). The red dots and grey dash-dotted line show the results obtained from QMC [34] and perturbation theory [30], respectively, with \(\mu=0.136n_{0}/m\). (b) Same as in (a) where the additional purple dashed line shows the results obtained from three-body correlations for a weakly interacting BEC with \(\mu=0.136n_{0}/m\) (the variational energy obtained from two-body correlations at this \(\mu\) is not included as it is indistinguishable from the result for \(\mu=0\) on this scale).
Similarly to the attractive branch, our variational results from two- and three-body correlations are in excellent agreement with QMC and perturbation theory in the weak-coupling regime, \(\ln(\sqrt{n_{0}}a_{\rm 2D})\ll-1\). Towards the strong-coupling regime, the decay width of the repulsive polaron becomes comparable to its energy, indicating that the repulsive polaron ceases to correspond to a well-defined quasiparticle. This results in the discrepancy in our evaluation of the repulsive polaron energy around \(\ln(\sqrt{n_{0}}a_{\rm 2D})\simeq-1\). In Fig. 2, we show the spectral response obtained from (a) two-body correlations and (b) three-body correlations. In both panels we observe that the repulsive branch gets broadened towards the strong-coupling regime, indicating a higher uncertainty of the polaron energy and an associated shorter lifetime. Thus, decay processes including many-body dephasing [47] and relaxation into the lower-lying continuum of states dominate in this regime. In (b), we also observe the emergence of an additional excited state of the attractive polaron. While the lower attractive branch corresponds to a single eigenstate involving the impurity dressed by up to two excitations of the medium, the upper attractive branch corresponds to a continuum of states involving the impurity dressed by at most one excitation, moving with respect to another excitation.
Figure 2: Spectral response of the impurity in the BEC obtained from (a) two-body correlations and (b) three-body correlations with \(\mu=0\). The spectral response for a weakly interacting BEC with \(\mu=0.136n_{0}/m\) is not included as it is almost identical to that for the ideal BEC, with slight broadening in the excited attractive branch for three-body correlations.
### Residue The residue \(Z\) quantifies the squared overlap of the interacting state with the non-interacting state. Thus, for the attractive polaron which consists of only a single state, the residue reads: \[Z=|\alpha_{0}|^{2}. \tag{15}\] For the repulsive branch which corresponds to a continuum of states, we define the residue as the sum of all states with positive energy, \(Z=\sum_{j,E_{j}>0}|\alpha_{0}^{(j)}|^{2}\). In Fig. 3(a), we display the attractive polaron residue obtained from our variational approach including two- and three-body correlations.
Figure 3: Residue of (a) the attractive Bose polaron and (b) the repulsive Bose polaron as a function of the coupling strength \(\ln(\sqrt{n_{0}}a_{\rm 2D})\). The blue and purple solid lines show the results obtained from two- and three-body correlations, respectively, for the ideal BEC with \(\mu=0\). The red dots and grey dash-dotted line show the results obtained from QMC [34] and perturbation theory [30], respectively, with \(\mu=0.136n_{0}/m\). The purple dashed line shows the results from three-body correlations for a weakly interacting BEC with \(\mu=0.136n_{0}/m\) (for the attractive polaron obtained from two-body correlations and for the repulsive polaron, we do not see a visible difference between \(\mu=0\) and \(\mu=0.136n_{0}/m\) on this scale).
For the case of the ideal BEC, we observe a clear difference between two- and three-body correlations in the weak-coupling regime, \(\ln(\sqrt{n_{0}}a_{\rm 2D})\gg 1\). The residue in the case of two-body correlations remains close to 1, while in the case of three-body correlations, it appears significantly lowered. There are two reasons for this: First, the existence of the trimer state enhances impurity dressing by the excitations of the medium, and second, the infinite compressibility of the ideal BEC means that the impurity can excite an arbitrary number of low-energy excitations, thus leading to the so-called orthogonality catastrophe [48; 49; 11]. While the latter effect cannot be captured within our variational approach (being limited to few-body correlations), it still suppresses the residue when considering three-body correlations. Unlike the case of the polaron energy, we find that the residue strongly depends on the medium chemical potential, which changes the compressibility of the BEC. Indeed, upon including a small chemical potential, we observe in the weak-coupling regime that the residue obtained from three-body correlations recovers values close to those from two-body correlations, QMC and perturbation theory. On the other hand, in the strong-coupling regime, the residue is almost completely unaffected by the chemical potential of the BEC and is instead determined by few-body bound states. For the repulsive polaron shown in Fig. 3(b), we find that the result is nearly completely independent of the medium chemical potential and is insensitive to the number of excitations, unlike the attractive case. However, we see that our variational results strongly deviate from QMC and perturbation theory. This suggests that the metastability of the repulsive polaron might play a strong role in the behavior of the spectral weight, since this is not accounted for in the QMC. We also note that our calculations yield attractive and repulsive polaron residues that nearly sum to 1, as expected, while QMC finds that substantial spectral weight shifts elsewhere in the spectrum. This discrepancy warrants additional future studies. ### Effective Mass Thus far, we have considered states with zero total momentum, which is relevant for the spectrum at zero temperature. However, our approach also contains information about the behavior at small momentum \(\mathbf{p}\) and thus the dynamical response of the system. For the case of the attractive polaron, where there is a well-defined quasiparticle, the polaron energy can be expanded as: \[E(p)=E(0)+\frac{p^{2}}{2m^{*}}+\mathcal{O}(p^{4}), \tag{16}\] where \(m^{*}\) is the polaron effective mass. This modified mass arises from the interactions between the impurity and the medium, which affect the mobility of the impurity. We calculate the polaron effective mass using our variational ansatz, as outlined in Appendix B. Figure 4 shows the effective mass of the attractive polaron obtained from two- and three-body correlations.
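Equation (16) above also suggests a simple numerical route to \(m^{*}\) once the dispersion is known at a few small momenta: fit the low-\(p\) points with a quadratic in \(p\). The sketch below is my own toy example with a fabricated dispersion (it does not use the actual variational \(E(p)\)), included only to illustrate the fitting step.

```python
import numpy as np

# Fabricated low-momentum dispersion with a known effective mass (illustration only).
m, m_star_true, E0 = 1.0, 1.6, -2.4
p = np.linspace(0.0, 0.3, 7)
E_p = E0 + p**2 / (2 * m_star_true) + 0.05 * p**4     # small quartic correction

# Fit E(p) = E(0) + p^2/(2 m*) + O(p^4), cf. Eq. (16): a linear fit in p^2.
coeffs = np.polyfit(p**2, E_p, 1)        # slope = 1/(2 m*), intercept = E(0)
m_star_fit = 1.0 / (2.0 * coeffs[0])

print(f"fitted m*/m = {m_star_fit/m:.3f} (input {m_star_true/m:.3f}), E(0) = {coeffs[1]:.3f}")
```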
Similarly to the residue of the attractive polaron in the weak-coupling regime, the effective mass obtained from three-body correlations in the case of the ideal BEC appears substantially larger than the results from two-body correlations. This signals enhanced impurity dressing by excitations of the medium, which can be compensated by including weak interactions in the BEC (corresponding to a small chemical potential). The latter is seen to match perturbation theory in the weak-coupling regime and reasonably agrees with QMC for \(\ln(\sqrt{n_{0}}a_{2D})\gtrsim 4\). Towards the strong-coupling limit, the effective mass in our variational ansatz converges to \(m^{*}=(N+1)m\), where \(N\) is the number of bosons included in the evaluations, and thus cannot capture the dressing by many bosons seen in the QMC. However, our calculation might be able to describe an excited metastable polaron with a spectral weight greater than that of the ground-state polaron [whose residue vanishes in this regime according to QMC, as shown in Fig. 3(a)].
Figure 4: Effective mass \(m^{*}\) of the attractive Bose polaron as a function of the coupling strength \(\ln(\sqrt{n_{0}}a_{\rm 2D})\). The blue and purple lines show the results obtained from two- and three-body correlations, respectively, for the ideal BEC with \(\mu=0\). The red dots and grey dash-dotted line show the results obtained from QMC [34] and perturbation theory [30], respectively, with \(\mu=0.136n_{0}/m\). The purple dashed line shows the results from three-body correlations for a weakly interacting BEC with \(\mu=0.136n_{0}/m\) (the corresponding result for two-body correlations at this \(\mu\) is not included as it is indistinguishable from the result for \(\mu=0\) on this scale).
## IV Conclusions In conclusion, we have extensively examined the properties of the 2D Bose polaron using a variational approach that incorporates up to two excitations of the medium. We have shown that considering three-body correlations and a weak BEC chemical potential is crucial to achieve agreement with the results obtained from QMC calculations. Particularly in the weak-coupling regime, our variational approach, accounting for three-body correlations in the presence of a weakly interacting BEC, exhibits excellent agreement with QMC results for the polaron energy. However, as we approach the strong-coupling regime, discrepancies arise between the energy values obtained from our variational approach and QMC due to strong few-body correlations. Furthermore, we have found that the residue and effective mass of the attractive polaron strongly depend on the level of correlations we incorporate in our ansatz, as well as the interactions within the BEC. On the other hand, the residue of the repulsive branch is remarkably insensitive to the number of Bogoliubov excitations and appears to be strongly influenced by the metastable nature of the repulsive polaron. Our results can potentially be probed in ultracold atomic gas experiments, similarly to how the 2D Fermi polaron was previously investigated [50; 51; 52]. The Bose polaron may also be investigated in 2D semiconductor microcavities with exciton polaritons (bound electron-hole pairs strongly coupled to light). Here, exciton-polariton "impurities" in a given circular polarization are dressed by a medium of exciton polaritons in the opposite polarization [31]. Indeed, this intriguing extension of traditional polaron physics has recently been realized in a MoSe\({}_{2}\) monolayer [53].
These experimental platforms offer exciting opportunities to probe and further investigate the properties and behavior of the Bose polaron. ###### Acknowledgements. We acknowledge insightful discussions with Luis A. Pena Ardila and we thank him for sharing the QMC data from Ref. [34]. We also acknowledge discussions with Arturo Camacho-Guardian. We acknowledge support from the Australian Research Council Centre of Excellence in Future Low-Energy Electronics Technologies (CE170100039). MMP and JL are also supported through the Australian Research Council Future Fellowships FT160100244 and FT200100619, respectively. ## Appendix A Coupled integral equations The stationary condition \(\left\langle\partial\Psi\right|\left(\hat{H}-E\right)\left|\Psi\right\rangle=0\) with respect to the variational parameters \(\alpha_{0}^{*}\), \(\alpha_{\mathbf{k}}^{*}\), \(\alpha_{\mathbf{k}_{1}\mathbf{k}_{2}}^{*}\), \(\gamma_{0}^{*}\), and \(\gamma_{\mathbf{k}}^{*}\) yields five coupled equations [9]: \[E\alpha_{0} =g\sqrt{n_{0}}\gamma_{0}-g\sum_{\mathbf{k}}v_{\mathbf{k}}\gamma_{ \mathbf{k}},\] \[(E-\epsilon_{\mathbf{k}}-E_{\mathbf{k}})\alpha_{\mathbf{k}} =gu_{\mathbf{k}}\gamma_{0}+g\sqrt{n_{0}}\gamma_{\mathbf{k}},\] \[(E-E_{\mathbf{k}_{1}\mathbf{k}_{2}})\alpha_{\mathbf{k}_{1} \mathbf{k}_{2}} =g(\gamma_{\mathbf{k}_{1}}u_{\mathbf{k}_{2}}+\gamma_{\mathbf{k}_ {2}}u_{\mathbf{k}_{1}}),\] \[(E-\nu_{0})\gamma_{0} =g\sqrt{n_{0}}\alpha_{0}+g\sum_{\mathbf{k}}u_{\mathbf{k}}\alpha_{ \mathbf{k}},\] \[(E-\epsilon_{\mathbf{k}}^{d}-\nu_{0}-E_{\mathbf{k}})\gamma_{ \mathbf{k}} =g\sqrt{n_{0}}\alpha_{\mathbf{k}}-gv_{\mathbf{k}}\alpha_{0}+g\sum _{\mathbf{k}^{\prime}}u_{\mathbf{k}^{\prime}}\alpha_{\mathbf{k}\mathbf{k}^{ \prime}}, \tag{10}\] where \(E_{\mathbf{k}_{1}\mathbf{k}_{2}}\equiv E_{\mathbf{k}_{1}}+E_{\mathbf{k}_{2}}+ \epsilon_{\mathbf{k}_{1}+\mathbf{k}_{2}}\). Using the first three equations to remove the \(\alpha\) parameters, we obtain the reduced coupled equations: \[\mathcal{T}^{-1}(E,\mathbf{0})\gamma_{0} =\frac{n_{0}}{E}\gamma_{0}+\sqrt{n_{0}}\sum_{\mathbf{k}}\left( \frac{u_{\mathbf{k}}\gamma_{\mathbf{k}}}{E-\epsilon_{\mathbf{k}}-E_{\mathbf{k}} }-\frac{v_{\mathbf{k}}\gamma_{\mathbf{k}}}{E}\right),\] \[\mathcal{T}^{-1}(E-E_{\mathbf{k}},\mathbf{k})\gamma_{\mathbf{k}} =\sqrt{n_{0}}\left(\frac{u_{\mathbf{k}}}{E-\epsilon_{\mathbf{k}}- E_{\mathbf{k}}}-\frac{v_{\mathbf{k}}}{E}\right)\gamma_{0}+\frac{n_{0}}{E- \epsilon_{\mathbf{k}}-E_{\mathbf{k}}}\gamma_{\mathbf{k}}+\sum_{\mathbf{q}} \left(\frac{u_{\mathbf{k}}u_{\mathbf{q}}\gamma_{\mathbf{q}}}{E-E_{\mathbf{k} \mathbf{q}}}+\frac{v_{\mathbf{k}}v_{\mathbf{q}}\gamma_{\mathbf{q}}}{E}\right), \tag{11}\] where we have defined the medium \(T\) matrix \(\mathcal{T}(E,\mathbf{k})\): \[\mathcal{T}^{-1}(E,\mathbf{k})=\frac{m^{2}R_{\text{2D}}^{2}}{4\pi}\left(E+ \varepsilon_{B}-\epsilon_{\mathbf{k}}^{d}\right)-\frac{m}{4\pi}\ln\left(\frac{- E}{\varepsilon_{B}}\right)-\sum_{\mathbf{q}}\left(\frac{u_{\mathbf{q}}^{2}}{E-E_{ \mathbf{q}}-\epsilon_{\mathbf{k}+\mathbf{q}}}+\frac{1}{2\varepsilon_{\mathbf{ q}}-E}\right). 
\tag{12}\] ## Appendix B Effective mass To evaluate the polaron effective mass, we extend our variational ansatz to include a finite momentum \(\mathbf{p}\): \[\left|\Psi^{(\mathbf{p})}\right\rangle=\left(\alpha_{0}^{(\mathbf{p})}c_{ \mathbf{p}}^{\dagger}+\sum_{\mathbf{k}}\alpha_{\mathbf{k}}^{(\mathbf{p})}c_{ \mathbf{p}-\mathbf{k}}^{\dagger}\beta_{\mathbf{k}}^{\dagger}+\frac{1}{2}\sum_ {\mathbf{k}_{1}\mathbf{k}_{2}}\alpha_{\mathbf{k}_{1}\mathbf{k}_{2}}^{(\mathbf{ p})}c_{\mathbf{p}-\mathbf{k}_{1}-\mathbf{k}_{2}}^{\dagger}\beta_{\mathbf{k}_{1}}^{ \dagger}\beta_{\mathbf{k}_{2}}^{\dagger}+\gamma_{0}^{(\mathbf{p})}d_{\mathbf{ p}}^{\dagger}+\sum_{\mathbf{k}}\gamma_{\mathbf{k}}^{(\mathbf{p})}d_{\mathbf{p}- \mathbf{k}}^{\dagger}\beta_{\mathbf{k}}^{\dagger}\right)\left|\Phi\right\rangle. \tag{10}\] The impurity momentum breaks rotational symmetry and the variational parameters now depend on the angle between the total momentum and each wavevector of the Bogoliubov excitations. Taking the stationary condition, we obtain the modified coupled integral equations: \[(E(p)-\epsilon_{\mathbf{p}})\alpha_{0}^{(\mathbf{p})} = g\sqrt{n_{0}}\gamma_{0}^{(\mathbf{p})}-g\sum_{\mathbf{k}}v_{ \mathbf{k}}\gamma_{\mathbf{k}}^{(\mathbf{p})},\] \[(E(p)-\epsilon_{\mathbf{k}-\mathbf{p}}-E_{\mathbf{k}})\alpha_{ \mathbf{k}}^{(\mathbf{p})} = gu_{\mathbf{k}}\gamma_{0}^{(\mathbf{p})}+g\sqrt{n_{0}}\gamma_{ \mathbf{k}}^{(\mathbf{p})},\] \[(E(p)-E_{\mathbf{k}_{1}}-E_{\mathbf{k}_{2}}-\epsilon_{\mathbf{k }_{1}+\mathbf{k}_{2}-\mathbf{p}})\alpha_{\mathbf{k}_{1}\mathbf{k}_{2}}^{( \mathbf{p})} = g(\gamma_{\mathbf{k}_{1}}^{(\mathbf{p})}u_{\mathbf{k}_{2}}+ \gamma_{\mathbf{k}}^{(\mathbf{p})}u_{\mathbf{k}_{1}}),\] \[(E(p)-\epsilon_{\mathbf{p}}^{d}-\nu_{0})\gamma_{0}^{(\mathbf{p})} = g\sqrt{n_{0}}\alpha_{0}^{(\mathbf{p})}+g\sum_{\mathbf{k}}u_{ \mathbf{k}}\alpha_{\mathbf{k}}^{(\mathbf{p})},\] \[(E(p)-\epsilon_{\mathbf{k}-\mathbf{p}}^{d}-\nu_{0}-E_{\mathbf{k}} )\gamma_{\mathbf{k}}^{(\mathbf{p})} = g\sqrt{n_{0}}\alpha_{\mathbf{k}}^{(\mathbf{p})}-gv_{\mathbf{k} }\alpha_{0}^{(\mathbf{p})}+g\sum_{\mathbf{k}^{\prime}}u_{\mathbf{k}^{\prime}} \alpha_{\mathbf{k}\mathbf{k}^{\prime}}^{(\mathbf{p})}, \tag{11}\] where \(E(p)\) is the energy dispersion of the Bose polaron with a total momentum \(\mathbf{p}\). For a small momentum \(\mathbf{p}\), the energy dispersion can be expanded as: \[E(p)=E(0)+\frac{p^{2}}{2m^{*}}+\mathcal{O}(p^{4}), \tag{12}\] where \(m^{*}\) is the polaron effective mass. In principle, the effective mass can be obtained by solving Eqs. (11) and (12) at various momenta. However, this procedure is numerically cumbersome due to the absence of rotational symmetry. We avoid this problem by employing the perturbative approach introduced in Ref. [54] (see also Ref. [11]). Since the coupled integral equations in Eq. (11) are all linear, they can be expressed as the eigenvalue problem: \[\hat{X}(E(p),\mathbf{p})\left|\eta\right\rangle=0, \tag{13}\] where \(\left|\eta\right\rangle\) denotes the ordered set of variational parameters \(\alpha_{0}^{(\mathbf{p})}\), \(\alpha_{\mathbf{k}}^{(\mathbf{p})}\), \(\alpha_{\mathbf{k}_{1}\mathbf{k}_{2}}^{(\mathbf{p})}\), \(\gamma_{0}^{(\mathbf{p})}\), and \(\gamma_{\mathbf{k}}^{(\mathbf{p})}\). Without loss of generality, we assume that the momentum \(\mathbf{p}\) is oriented along the x-axis. The energy dispersion of the ground state polaron corresponds to the zero-crossing of the lowest (or highest, depending on the sign convention of the operator \(\hat{X}\)) eigenvalue of \(\hat{X}\). 
Assuming that the momentum \(\mathbf{p}\) is small, the Taylor series of \(\hat{X}\) up to \(\mathcal{O}(p^{2})\) reads: \[\hat{X}(E(p),\mathbf{p})=\hat{X}_{0}+p\hat{X}_{p}+\frac{p^{2}}{2}\left(\hat{X} _{pp}+\frac{1}{m^{*}}\hat{X}_{E}\right)+\mathcal{O}(p^{3}), \tag{14}\] where we have defined operators: \[\hat{X}_{0}=\hat{X}(E(0),0),\quad\hat{X}_{p}=\left.\frac{\partial\hat{X}}{ \partial p}\right|_{E=E(0)},\quad\hat{X}_{pp}=\left.\frac{\partial^{2}\hat{X}}{ \partial p^{2}}\right|_{E=E(0)},\quad\hat{X}_{E}=\left.\frac{\partial\hat{X}}{ \partial E}\right|_{E=E(0)}. \tag{15}\] Treating the operators proportional to \(p\) in Eq. (14) as small perturbations to \(\hat{X}_{0}\), i.e., the coupled integral equations for a \(\mathbf{p}=0\) impurity, the lowest eigenvalue \(\lambda(p)\) of \(\hat{X}(E(p),\mathbf{p})\) can be expressed as: \[\lambda(p)=\lambda_{0}+\lambda_{1}p+\lambda_{2}p^{2}+\mathcal{O}(p^{3}) \tag{16}\] The first- and second-order corrections \(\lambda_{i}\) are calculated from perturbation theory: \[\lambda_{1} = \left\langle\eta^{(0)}\right|\hat{X}_{p}\left|\eta^{(0)}\right\rangle = 0, \tag{17}\] \[\lambda_{2} = \frac{1}{2}\left\langle\eta^{(0)}\right|\left(\hat{X}_{pp}+\frac{ 1}{m^{*}}\hat{X}_{E}\right)\left|\eta^{(0)}\right\rangle+\sum_{i>0}\frac{ \left|\left\langle\eta^{(i)}\right|\hat{X}_{p}\left|\eta^{(0)}\right\rangle \right|^{2}}{\lambda_{0}^{(0)}-\lambda_{0}^{(i)}}, \tag{18}\] where \(\lambda_{0}^{(i)}\) and \(\left|\eta^{(i)}\right\rangle\) denote the \(i\)th eigenvalue and its corresponding eigenvector of \(\hat{X}_{0}\), and \(i=0\) indicates the ground state. Recalling that the ground state energies of the polaron with \(\mathbf{p}=0\) and \(\mathbf{p}\neq 0\) are given, respectively, by the zero-crossing of the lowest eigenvalues of \(\hat{X}_{0}\) and \(\hat{X}(E(p),\mathbf{p})\), we can set \(\lambda_{0}^{(0)}=0\) and \(\lambda(p)=0\). In calculating the first-order correction \(\lambda_{1}=0\), we have taken the \(s\)-wave average of the expectation value. Thus, we conclude that \(\lambda_{2}=0\) and obtain the inverse effective mass: \[\frac{1}{m^{*}} =\left\langle\eta^{(0)}\right|\hat{X}_{E}\left|\eta^{(0)}\right\rangle ^{-1}\left[\sum_{i>0}\frac{2}{\lambda_{0}^{(i)}}\left|\left\langle\eta^{(i)} \right|\hat{X}_{p}\left|\eta^{(0)}\right\rangle\right|^{2}-\left\langle\eta^{ (0)}\right|\hat{X}_{pp}\left|\eta^{(0)}\right\rangle\right]\] \[=\left\langle\eta^{(0)}\right|\hat{X}_{E}\left|\eta^{(0)}\right\rangle ^{-1}\left[2\left\langle\eta^{(0)}\right|\hat{X}_{p}\hat{Q}\hat{X}_{0}^{-1} \hat{Q}\hat{X}_{p}\left|\eta^{(0)}\right\rangle-\left\langle\eta^{(0)}\right| \hat{X}_{pp}\left|\eta^{(0)}\right\rangle\right], \tag{10}\] where \(\hat{Q}=\mathbb{1}-\left|\eta^{(0)}\right\rangle\left\langle\eta^{(0)}\right|\).
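The equivalence of the two lines in the final expression for \(1/m^{*}\) (the sum over excited states versus the projected resolvent \(\hat{Q}\hat{X}_{0}^{-1}\hat{Q}\)) is easy to verify for finite matrices. The sketch below is my own finite-dimensional check with random symmetric matrices standing in for \(\hat{X}_{0}\) and \(\hat{X}_{p}\); since \(\hat{X}_{0}\) annihilates the ground state, the inverse on the complement is implemented with a pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Random symmetric stand-in for X_0, shifted so its lowest eigenvalue is exactly 0
# (the ground state eta^(0) then satisfies X_0 |eta0> = 0, as in the text).
A = rng.normal(size=(n, n)); A = 0.5 * (A + A.T)
evals, _ = np.linalg.eigh(A)
X0 = A - evals[0] * np.eye(n)
lam, eta = np.linalg.eigh(X0)          # lam[0] ~ 0, eta[:, 0] = ground state
eta0 = eta[:, 0]

Xp = rng.normal(size=(n, n)); Xp = 0.5 * (Xp + Xp.T)   # stand-in for X_p

# Form 1: sum over excited states of |<eta_i|X_p|eta_0>|^2 / lambda_i
sum_form = sum(np.abs(eta[:, i] @ Xp @ eta0)**2 / lam[i] for i in range(1, n))

# Form 2: projected resolvent <eta_0| X_p Q X_0^{-1} Q X_p |eta_0>,
# with Q = 1 - |eta_0><eta_0| and the inverse taken on the complement (pseudo-inverse).
Q = np.eye(n) - np.outer(eta0, eta0)
resolvent_form = eta0 @ Xp @ Q @ np.linalg.pinv(X0, rcond=1e-10) @ Q @ Xp @ eta0

print(sum_form, resolvent_form)        # the two forms agree
assert np.isclose(sum_form, resolvent_form)
```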
An impurity particle interacting with a Bose-Einstein condensate (BEC) leads to the formation of a quasiparticle known as the Bose polaron. We investigate the properties of the two-dimensional Bose polaron, applying a variational ansatz that contains up to two Bogoliubov excitations of the BEC. Similar to its three-dimensional counterpart, two quasiparticle branches, the attractive and the repulsive polarons, appear at different coupling strengths. Their energies agree well with recent quantum Monte Carlo calculations. In particular, the inclusion of two excitations is crucial to capture the attractive polaron energy towards the regime of strong attraction, where the quasiparticle properties are dominated by few-body correlations. We also calculate the effective mass and residue of the attractive polaron, where we find significant differences between a weakly interacting Bose medium and the non-interacting limit, signalling enhanced impurity dressing by excitations in the latter case.
2309.07701
Semantic reconstruction of continuous language from MEG signals
Decoding language from neural signals holds considerable theoretical and practical importance. Previous research has indicated the feasibility of decoding text or speech from invasive neural signals. However, when using non-invasive neural signals, significant challenges are encountered due to their low quality. In this study, we proposed a data-driven approach for decoding semantic of language from Magnetoencephalography (MEG) signals recorded while subjects were listening to continuous speech. First, a multi-subject decoding model was trained using contrastive learning to reconstruct continuous word embeddings from MEG data. Subsequently, a beam search algorithm was adopted to generate text sequences based on the reconstructed word embeddings. Given a candidate sentence in the beam, a language model was used to predict the subsequent words. The word embeddings of the subsequent words were correlated with the reconstructed word embedding. These correlations were then used as a measure of the probability for the next word. The results showed that the proposed continuous word embedding model can effectively leverage both subject-specific and subject-shared information. Additionally, the decoded text exhibited significant similarity to the target text, with an average BERTScore of 0.816, a score comparable to that in the previous fMRI study.
Bo Wang, Xiran Xu, Longxiang Zhang, Boda Xiao, Xihong Wu, Jing Chen
2023-09-14T13:19:53
http://arxiv.org/abs/2309.07701v1
# Semantic Reconstruction of Continuous Language from MEG Signals ###### Abstract Decoding language from neural signals holds considerable theoretical and practical importance. Previous research has indicated the feasibility of decoding text or speech from invasive neural signals. However, when using non-invasive neural signals, significant challenges are encountered due to their low quality. In this study, we proposed a data-driven approach for decoding semantic of language from Magnetoencephalography (MEG) signals recorded while subjects were listening to continuous speech. First, a multi-subject decoding model was trained using contrastive learning to reconstruct continuous word embeddings from MEG data. Subsequently, a beam search algorithm was adopted to generate text sequences based on the reconstructed word embeddings. Given a candidate sentence in the beam, a language model was used to predict the subsequent words. The word embeddings of the subsequent words were correlated with the reconstructed word embedding. These correlations were then used as a measure of the probability for the next word. The results showed that the proposed continuous word embedding model can effectively leverage both subject-specific and subject-shared information. Additionally, the decoded text exhibited significant similarity to the target text, with an average BERTScore of 0.816, a score comparable to that in the previous fMRI study. Bo Wang\({}^{l}\), Xiran Xu\({}^{l}\), Longxiang Zhang\({}^{l}\), Boda Xiao\({}^{2}\), Xihong Wu\({}^{l,3}\), Jing Chen\({}^{l,3}\)\({}^{1}\)Key Laboratory of Machine Perception (Ministry of Education), Speech and Hearing Research Center, School of Intelligence Science and Technology, Peking University \({}^{2}\)Academy for Advanced Interdisciplinary Studies, Peking University \({}^{3}\)National Biomedical Imaging Center, College of Future Technology, Peking University _janechenjing@pku.edu.cn_ Semantic decoding, MEG, brain-computer interface, text generation ## 1 Introduction Every year, a considerable number of people lose their ability to speak due to cerebral stroke or ALS (Amyotrophic Lateral Sclerosis). Over the past few decades, brain-computer interfaces (BCIs) have made great progress in language decoding. Previous studies have demonstrated that acoustic information [1, 2], articulatory movements [3, 4], and semantic information [5] could be effectively decoded from intracranial recordings, offering hope for restoring communication to the patients. However, since invasive, these BCIs were not suitable for the majority of the patients. Decoding semantic of language from non-invasive recordings, on the other hand, remains a major challenge. Research utilizing functional magnetic resonance imaging (fMRI) has demonstrated the robust decoding of semantic information from the blood-oxygen-level dependent (BOLD) response [6, 7]. Nonetheless, fMRI lacks portability and is unable to capture rapid changes, making it impractical for real-time applications in daily life [8]. Instead, Electro-/Magnetoencephalography (EEG/MEG) measures neural activity at millisecond resolution with a safe and potentially wearable setup [9], making them more suitable for BCI application. However, decoding semantic from EEG/MEG is extremely challenging due to its low signal-to-noise ratio (SNR) and low spatial resolution. Previous studies typically used a subject-specific or common linear decoder to regress semantic features from EEG/MEG data evoked by a single word [10, 11, 12, 13, 14]. 
Subsequently, the most probable word from a small closed set was selected as the decoding output based on the distance between the reconstructed features and the true features. In the work of Hulten et al. [11], MEG data were recorded when subjects were reading the written text of 118 nouns. Subject-specific ridge linear models were adopted to predict word2vec features from MEG data using leave-two-out pairwise comparisons (the model was trained on 116 of the words and tested on the remaining 2). A maximum mean pair-wise decoding accuracy of 66% (chance accuracy: 50%) was reported with a decoding window of 100 ms. In a more recent work of Ghazaryan et al. [12], a similar paradigm was used but with a smaller word set (60 words), and decoding accuracy significantly higher than chance level was only shown for 6 of 19 subjects. When averaging MEG data across subjects to improve signal SNR, the decoding accuracy was up to 78%. The limited amount of data and simple linear models impose constraints on the effective modeling of the intricate relationship between neural responses and features. Moreover, since previous studies showed that neural representations of semantics shared certain commonalities but also displayed a diversity across individuals [15], [16], the common or subject-specific decoder was unable to capture both shared and subject-specific neural response patterns to the same stimulus. As a result, the decoding performance was limited, and it was only possible to select words from a closed set, rather than perform open vocabulary decoding.
Figure 1: Architecture of generating word sequences from MEG with a CWER model and beam search decoding.
To address these issues, we proposed a novel framework to decode semantics from MEG signals which were recorded when the subjects were listening to speech, i.e., spoken stories. Our framework consists of two parts, as shown in Fig. 1. Firstly, a continuous word embedding reconstruction (CWER) model is trained to reconstruct continuous word embeddings from MEG. The continuous word embeddings were derived from the temporal sequence of spoken words and a transformer-based pre-trained language model (PLM), which encodes important linguistic information, including semantic features, syntactic features, and long-distance dependencies [17], [18]. Neuroimaging studies have indicated that these word embeddings could be effectively mapped to the neural activity measured during natural speech processing [5], [19], [20]. Subsequently, the similarity is measured between the reconstructed word embeddings and the word embeddings of the subsequent words predicted by a language model (LM) for a candidate word sequence. The similarity is then used as a measure of the probability of the next word. By incorporating this probability into a beam search algorithm, we can generate the most probable sequence of words. Compared with the previous studies [10]-[14], our framework models the relationship between neural responses and semantic features in a data-driven manner through neural networks. By adding a subject embedding layer, the reconstruction model captures both the subject-specific and the common patterns of the neural responses to the same piece of speech. In terms of the decoding output, instead of selecting a word from a small closed set, our framework can generate open-vocabulary continuous word sequences.
## 2 Methods ### Continuous word embedding reconstruction model The CWER model aimed to reconstruct word embeddings from a sequence of high-dimensional MEG signals. These MEG signals were recorded when healthy subjects were listening passively to stories in their native language. The story text was fed into the PLM, and the hidden-layer output was utilized as the target word embeddings. With the temporal boundaries of words, continuous word embeddings can be created by filling each word embedding into its corresponding time period. Consider \(X\in R^{C\times T}\) as a segment of MEG data from subject \(s\in[S]\), where \(C\) represents the number of MEG sensors, \(T\) denotes the number of time samples, and \(S\) is the subject number. Let \(Z\in R^{D\times T}\) be a segment of continuous word embeddings of the story text corresponding to the MEG segment. Here, \(D\) represents the dimensionality of the word embeddings, and both \(X\) and \(Z\) share the same time samples. The CWER model takes \(X\) and \(s\) as input and outputs the estimated \(\hat{Z}\). A previous study revealed that the MEG response to spoken words showed dynamic spatial patterns [21]. As \(1D\) convolution can effectively capture temporal dependencies while extracting spatial patterns among channels, it was used in the present study. An overview diagram of the CWER model is shown in Fig. 2. The MEG data \(X\) are first fed into a linear layer followed by a \(1\times 1\) convolution layer to transform the MEG signals into a higher-dimensional hidden space of dimension \(D_{1}\). Then, a subject embedding layer is added, conditioning on the subject's one-hot label, to learn a linear transformation \(M_{s}\in R^{D_{1}\times D_{2}}\) along the channel dimension for each subject. This allows us to account for inter-subject variation. Subsequently, a stack of five blocks of three convolutional layers was applied to extract MEG spatial-temporal features. The convolutional layer was the same as that in previous work [22] but with a kernel size of 9 over the time axis to further increase the receptive field. Finally, the output of the convolutional blocks was sequentially fed into a \(1\times 1\) convolution layer with \(2D_{2}\) channels, followed by a GELU activation, and then another \(1\times 1\) convolution layer with \(D\) channels to output the estimated \(\hat{Z}\). The reconstructed embeddings were expected to be similar to the target embeddings while being dissimilar to the non-target embeddings, enabling effective selection of target words from the set during the subsequent beam search decoding. Therefore, a contrastive loss was adopted. More specifically, for a segment of MEG data \(X\), the corresponding word embedding \(Z\) is considered as a positive sample (denoted as \(Z^{1}\)), and \(N-1\) non-target word embedding segments \(\{Z^{2},...,Z^{N}\}\) sampled from the training set are the negative samples. Here, \(Z^{i}\) (\(i=1,2,...,N\)) comprises a series of \(T\) \(D\)-dimensional vectors, i.e., \(Z^{i}=\left(z_{1}^{i},z_{2}^{i},...,\ z_{T}^{i}\right)\). The InfoNCE loss is considered in this paper: \[L=-\sum_{t=1}^{T}\log\frac{\exp(\text{sim}(\hat{z}_{t},z_{t}^{1})/\tau)}{\sum_{i=1}^{N}\exp(\text{sim}\left(\hat{z}_{t},z_{t}^{i}\right)/\tau)} \tag{1}\] where \(\text{sim}(\cdot)\) is the Pearson correlation between two vectors, and \(\tau\) is the temperature parameter in InfoNCE [23]. Intuitively, this loss is the log loss of an N-way softmax-based classifier that tries to classify \(Z\) as \(Z^{1}\).
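A minimal PyTorch-style sketch of the contrastive objective in Eq. (1) is given below. This is my own illustration rather than the authors' code; it assumes the reconstructed segment `z_hat` has shape `(T, D)`, the candidate segments `z` have shape `(N, T, D)` with the positive sample at index 0, and it uses the Pearson correlation along the embedding dimension (cosine similarity of mean-centered vectors) as \(\text{sim}(\cdot)\).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_hat, z, tau=0.025):
    """z_hat: (T, D) reconstructed embeddings; z: (N, T, D) candidates, index 0 = positive."""
    # Pearson correlation along D = cosine similarity of mean-centered vectors.
    z_hat_c = z_hat - z_hat.mean(dim=-1, keepdim=True)            # (T, D)
    z_c = z - z.mean(dim=-1, keepdim=True)                        # (N, T, D)
    sim = F.cosine_similarity(z_hat_c.unsqueeze(0), z_c, dim=-1)  # (N, T)
    logits = (sim / tau).transpose(0, 1)                          # (T, N)
    targets = torch.zeros(logits.shape[0], dtype=torch.long)      # positive is candidate 0
    # Cross-entropy over the N candidates, summed over time, matches Eq. (1).
    return F.cross_entropy(logits, targets, reduction="sum")

# Toy usage with random tensors (T time samples, D embedding dimensions, N candidates).
T, D, N = 400, 1024, 128
loss = info_nce_loss(torch.randn(T, D), torch.randn(N, T, D))
print(loss.item())
```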
### Word sequence generation The most likely word sequence can be estimated using a beam search algorithm [6]. Specifically, the beam contains \(k\) candidates. For each candidate, the language model uses the words generated within the last 8 s, \((s_{n-i},..,s_{n-1})\), to predict the next word distribution \(P(s_{n}|s_{n-i},..,s_{n-1})\) over the decoder vocabulary. Nucleus sampling is used to identify subsequent words that belong to the top \(p\) fraction of the probability mass and have a probability within a factor \(r\) of the most likely word. For each continuation \(s_{n}\), a sequence \((s_{n-L+1},..,s_{n})\) consisting of the word itself and its preceding \(L-1\) words is fed into the PLM to get the word embedding for \(s_{n}\). Given the neural response \(X\), the CWER model outputs an estimated embedding \(\hat{Z}\). With the known onset time \(t_{on}\) and offset time \(t_{off}\) of the next word, the reconstructed word embedding for the continuations is given as: \[\hat{z}=\frac{1}{t_{off}-t_{on}}\sum_{t=t_{on}}^{t_{off}}\hat{z}_{t} \tag{2}\] As a result, the Pearson correlation between \(\hat{z}\) and the embeddings of all continuations is measured and used as the next-word probability. The \(k\) most likely continuations across all candidates are retained in the beam. After iterating through all the words along time, the candidate with the highest probability is output as the decoded sequence.
Figure 2: Architecture of the CWER model.
## 3 Experiments ### Dataset The dataset used is from the Donders Institute for Brain, Cognition and Behaviour [24]. The MEG data were recorded with a 275-channel axial gradiometer CTF system at a sampling rate of 1200 Hz from 3 participants, while they listened to audiobooks of _The Adventures of Sherlock Holmes_ in English. For each participant, the data were recorded in 10 separate sessions, consisting of 66 trials in total, with a total duration of approximately 10 hours. ### MEG preprocessing and continuous word embedding preparation The MEG data underwent average re-referencing and were subjected to band-pass filtering (1-40 Hz), notch filtering (49-51, 99-101, and 149-151 Hz), and down-sampling to 120 Hz. Subsequently, independent component analysis (ICA) was applied for each trial to remove eye-blink artifacts. In our pilot study, we observed that MEG exhibited the highest efficiency in predicting word embedding within the low-frequency range. The MEG data were further low-pass filtered at a cut-off frequency of 4 Hz, downsampled to 40 Hz to reduce computational costs, and standardized along the time axis for each sensor. To get continuous word embedding, each word along with its preceding \(L-1\) words is fed into GPT [25]. The hidden output of the 9th layer was used as the word embedding. The GPT was initialized with the weights of the model used in [6] and fine-tuned on another 4 books (_The Case-Book of Sherlock Holmes, The Memoirs of Sherlock Holmes, The Return of Sherlock Holmes_ and _His Last Bow_) from _Conan Doyle_. The temporal onset and offset of each spoken word were included in the dataset. To create continuous word embedding, a multivariate time series was constructed with each word embedding filling in its corresponding time slot. To match the MEG data, the continuous word embedding was at a sampling rate of 40 Hz and 4 Hz low-pass filtered. ### Experiment setup For each subject, the MEG data from session 4 (8 trials) were used as the testing set, and the data from the remaining 9 sessions (58 trials) were used as the training set.
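To make the continuation-scoring step of the beam search described above more concrete, here is a small self-contained sketch. It is my own illustration, not the authors' implementation: the probability vector, continuation embeddings, and reconstructed embedding \(\hat{z}\) are toy stand-ins, and the function name and array shapes are my own choices. It applies the nucleus filter with parameters \(p\) and \(r\) and scores the surviving continuations by their Pearson correlation with \(\hat{z}\).

```python
import numpy as np

def score_continuations(next_word_probs, cont_embeddings, z_hat, p=0.9, r=0.1):
    """next_word_probs: (V,) LM distribution; cont_embeddings: (V, D) PLM embeddings
    of each possible next word; z_hat: (D,) reconstructed embedding for this word slot."""
    order = np.argsort(next_word_probs)[::-1]
    cum = np.cumsum(next_word_probs[order])
    nucleus = order[: np.searchsorted(cum, p) + 1]               # top-p probability mass
    nucleus = nucleus[next_word_probs[nucleus] >= r * next_word_probs[order[0]]]
    # Pearson correlation between z_hat and each surviving continuation embedding.
    zc = z_hat - z_hat.mean()
    ec = cont_embeddings[nucleus] - cont_embeddings[nucleus].mean(axis=1, keepdims=True)
    corr = ec @ zc / (np.linalg.norm(ec, axis=1) * np.linalg.norm(zc))
    return nucleus, corr        # candidate word ids and their next-word scores

# Toy usage: a 3660-word decoder vocabulary and 1024-dimensional embeddings.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.full(3660, 0.05))
ids, scores = score_continuations(probs, rng.normal(size=(3660, 1024)), rng.normal(size=1024))
print(len(ids), ids[np.argmax(scores)])
```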
To compensate the delay between stimulus and its corresponding brain response, the MEG data were shifted backward by 0.25 s firstly. The MEG data and the paired continuous word embeddings in the training set were further split into segments with a duration of 10 seconds and an overlap of 80%. The data in the testing set were split into segments with a duration of 10 seconds without overlapping. During the training of the CWER model, an Adam optimizer with a learning rate of \(5\times 10^{-5}\) was used. The batch size was set to 32, and the negative sample \(N\) was set to 128. The hidden size for both _D_\({}_{1}\) and _D_\({}_{2}\) was set to 256. The dropout rate for residual convolutional layers was set to 0.5. The temperature parameter in InfoNCE was set to 0.025. Training was stopped when no loss reduction was found for 2 consecutive training epochs in the testing set. For comparison, a CWER model without subject layer and three subject-specific CWER models for every subject were also trained. Besides, a linear model was used as the baseline. The linear mapping was trained with ridge regression to reconstruct word embedding \(\boldsymbol{z}_{t}\) from the time series of MEG data \(\{\boldsymbol{x}_{t+\tau_{1}},...,\boldsymbol{x}_{t},...,\boldsymbol{x}_{t+ \tau_{2}}\}\). Here, \(\tau_{1}\) and \(\tau_{2}\) were set to \(-40\) and \(60\), corresponding to 1-s before and 1.5-s after stimuli respectively, as a previous study suggested that brain signal robustly encoded upcoming words starting \(-1\)-s before and ending 1.5-s after the onset of spoken words [5]. The ridge parameter was determined by cross-validation procedure with a grid search (20 grid values were logarithmically spaced ranging from \(10^{-3}\) to \(10^{5}\)). More details about the linear model can be found in [26]. For comparison, a common linear model and three subject-specific linear models were trained, respectively. All the models are implemented with Pytorch. During the beam search decoding, the fine-tuned GPT in _Continuous word embedding preparing_ was served as the language model and the word embedding extraction model. The beam width was set to 200, and the nucleus sampling parameters \(p\) and \(r\) were set to 0.9 and 0.1, respectively. The words that occurred at least twice in the train set were selected, forming a decoder vocabulary consisting of 3,660 unique words. The constraint on the continuation number for each candidate is the same as that in [6]. ### Evaluation #### 3.4.1 Segment-level evaluation The evaluation at the segment level was treated as an assessment of a retrieval task. With the trained CWER model, the continuous word embedding was reconstructed for each trial in the testing set. These reconstructed continuous word embeddings were then split into segments with a duration of 3, 5, and 10 s without overlapping, resulting in 1,210, 723, and 359 segments for each duration condition, respectively. The same segmentation manipulation was also applied to the true continuous word embeddings, thereby creating a set of candidates. For each reconstructed segment, the Pearson correlations between the reconstructed segment and each segment in the candidate set were calculated, and then the top 10 segments in the candidate set with the highest correlation were retrieved. Top-10 accuracy was defined as the percentage of reconstructed segments whose target segment was in the retrieval set. Meanwhile, rank accuracy was also calculated as the previous study did [7]. 
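The time-lagged linear baseline described above can be sketched as follows (my own illustration using scikit-learn, not the authors' implementation): lagged copies of the MEG channels from \(\tau_{1}\) to \(\tau_{2}\) samples form the design matrix, and a cross-validated ridge regression with the stated grid of 20 log-spaced penalties maps them to the embedding dimensions. Toy sizes are used so the example runs quickly.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

def lagged_design(X, tau1, tau2):
    """X: (T, C) MEG. Returns (T, C*(tau2-tau1+1)) matrix whose row t stacks
    x_{t+tau1}, ..., x_{t+tau2} (np.roll wraps at the edges; trim these in practice)."""
    return np.concatenate([np.roll(X, -lag, axis=0) for lag in range(tau1, tau2 + 1)], axis=1)

# Toy sizes for a quick run; the paper uses 275 channels and tau1 = -40, tau2 = 60 at 40 Hz.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 30))      # MEG segment (time x channels)
Z = rng.normal(size=(2000, 64))      # target word embeddings (time x dimensions)

model = RidgeCV(alphas=np.logspace(-3, 5, 20))   # 20 log-spaced ridge penalties
model.fit(lagged_design(X, -8, 12), Z)
print(model.alpha_, model.predict(lagged_design(X, -8, 12)).shape)
```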
For each reconstructed segment, the rank of its target segment among the candidate set was normalized into rank accuracy using equation (3). \[\text{rank}_{\text{acc}}=1-\frac{\text{rank}-1}{\text{segment number}-1} \tag{3}\] #### 3.4.2 Sequence-level evaluation To assess the quality of the generated text, BERTScore [27] was used to evaluate semantic similarity between the generated text and the target text. Window similarity was measured by scoring the predicted and target words with BERTScore within a 20-s window around every second of a trial. Trial similarity was then calculated by averaging BERTScore across all windows in a trial. To obtain the chance level of the decoding score, null sequences were generated by using only the language model. During beam search decoding, the next word probability was randomly assigned instead of being estimated from MEG data. For each trial in the testing set, 500 null sequences were generated. The _p_-value was given by the proportion of null distribution scores that were higher than the brain-decoded score. The decoding accuracy for trials (windows) was defined as the percentage of trials (windows) whose _p_-value was smaller than 0.05. ## 4 Results and discussion The segment-level accuracy for each model as a function of the decoding duration is shown in Table 1. The accuracy for the random model was calculated by using a random continuous word embedding and the results were averaged over 500 runs. As expected, the accuracies of the common linear model were higher than those of the random model, indicating that the semantic features could be reconstructed from MEG with a linear decoder. The subject-specific linear model outperformed the common linear model, suggesting significant individual differences among the subjects. Our common CWER model (without subject layer) and subject-specific CWER model outperformed the common linear model and subject-specific linear model, respectively. This showed that the utilization of a nonlinear network can enhance decoding accuracy. Moreover, the CWER model outperformed both the common CWER model and the subject-specific CWER model, which revealed that the CWER model can effectively leverage both subject-specific and subject-shared information. Compared to the previous studies of semantic decoding with MEG or fMRI [7, 10, 11, 12, 13, 14, 28], the performance of our proposed models showed a promising advantage in the segment-level evaluation. The decoding performance improvement can be attributed to the large amount of training data and the nonlinear CWER model used in the current work. To determine whether words can be distinguished as concrete or abstract from the reconstructed word embeddings, 24 concrete words and 25 abstract words from the testing set were selected. The target and reconstructed word embeddings were decomposed via a 2-dimensional tSNE [29]. The result is shown in Fig. 3. As expected, the concrete words and abstract words can be well distinguished in the tSNE space of the reconstructed word embeddings, confirming the good performance on semantic reconstruction from MEG signals. ### Word sequence generation Metric scores for the sequence-level evaluation of each model are shown in Table 2. Scores for Null were computed by averaging metrics for 500 null sequences. For the CWER model, the averaged BERTScore was 0.816, which was higher than the score for the null model and was also higher than that in the previous fMRI work where a BERTScore of 0.810 was reported [6].
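The segment-level retrieval metrics and the sequence-level significance test described above reduce to a few lines of array code. The sketch below is my own illustration operating on toy arrays (random data in place of real reconstructions and BERTScores): it computes top-10 and rank accuracy from a segment-by-segment correlation matrix, and the decoding accuracy from a null-score distribution.

```python
import numpy as np

def segment_metrics(recon, target, top_k=10):
    """recon, target: (S, D) flattened segments. Correlate every reconstructed segment
    with every candidate, then score retrieval of the matching (diagonal) segment."""
    S = len(recon)
    r = np.corrcoef(recon, target)[:S, S:]                         # (S, S) cross-correlations
    rank = (r > np.diag(r)[:, None]).sum(axis=1) + 1               # rank of the true segment
    top_k_acc = np.mean(rank <= top_k)
    rank_acc = np.mean(1 - (rank - 1) / (S - 1))                   # Eq. (3), averaged over segments
    return top_k_acc, rank_acc

def decoding_accuracy(decoded_scores, null_scores, alpha=0.05):
    """decoded_scores: (W,) BERTScores of brain-decoded windows; null_scores: (W, M)
    scores of M null sequences per window. p = fraction of null scores >= decoded."""
    p = (null_scores >= decoded_scores[:, None]).mean(axis=1)
    return np.mean(p < alpha)

rng = np.random.default_rng(0)
recon, target = rng.normal(size=(359, 50)), rng.normal(size=(359, 50))
print(segment_metrics(recon + 0.5 * target, target))
print(decoding_accuracy(rng.normal(0.816, 0.01, 200), rng.normal(0.812, 0.01, (200, 500))))
```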
The similarity between the generated and target text was significant for all trials, indicating effective semantic reconstruction from the brain. Additionally, the CWER model outperformed the CWER model without a subject layer, which revealed that improving the continuous word embedding reconstruction can help generate text with better contextual similarity. The decoding accuracy was 20.8% for windows, which indicates that 20.8% of windows had a significant BERTScore. This value was lower than that in [6], where 72-82% of timepoints had a BERTScore significantly higher than chance level. This is because the semantic information of language is spatially distributed in the cerebral cortex [16], while the previous study utilized fMRI data, which has a significantly higher spatial resolution than the MEG data used in this study. It should be noted that the actual timing information of words was used during sequence generation. However, in BCI applications, the timing information of words is typically unknown. In such instances, a model can be developed to predict the onset of words from MEG data, as in the methods used in previous studies [4, 6]. Meanwhile, given the significant advantage of MEG over fMRI in temporal resolution, MEG is more effective at extracting highly dynamic acoustic features or articulatory movements. This information could assist in generating text from the brain and is worth investigating in future work. ## 5 Conclusion In the present work, we proposed a framework to reconstruct the semantics of language from MEG data. The semantic feature reconstruction performance was greatly improved by using a neural network that leveraged both subject-specific and subject-shared information. By using a beam search language model, text sequences were generated from the reconstructed features. The generated sequences demonstrated significant similarity with the target sequences. The results of this work support semantic decoding from non-invasive MEG. ## 6 Acknowledgements This work was supported by the National Key Research and Development Program of China (No.2021ZD0201503), the National Natural Science Foundation of China (No.12074012), and the High-performance Computing Platform of Peking University. \begin{table} \begin{tabular}{l c c c c c c} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Top-10 acc (\%)} & \multicolumn{3}{c}{Rank acc (\%)} \\ \cline{2-7} & \multicolumn{3}{c}{Duration (s)} & \multicolumn{3}{c}{Duration (s)} \\ \cline{2-7} & 3 & 5 & 10 & 3 & 5 & 10 \\ \hline Random & 0.1 & 1.3 & 2.7 & 50.1 & 50.1 & 50.2 \\ Linear (common) & 10.7 & 25.8 & 64.8 & 82.5 & 88.4 & 95.3 \\ Linear (s. s.) & 17.2 & 39.2 & 83.2 & 87.0 & 92.4 & 97.8 \\ CWER (w.o. s. l.) & 24.4 & 49.7 & 85.4 & 85.0 & 93.0 & 98.1 \\ CWER (s. s.) & 26.3 & 50.2 & 85.6 & **90.7** & 94.5 & **98.3** \\ CWER & **30.0** & **55.7** & **89.0** & 90.3 & **94.7** & **98.3** \\ \hline \end{tabular} \end{table} Table 1: Decoding accuracy for the segment-level evaluation of each model. (s. s.: subject-specific; w.o. s. l.: without subject layer) \begin{table} \begin{tabular}{c c c c} \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Score} & \multicolumn{2}{c}{Acc (\%)} \\ \cline{3-4} & & Window & Trial \\ \hline Null & 0.812 & \(-\) & \(-\) \\ CWER (w.o. s. l.) & 0.815 & 18.1 & 79.8 \\ CWER & **0.816** & **20.8** & **100** \\ \hline \end{tabular} \end{table} Table 2: Similarity scores between the generated word sequences and the target word sequences, and the decoding accuracy for windows and trials. The scores and accuracies were averaged across all windows or trials of all subjects. Figure 3: tSNE plot visualizing the abstract and concrete words for the target and the reconstructed word embeddings.
Decoding language from neural signals plays an important role both theoretically and practically. Previous studies have shown that decoding text or speech from invasive intracranial signals is feasible. However, using non-invasive neural signals raises challenges due to their low signal quality. In this study, we proposed a data-driven approach based on magnetoencephalography (MEG) signals recorded while subjects listened to continuous speech. First, a multi-subject decoding model was trained with contrastive learning, aiming to reconstruct continuous word embeddings from the MEG data. A beam search algorithm was then adopted to generate text sequences based on the reconstructed word embeddings. For the candidate sentences in the beam, a language model was used to predict the next word, and the word embedding of the next word was associated with the reconstructed word embedding
2309.08432
Topological K-theory of quasi-BPS categories of symmetric quivers with potential
In previous works, we introduced and studied certain categories called quasi-BPS categories associated to symmetric quivers with potential, preprojective algebras, and local surfaces. They have properties reminiscent of BPS invariants/ cohomologies in enumerative geometry, for example they play important roles in categorical wall-crossing formulas. In this paper, we make the connections between quasi-BPS categories and BPS cohomologies more precise via the cycle map for topological K-theory. We show the existence of filtrations on topological K-theory of quasi-BPS categories whose associated graded are isomorphic to the monodromy invariant BPS cohomologies. Along the way, we also compute the topological K-theory of categories of matrix factorizations in terms of the monodromy invariant vanishing cycles (a version of this comparison was already known by work of Blanc-Robalo-To\"en-Vezzosi), prove a Grothendieck-Riemann-Roch theorem for matrix factorizations, and prove the compatibility between the Koszul equivalence in K-theory and dimensional reduction in cohomology. In a separate paper, we use the results from this paper to show that the quasi-BPS categories of K3 surfaces recover the BPS invariants of the corresponding local surface, which are Euler characteristics of Hilbert schemes of points on K3 surfaces.
Tudor Pădurariu, Yukinobu Toda
2023-09-15T14:32:44
http://arxiv.org/abs/2309.08432v2
# Topological K-theory of quasi-BPS categories of symmetric quivers with potential ###### Abstract. In previous work, we studied quasi-BPS categories (of symmetric quivers with potential, of preprojective algebras, of surfaces) and showed they have properties analogous to those of BPS invariants/ cohomologies. For example, quasi-BPS categories are used to formulate categorical analogues of the PBW theorem for cohomological Hall algebras (of Davison-Meinhardt) and of the Donaldson-Thomas/BPS wall-crossing for framed quivers (of Meinhardt-Reineke). The purpose of this paper is to make the connections between quasi-BPS categories and BPS cohomologies more precise. We compute the topological K-theory of quasi-BPS categories for a large class of symmetric quivers with potential. In particular, we compute the topological K-theory of quasi-BPS categories for a large class of preprojective algebras, which we use (in a different paper) to compute the topological K-theory of quasi-BPS categories of K3 surfaces. A corollary is that there exist quasi-BPS categories with topological K-theory isomorphic to BPS cohomology. We also compute the topological K-theory of categories of matrix factorizations for smooth affine quotient stacks in terms of the monodromy invariant vanishing cohomology, prove a Grothendieck-Riemann-Roch theorem for matrix factorizations, and check the compatibility between the Koszul equivalence in K-theory and dimensional reduction in cohomology. ## 1. Introduction The BPS invariants are integer virtual counts of semistable (compactly supported) coherent sheaves on a smooth complex Calabi-Yau 3-fold. They are fundamental enumerative invariants which determine many other enumerative invariants of interest for Calabi-Yau 3-folds, such as Gromov-Witten, Donaldson-Thomas (DT), or Pandharipande-Thomas invariants [14, Section 2 and a half]. Let \(X\) be a smooth Calabi-Yau 3-fold, let \(v\in H^{\cdot}(X,\mathbb{Z})\), let \(\sigma\) be a stability condition, let \(M^{\sigma}_{X}(v)\) be the good moduli space of \(\sigma\)-semistable (compactly supported) sheaves on \(X\) of support \(v\), and let \(\Omega^{\sigma}_{X}(v)\) be the corresponding BPS invariant. An important problem in enumerative algebraic geometry is to define a natural BPS cohomology theory for \(M^{\sigma}_{X}(v)\) which recovers \(\Omega^{\sigma}_{X}(v)\) as its Euler characteristic. Even more, one could attempt to construct a natural dg-category \[\mathcal{B}\mathcal{P}\mathcal{S}^{\sigma}_{X}(v) \tag{1.1}\] which recovers a 2-periodic version of BPS cohomology (via periodic cyclic homology or topological K-theory [1]), and thus also the BPS invariant \(\Omega^{\sigma}_{X}(v)\). The BPS cohomology, the BPS category (1.1), and the K-theory of (1.1) (BPS K-theory) are alternatives to their classical counterparts for \(M^{\sigma}_{X}(v)\). By dimensional reduction, one also obtains a BPS cohomology/ category/ K-theory for moduli of semistable sheaves on a surface. 
One may hope that the constructed BPS theories are more tractable than their classical counterparts, and that they will have applications in

###### Contents

such as a BPS cohomology theory which recovers \(\Omega^{\sigma}_{X}(v)\) as its Euler characteristic. Davison-Meinhardt [10] defined BPS cohomology for all symmetric quivers with potentials (thus for all local models), and Davison-Hennecart-Schlegel Mejia [12] defined it for \(X=\operatorname{Tot}_{S}K_{S}\), where \(S\) is a Calabi-Yau surface. For a general CY 3-fold, up to the existence of a certain orientation data, the BPS cohomology is defined in [14, Definition 2.11].
By dimensional reduction, we also regard BPS cohomology as a cohomology theory for good moduli spaces of objects in categories of dimension 2, for example the good moduli spaces \(P(d)\) of the classical truncation of the quasi-smooth stack \(\mathcal{P}(d)\) of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\). For a quiver \(Q^{\circ}\), there is a perverse sheaf \[\mathcal{BPS}^{p}_{d}\in\operatorname{Perv}(P(d))\] whose cohomology is the BPS cohomology of the preprojective algebra of \(Q^{\circ}\). ### Categorical DT theory We are interested in constructing a category (1.1) which recovers (and has analogous properties to) the BPS invariants/ cohomology. If there are no strictly \(\sigma\)-semistable sheaves of support \(v\), such a category will recover the DT invariants by taking the Euler characteristic of its periodic cyclic homology, see [14] for a definition for local surfaces and [HHR] for work in progress addressing the general case. In previous work, we introduced and studied quasi-BPS categories: * for symmetric quivers with potential (1.2), * for preprojective algebras, which we denote by \[\mathbb{T}(d)_{v}\subset D^{b}(\mathcal{P}(d)), \tag{1.4}\] * for points on smooth surfaces [13, PTc], and * for semistable sheaves on K3 surfaces [PTd]. These categories have analogous properties to BPS cohomology. Indeed, there are semiorthogonal decompositions of the categorical Hall algebras (of symmetric quivers with potential, and thus also of preprojective algebras, or of K3 surfaces) [13, Theorem 1.1] and [14, PTe, PTd], or of Donaldson-Thomas categories (of symmetric quivers with potential) [PTa, PTe] in products of quasi-BPS categories. These semiorthogonal decompositions are analogous to the PBW theorem for cohomological Hall algebras [10, DHSMb], or to the DT/ BPS wall-crossing of Meinhardt-Reineke for framed quivers [15]. For weights \(v\in\mathbb{Z}\) as in Theorem 1.1, we proved categorical versions of the Davison support lemma for BPS sheaves [14, PTe, PTd]. However, we observed in [14] that quasi-BPS categories do not categorify BPS cohomology for every \(v\in\mathbb{Z}\). ### Matrix factorizations and vanishing cycles Locally, BPS sheaves are vanishing cycles of IC sheaves of coarse spaces of smooth quotient stacks, see (1.9). We thus first study vanishing cycles for a regular function \[f\colon\mathcal{X}\to\mathbb{C},\] where \(\mathcal{X}=X/G\) is a smooth quotient stack, where \(G\) is a reductive group and \(X\) is a smooth affine variety. It is well-known that the category of matrix factorizations \(\operatorname{MF}(\mathcal{X},f)\) is a categorification of vanishing cohomology \(H^{\cdot}\left(\mathcal{X},\varphi_{f}\mathbb{Q}_{\mathcal{X}}\right)\), see [10, 11]. Let \(\mathrm{T}\) be the monodromy operator and let \(\varphi_{f}^{\mathrm{inv}}\mathbb{Q}_{\mathcal{X}}\) be the cone of the endomorphism \(1-\mathrm{T}\) on \(\varphi_{f}\mathbb{Q}_{\mathcal{X}}\). Inspired by [11, 12, 13], we construct a Chern character map for \(f\) quasi-homogeneous: \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))\to\bigoplus_{j\in\mathbb{Z}}H^{i+2j}(\mathcal{X},\varphi_{f}^{\operatorname{inv}}\mathbb{Q}_{\mathcal{X}}), \tag{1.5}\] which is an isomorphism if \(\mathcal{X}\) is a variety, see (4.14).
We note that the construction of (1.5) is fairly elementary: by the Koszul equivalence and dimensional reduction, both sides are isomorphic to relative theories; under this identification, (1.5) is the Chern character from relative topological K-theory to relative singular cohomology. The Chern character map (1.5) induces a cycle map on an associated graded of topological K-theory: \[\operatorname{c}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{X},f))\to H^{2\dim\mathcal{X}(d)-i-2\ell}( \mathcal{X},\varphi_{f}^{\operatorname{inv}}\mathbb{Q}_{\mathcal{X}}[-2]). \tag{1.6}\] In Section 4, we discuss functoriality of (1.5) and (1.6), in particular we prove a Grothendieck-Riemann-Roch theorem, see Theorem 4.7. ### Quasi-BPS categories for symmetric quivers with potential We briefly explain the construction of the quasi-BPS categories (1.2). Consider a symmetric quiver \(Q\) with potential \(W\). For any \(v\in\mathbb{Z}\), Spenko-Van den Bergh [10] constructed twisted non-commutative resolutions \[\mathbb{M}(d)_{v}\subset D^{b}(\mathcal{X}(d))_{v} \tag{1.7}\] of \(X(d)\). The category \(\mathbb{M}(d)_{v}\) is generated by certain vector bundles corresponding to lattice points inside a polytope. Then \[\mathbb{S}(d)_{v}:=\operatorname{MF}(\mathbb{M}(d)_{v},\operatorname{Tr}W) \subset\operatorname{MF}(\mathcal{X}(d),\operatorname{Tr}W)\] is the category of matrix factorizations \((\alpha\colon A\rightleftarrows B\colon\beta)\), where \(A,B\) are direct sums of the generating vector bundles of \(\mathbb{M}(d)_{v}\), and \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(\operatorname{Tr}W\). For \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) and \(v\in\mathbb{Z}\), we define a set \(S_{v}^{d}\) of partitions of \(d\) from the combinatorics of the polytope used to define (1.7). For each partition \(A\in S_{v}^{d}\), there is a corresponding constructible sheaf \(\mathcal{BPS}_{A}\). Let \[\mathcal{BPS}_{d,v}:=\bigoplus_{A\in S_{v}^{d}}\mathcal{BPS}_{A}. \tag{1.8}\] If \(\gcd{(\underline{d},v)}=1\) and \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex, then \(S_{v}^{d}\) consists only of the one term partition of \(d\) and then \[\mathcal{BPS}_{d,v}=\mathcal{BPS}_{d}:=\begin{cases}\varphi_{\operatorname{ Tr}W}\mathrm{IC}_{X(d)}[-1],\text{ if }R(d)^{\operatorname{st}}\neq\emptyset,\\ 0,\text{ otherwise.}\end{cases} \tag{1.9}\] There is thus a monodromy action on the cohomology of \(\mathcal{BPS}_{d,v}\) induced from the monodromy of vanishing cycles. **Theorem 1.2**.: (Theorems 6.2 and 6.3) _Let \(Q\) be a symmetric quiver, let \(W\) be a quasi-homogeneous potential, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\). For any \(i,\ell\in\mathbb{Z}\), there is an injective cycle map induced from (1.6):_ \[\operatorname{c}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}} \left(\mathbb{S}(d)_{v}\right)\hookrightarrow H^{\dim\mathcal{X}(d)-2\ell-i}( X(d),\mathcal{BPS}_{d,v}^{\operatorname{inv}}[-1]). \tag{1.10}\] _If \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex, then the map (1.10) is an isomorphism._ A first ingredient in the proof of Theorem 1.2 is the explicit computation of the pushforward \(\pi_{*}\mathrm{IC}_{\mathfrak{X}(d)}\) as a sum of shifted perverse sheaves, where \(\pi\colon\mathcal{X}(d)\to X(d)\) is the good moduli space map, due to Meinhardt-Reineke [10] and Davison-Meinhardt [11]. 
The main ingredient in the proof of Theorem 1.2 is the construction of a cycle map from the topological K-theory of quasi-BPS category to BPS cohomology, see Theorem 6.3. We construct coproduct-like maps on the topological K-theory of the quasi-BPS category \(K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\) which we use to restrict the image of \(\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\subset\mathrm{gr}_{ \ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X}(d),\mathrm{Tr}\,W))\) under (1.6). Finally, to show that (1.10) is an isomorphism under the hypothesis above, we use a categorification of the (Meinhardt-Reineke) \(\mathrm{DT}/\mathrm{BPS}\) wall-crossing, which we prove in [PTe]. The case of zero potential of Theorem 1.2 is related to the categorification of intersection cohomology for good moduli spaces of smooth symmetric stacks pursued in [2]. **Theorem 1.3**.: (Theorem 6.6 and Corollary 6.7) _Assume \(Q\) has an even number of edges between any two different vertices and an odd number of loops at every vertex. Let \(d\in\mathbb{N}^{I}\) and let \(v\in\mathbb{Z}\) such that \(\gcd{(\underline{d},v)}=1\). Then \(K_{1}^{\mathrm{top}}(\mathbb{M}(d)_{v})=0\) and there is an isomorphism of \(\mathbb{Q}\)-vector spaces for all \(\ell\in\mathbb{Z}\):_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v}) \xrightarrow{\sim}\mathrm{IH}^{\dim\mathcal{X}(d)-2\ell-1}(X(d)).\] ### Quasi-BPS categories for preprojective algebras Theorem 1.2 can be also used, in conjunction with dimensional reduction, to compute the topological K-theory of quasi-BPS categories for preprojective algebras. For a quiver \(Q^{\circ}\), consider its tripled quiver with potential \((Q,W)\). The subcategory (1.4) is Koszul equivalent [15] to the subcategory of graded matrix factorizations with summands in \(\mathbb{M}(d)_{v}\): \[\mathbb{S}^{\mathrm{gr}}(d)_{v}:=\mathrm{MF}^{\mathrm{gr}}\left(\mathcal{X}(d ),\mathrm{Tr}\,W\right).\] We define constructible sheaves \(\mathcal{BPS}^{p}_{d,v}\) on \(P(d)\) as in (1.8). There is a cycle map \[\mathrm{c}\colon\mathrm{gr}_{\ell}G_{0}^{\mathrm{top}}(\mathcal{P}(d))\to H _{2\ell}^{\mathrm{BM}}(\mathcal{P}(d)) \tag{1.11}\] induced from the Chern character map of \(\mathcal{P}(d)\). **Theorem 1.4**.: (Corollary 7.3 and Theorem 7.6) _Let \(Q^{\circ}\) be a quiver, let \(d\in\mathbb{N}^{I}\), and let \(v,\ell\in\mathbb{Z}\). Then \(K_{1}^{\mathrm{top}}(\mathbb{T}(d)_{v})=0\) and the cycle map (1.11) induces an injective map:_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}) \hookrightarrow H^{-2\ell}(P(d),\mathcal{BPS}^{p}_{d,v}). \tag{1.12}\] _Next, assume that for any two different vertices of \(Q^{\circ}\), there is an even number of unoriented edges between them. Then the map (1.12) is an isomorphism._ The preprojective algebras which locally model the moduli of semistable sheaves on a K3 surface are of quivers \(Q^{\circ}\) with the property that, for any two different vertices, there is an even number of unoriented edges between them. In Section 9, we prove a version of Theorem 1.4 for etale covers of stacks of representations of preprojective algebra of such quivers, which suffices to compute the topological K-theory of quasi-BPS categories of K3 surfaces [PTd]. ### Weight-independence We revisit the discussion from Subsection 1.4, but the same observations apply in the setting of Subsection 1.5. 
Let \(Q=(I,E)\) be a quiver with an even number of edges between any two different vertices and an odd number of loops at every vertex, and let \(W\) be a quasi-homogeneous potential of \(Q\). Note that there are equivalences, where \(k\in\mathbb{Z}\): \[\mathbb{S}(d)_{v}\simeq\mathbb{S}(d)_{v+k\underline{d}},\ \mathbb{S}(d)_{v} \simeq\mathbb{S}(d)_{-v}^{\mathrm{op}}\] given by tensoring with the \(k\)th power of the determinant line bundle and by taking the derived dual, respectively. There are no obvious other relations between \(\mathbb{S}(d)_{v}\) and \(\mathbb{S}(d)_{v^{\prime}}\) for \(v,v^{\prime}\in\mathbb{Z}\). However, by Theorem 1.4, we obtain: **Corollary 1.5**.: _Let \(v,v^{\prime}\in\mathbb{Z}\) be such that \(\gcd(v,\underline{d})=\gcd(v^{\prime},\underline{d})\). Let \(i\in\mathbb{Z}\). Then there is an equality of dimensions:_ \[\dim_{\mathbb{Q}}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})=\dim_{\mathbb{Q}}K_{ i}^{\mathrm{top}}(\mathbb{S}(d)_{v^{\prime}}).\] Note that the statement is reminiscent to the \(\chi\)-independence phenomenon [10], [11], see especially [11, Corollary 1.5]. We observed an analogous statement for quasi-BPS categories of K3 surfaces in [20]. We do not know whether a stronger categorical statement, or at the level of algebraic K-theory, should hold for quivers with potential, see [20, Conjecture 1.4] for a conjecture in the case of K3 surfaces. It is natural to ask whether one can use a coproduct to define a primitive part \(\mathrm{P}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\subset K_{i}^{\mathrm{top}}( \mathbb{S}(d)_{v})\) of dimension equal to the dimension of the (total) monodromy invariant BPS cohomology, and thus independent of \(v\in\mathbb{Z}\). We defined such spaces in the localized equivariant algebraic K-theory for the tripled quiver with potential in [20]. We do not pursue this idea further in this paper. ### Complements In Section 3 we review the Chern character, the cycle map, and the (topological) Grothendieck-Riemann-Roch theorem for quotient stacks. In Section 5, we compare the Koszul equivalence [14] for dg-categories (and its induced isomorphism in K-theory) with the dimensional reduction theorem in cohomology [13]. In particular, we construct a Chern character map from the topological K-theory of a class of graded matrix factorizations to vanishing cohomology. In Section 8, we discuss some explicit computations of the topological K-theory of quasi-BPS categories. We mention two examples. First, let \(g\geqslant 0\). The coarse space of representations of the \(2g+1\) loop quiver is the variety of matrix invariants \(X(d)=\mathfrak{gl}(d)^{2g+1}/\!\!/G(d)\). By Theorem 1.3, we obtain a combinatorial formula for the dimensions of the intersection cohomology \(\mathrm{IH}^{\bullet}(X(d))\), which recovers a formula of Reineke [15]. Second, we compute the topological K-theory for quasi-BPS categories of points in \(\mathbb{C}^{3}\), see Proposition 8.11. It would be interesting to extend the methods in this paper and obtain computations beyond the case of quivers satisfying Assumption 2.1. ### Acknowledgements We thank Andrei Okounkov and Spela Spenko for useful discussions. T. P. is grateful to Columbia University in New York and to Max Planck Institute for Mathematics in Bonn for their hospitality and financial support during the writing of this paper. Y. T. is supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and Grant-in Aid for Scientific Research grant (No. 19H01779) from MEXT, Japan. 
## 2. Preliminaries For a \(\mathbb{Z}\)-graded space \(V=\bigoplus_{j\in\mathbb{Z}}V^{j}\), let \(\widetilde{V}^{i}:=\prod_{j\in\mathbb{Z}}V^{i+2j}\). For a set \(S\), let \(\#S\) be the cardinal of \(S\). We list the main notation used in the paper in Table (1). Figure 1. Notation introduced in the paper ### Stacks and semiorthogonal decompositions The spaces \(\mathcal{X}=X/G\) considered are quasi-smooth (derived) quotient stacks over \(\mathbb{C}\), where \(G\) is a reductive group. The classical truncation of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{\mathrm{cl}}=X^{\mathrm{cl}}/G\). We assume that \(X^{\mathrm{cl}}\) is quasi-projective. We denote by \(\mathbb{L}_{\mathcal{X}}\) the cotangent complex of \(\mathcal{X}\). For \(G\) a reductive group and \(X\) a dg-scheme with an action of \(G\), denote by \(X/G\) the corresponding quotient stack. When \(X\) is affine, we denote by \(X/\!\!/G\) the quotient dg-scheme with dg-ring of regular functions \(\mathcal{O}_{X}^{G}\). We will consider semiorthogonal decompositions \[D^{b}(\mathcal{X})=\langle\mathbb{A}_{i}\mid i\in I\rangle, \tag{2.1}\] where \(I\) is a partially ordered set. Consider a morphism \(\pi\colon\mathcal{X}\to S\). We say the semiorthogonal decompositions (2.1) is _\(S\)-linear_ if \(\mathbb{A}_{i}\otimes\pi^{*}\mathrm{Perf}(S)\subset\mathbb{A}_{i}\) for all \(i\in I\). Same as in the papers [PTd, PTe], we use the terminology of _good moduli spaces_ of Alper, see [1, Example 8.3]. ### Constructible sheaves For \(\mathcal{X}=X/G\) a quotient stack, denote by \(D^{b}_{\mathrm{con}}(\mathcal{X})\) the category of bounded complexes of constructible sheaves on \(\mathcal{X}\), see [10], and by \(\mathrm{Perv}(\mathcal{X})\subset D^{b}_{\mathrm{con}}(\mathcal{X})\) the abelian category of perverse sheaves on \(\mathcal{X}\), see [10]. We denote by \[{}^{p}\tau^{\leq\bullet}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to D^{b}_{ \mathrm{con}}(\mathcal{X})\] the truncation functors with respect to the perverse t-structure and by \[{}^{p}\mathcal{H}^{\bullet}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to \mathrm{Perv}(\mathcal{X})\] the perverse cohomology sheaves. For \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\), consider its total perverse cohomology: \[{}^{p}\mathcal{H}\!\cdot\!(F):=\bigoplus_{i\in\mathbb{Z}}{}^{p}\mathcal{H}^{i }(F)[-i].\] We say \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\) is _a shifted perverse sheaf in degree \(\ell\)_ if \(F[\ell]\in\mathrm{Perv}(\mathcal{X})\) and _a shifted perverse sheaf_ if there exists \(\ell\in\mathbb{Z}\) such that \(F[\ell]\in\mathrm{Perv}(\mathcal{X})\). Let \(\mathbb{D}\) denote the Verdier duality functor on a stack \(\mathcal{X}\). Let \(\omega_{\mathcal{X}}:=\mathbb{D}\mathbb{Q}_{\mathcal{X}}\). When \(\mathcal{X}\) is a smooth stack, equidimensional of dimension \(d\), then \(\omega_{\mathcal{X}}=\mathbb{Q}_{\mathcal{X}}[2d]\). For \(\mathcal{X}=X/G\), denote by \(H^{i}(\mathcal{X}):=H^{i}(\mathcal{X},\mathbb{Q})=H^{i}_{G}(X,\mathbb{Q})\) the singular cohomology of \(\mathcal{X}\) and by \(H^{\mathrm{BM}}_{i}(\mathcal{X})=H^{\mathrm{BM}}_{i}(\mathcal{X},\mathbb{Q})= H^{\mathrm{BM}}_{i,G}(X,\mathbb{Q})\) the Borel-Moore homology of \(\mathcal{X}\) with rational coefficients. 
For \(F\in D^{b}_{\mathrm{con}}(\mathcal{X})\), we use the notation \(H^{\bullet}(\mathcal{X},F)\) for individual cohomology spaces (that is, for \(\bullet\) an arbitrary integer) and \(H^{\cdot}(\mathcal{X},F)\) for the total cohomology \(H^{\cdot}(\mathcal{X},F):=\bigoplus_{i\in\mathbb{Z}}H^{i}(\mathcal{X},F)\). ### Nearby and vanishing cycles For \(\mathcal{X}\) a smooth quotient stack and \[f\colon\mathcal{X}\to\mathbb{C} \tag{2.2}\] a regular function, consider the vanishing and nearby cycle functors: \[\varphi_{f},\psi_{f}\colon D^{b}_{\mathrm{con}}(\mathcal{X})\to D^{b}_{ \mathrm{con}}(\mathcal{X}). \tag{2.3}\] In this paper, we consider regular functions (2.2) such that \(0\) is the only critical value, equivalently that \(\mathrm{Crit}(f)\subset\mathcal{X}_{0}:=f^{-1}(0)\). Note that we consider the pushforward along \(\iota\colon\mathcal{X}_{0}:=f^{-1}(0)\hookrightarrow\mathcal{X}\) of the usual vanishing and nearby functors. There is an exact triangle: \[\iota_{*}\iota^{*}\bullet\to\psi_{f}\bullet\to\varphi_{f}\bullet\to\iota_{*} \iota^{*}\bullet[1].\] The functors (2.3) restrict to functors \[\varphi_{f}[-1],\psi_{f}[-1]\colon\operatorname{Perv}(\mathscr{X})\to \operatorname{Perv}(\mathscr{X}).\] Further, \(\varphi_{f}[-1]\) and \(\psi_{f}[-1]\) commute with \(\mathbb{D}\). We will abuse notation and let \(\varphi_{f}:=\varphi_{f}\mathbb{Q}_{\mathscr{X}}\), \(\psi_{f}:=\psi_{f}\mathbb{Q}_{\mathscr{X}}\), \(\varphi_{f}\mathrm{IC}:=\varphi_{f}\mathrm{IC}_{\mathscr{X}}\), \(\psi_{f}\mathrm{IC}:=\psi_{f}\mathrm{IC}_{\mathscr{X}}\). We may drop \(f\) from the notation if there is no danger of confusion. For more details on vanishing cycles on quotient stacks, see [1, Subsection 2.2], [1, Proposition 2.13]. ### Topological K-theory For a dg-category \(\mathscr{D}\), Blanc [1] defined the topological K-theory spectrum \[K^{\mathrm{top}}(\mathscr{D}).\] For \(i\in\mathbb{Z}\), consider its (rational) homotopy groups, which are \(\mathbb{Q}\)-vector spaces (we drop \(\mathbb{Q}\) from the notation): \[K^{\mathrm{top}}_{i}(\mathscr{D}):=K^{\mathrm{top}}_{i}(\mathscr{D})\otimes_ {\mathbb{Z}}\mathbb{Q}:=\pi_{i}(K^{\mathrm{top}}(\mathscr{D}))\otimes_{ \mathbb{Z}}\mathbb{Q}.\] We have that \(K^{\mathrm{top}}_{i}(\mathscr{D})\cong K^{\mathrm{top}}_{i+2}(\mathscr{D})\) for every \(i\in\mathbb{Z}\) by multiplication with a Bott element, see [1, Definition 1.6]. The topological K-theory spectrum sends exact triangles of dg-categories to exact triangles of spectra [1, Theorem 1.1(c)]. We denote the total topological K-theory of \(\mathscr{D}\) by: \[K^{\mathrm{top}}_{\cdot}(\mathscr{D})=K^{\mathrm{top}}_{0}(\mathscr{D})\oplus K ^{\mathrm{top}}_{1}(\mathscr{D}).\] Given a filtration (indexed by integers) on \(K^{\mathrm{top}}_{i}(\mathscr{D})\) for some \(i\in\mathbb{Z}\), we consider the associated graded pieces \(\operatorname{gr}_{\bullet}K^{\mathrm{top}}_{i}(\mathscr{D})\) for \(\bullet\in\mathbb{Z}\) and we let \[\operatorname{gr}_{\cdot}K_{i}(\mathscr{D}):=\bigoplus_{j\in\mathbb{Z}} \operatorname{gr}_{j}K_{i}(\mathscr{D}).\] Consider a quotient stack \(\mathscr{X}=X/G\) such that \(G\) is reductive and \(X^{\mathrm{cl}}\) is quasi-projective. Let \(M\subset G\) be a compact Lie group such that \(G\) is the complexification of \(M\). 
Denote by \(K^{\mathrm{top}}_{\bullet}(\mathscr{X}):=K^{\mathrm{top}}_{\bullet,M}(X)\) the \(M\)-equivariant topological K-theory of \(X\) (defined by Atiyah and Segal [11]) and by \(G^{\mathrm{top}}_{\bullet}(\mathscr{X}):=G^{\mathrm{top}}_{\bullet,M}(X)\) the \(M\)-equivariant K-homology of \(X\) (also referred to as the dual of compactly supported equivariant topological K-theory in the literature, defined by Thomason [13]). We refer to [1] for a brief review of properties of topological K-theory, K-homology of varieties, and Grothendieck-Riemann-Roch theorems for varieties. For references on K-homology, see [1] for the non-equivariant case and [1, Subsection 2.1.2] for the equivariant case. By [1, Theorem C and the remark following it], we have that: \[K^{\mathrm{top}}_{\bullet}(\operatorname{Perf}(\mathscr{X}))\cong K^{ \mathrm{top}}_{\bullet}(\mathscr{X}),\ K^{\mathrm{top}}_{\bullet}(D^{b} \mathrm{Coh}(\mathscr{X}))\cong G^{\mathrm{top}}_{\bullet}(\mathscr{X}). \tag{2.4}\] Note that \(K^{\mathrm{top}}_{\bullet}(\mathscr{X})=G^{\mathrm{top}}_{\bullet}(\mathscr{ X})\) if \(\mathscr{X}\) is smooth. For a quotient stack \(\mathscr{X}\), there are Chern character maps for \(i\in\mathbb{Z}\): \[\operatorname{ch}\colon K^{\mathrm{top}}_{i}(\mathscr{X})\to\prod_{j\in \mathbb{Z}}H^{i+2j}(\mathscr{X}),\ \operatorname{ch}\colon G^{\mathrm{top}}_{i}(\mathscr{X})\to\prod_{j\in \mathbb{Z}}H^{\mathrm{BM}}_{i+2j}(\mathscr{X}).\] If \(\mathscr{X}\) is a scheme, then \(\mathscr{X}\) is a finite CW complex, and the above Chern character maps are isomorphisms. The first one is the usual Atiyah-Hirzebruch theorem. The second one follows as the dual of the analogous isomorphism for compactly supported topological K-theory, see [1, Section 3.6 and Section 6] or [1, Section 11]. The above Chern characters can be also obtained from the Chern character \[K_{i}^{\mathrm{top}}\to\mathrm{HP}_{i}\] from topological K-theory to periodic cyclic homology applied to the dg-categories \(\mathrm{Perf}(\mathcal{X})\) and \(D^{b}\mathrm{Coh}(\mathcal{X})\), respectively, see [1, Section 4.4]. The Chern character maps are not isomorphisms (in general) for \(\mathcal{X}\) a quotient stack. In Section 3.1 we review the approximation of the Chern character of quotient stacks by Chern characters for varieties following Edidin-Graham [1]. Note that both cohomology (with coefficients in a constructible sheaf \(F\)) and topological K-theory depend only on the underlying classical stack. Let \[l\colon\mathcal{X}^{\mathrm{cl}}\to\mathcal{X} \tag{2.5}\] The pushforward functor induces isomorphisms: \[l_{*}\colon H^{\mathrm{BM}}_{\bullet}(\mathcal{X}^{\mathrm{cl}})\xrightarrow{ \sim}H^{\mathrm{BM}}_{\bullet}(\mathcal{X}),\,l_{*}\colon G^{\mathrm{top}}_{ \bullet}(\mathcal{X}^{\mathrm{cl}})\xrightarrow{\sim}G^{\mathrm{top}}_{\bullet }(\mathcal{X}). \tag{2.6}\] The pullback functor induces isomorphisms: \[l^{*}\colon H^{\bullet}(\mathcal{X})\xrightarrow{\sim}H^{\bullet}(\mathcal{X }^{\mathrm{cl}}),\,l^{*}\colon K^{\mathrm{top}}_{\bullet}(\mathcal{X}) \xrightarrow{\sim}K^{\mathrm{top}}_{\bullet}(\mathcal{X}^{\mathrm{cl}}). \tag{2.7}\] ### Approximation of stacks by varieties In the study of cohomology theories for quotient stacks, it is useful to approximate quotient stacks by varieties. We use the method of Totaro [15], Edidin-Graham [1]. We exemplify the method for Borel-Moore homology and singular cohomology, but it can be applied in many other situations (such as equivariant Chow groups, see loc. 
cit., approximation of the algebraic or topological Chern character, see loc. cit. and Subsection 3.1, or vanishing cohomology, see [14, Subsection 2.2]). Let \(\mathcal{X}=X/G\) be a quotient stack with \(G\) an algebraic group and \(X\) quasi-projective scheme with a \(G\)-linearized action. Choose representations \(V_{n}\twoheadrightarrow V_{n-1}\) such that \[\mathrm{codim}(S_{n}\text{ in }V_{n})\geqslant n,\] where \(S_{n}\subset V_{n}\) is the closed set of points with non-trivial stabilizer. Further, we may choose \(V_{n}\) such that, for \(U_{n}:=V_{n}\setminus S_{n}\), the quotient \(U_{n}/G\) is a scheme [1, Lemma 9]. Then the quotient \((X\times U_{n})/G\) is also a scheme because \(X\) is quasi-projective [1, Proposition 23]. For \(\ell\) fixed and for \(n\) large enough, there are isomorphisms induced by pullback maps: \[H^{\mathrm{BM}}_{\ell}(\mathcal{X})\xrightarrow{\sim}H^{\mathrm{ BM}}_{\ell+2\dim V_{n}}((X\times V_{n})/G)\xrightarrow{\sim}H^{\mathrm{BM}}_{\ell+2 \dim V_{n}}\left((X\times U_{n})/G\right),\] \[H^{\ell}(\mathcal{X})\xrightarrow{\sim}H^{\ell}((X\times V_{n} )/G)\xrightarrow{\sim}H^{\ell}\left((X\times U_{n})/G\right).\] ### The Grothendieck-Riemann-Roch theorem We state the (topological) Grothendieck-Riemann-Roch (GRR) theorem for lci morphisms of (classical and possibly singular) varieties of Baum-Fulton-MacPherson [1, 1]. Let \(\mathcal{X}\) be a classical quotient stack. Recall that there is an intersection product \[H^{i}(\mathcal{X})\otimes H^{\mathrm{BM}}_{j}(\mathcal{X})\to H^{ \mathrm{BM}}_{j-i}(\mathcal{X})\] for all \(i,j\in\mathbb{Z}\). Further, if \(\mathcal{X}^{\prime}\hookrightarrow\mathcal{X}\) a closed immersion, consider the topological K-theory and the Borel-Moore homology with closed supports \(G^{\mathrm{top}}_{\bullet,\mathcal{X}^{\prime}}(\mathcal{X})\cong G^{ \mathrm{top}}_{\bullet}(\mathcal{X}^{\prime})\) and \(H^{\mathrm{BM}}_{\bullet,\mathcal{X}^{\prime}}(\mathcal{X})\cong H^{\mathrm{ BM}}_{\bullet}(\mathcal{X}^{\prime})\). **Theorem 2.1**.: _Assume \(\mathscr{X}\) and \(\mathscr{Y}\) are classical quotient stacks and let \(f\colon\mathscr{X}\to\mathscr{Y}\) be an lci morphism. Let \(\mathscr{X}^{\prime}\subset\mathscr{X}\) and \(\mathscr{Y}^{\prime}\subset\mathscr{Y}\) be closed quotient substacks._ _(a) Assume that \(f^{-1}(\mathscr{Y}^{\prime})\subset\mathscr{X}^{\prime}\). Then the following diagram commutes:_ (2.8) _(b) Assume that \(f(\mathscr{X}^{\prime})\subset\mathscr{Y}^{\prime}\). Let \(T_{f}\) be the virtual tangent bundle of \(f\) and consider its Todd class \(\operatorname{td}(T_{f})\in\widetilde{H}^{0}(\mathscr{X}):=\prod_{i\geqslant 0 }H^{2i}(\mathscr{X})\). Assume \(f\) is proper and define \(f^{\prime}_{*}(-):=f_{*}(\operatorname{td}(T_{f})\cdot(-))\). The following diagrams commute:_ (2.9) Proof.: (a) There are such pullback (Gysin) functors for any quasi-smooth morphism for Borel-Moore homology [10, Construction 3.4] and for derived categories [13], and thus for topological K-theory by (2.4). Such maps have also been constructed for lci morphisms of classical schemes in [1, Section 4.4], see also [1, Section 4.2 and 5]. The commutativity of the diagram (2.8) follows from standard properties of the Chern character [1, Section 5]. 
(b) For \(f\) a proper and lci morphism, there are pushforward (Gysin) maps \(f_{*}\colon K_{\bullet}^{\operatorname{top}}(\mathscr{X})\to K_{\bullet}^{ \operatorname{top}}(\mathscr{Y})\) and \(f_{*}\colon H_{\bullet}^{\operatorname{BM}}(\mathscr{X})\to H_{\bullet}^{ \operatorname{BM}}(\mathscr{Y})\), see [1, Section 4.4], [1, Section 5, Remark (2)]. The diagrams commute by the Grothendieck-Riemann-Roch for lci morphisms, see [1, Section 5, Remark (2)] (note that there are typos in the statement of the diagram for \(K_{\bullet}^{\operatorname{top}}\), see [10] for a statement in the algebraic case). These are the topological versions of the usual algebraic GRR theorems, see [1, Section 4.3], [10]. Note that Baum-Fulton-MacPherson state the above theorems only for \(\bullet=0\) because they are interested in stating a result for \(K_{0}^{\operatorname{alg}}\) or \(G_{0}^{\operatorname{alg}}\) obtained by composing with the natural map to \(K_{0}^{\operatorname{top}}\) or \(G_{0}^{\operatorname{top}}\), respectively. However, the same proofs (based on deformation to the normal cone and the excess intersection formula) apply for \(\bullet=1\) as well. Finally, note that Baum-Fulton-MacPherson treat the case when \(\mathscr{X}\) and \(\mathscr{Y}\) are schemes, but the extension to stacks is obtained using the approximation from Subsection 2.5, see also Subsection 3.1. ### Matrix factorizations We refer to [12, Subsection 2.6] for complete definitions and references related to categories of matrix factorizations. Let \(\mathscr{X}=X/G\) be a quotient stack with \(X\) affine smooth and \(G\) reductive. For a regular function \(f\colon\mathscr{X}\to\mathbb{C}\), we denote the corresponding category of matrix factorizations by \[\operatorname{MF}(\mathscr{X},f).\] Its objects are tuples \((\alpha\colon E\rightleftarrows F\colon\beta)\) such that \(E,F\in\operatorname{Coh}(\mathscr{X})\) and \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(f\). If \(\mathbb{M}\subset D^{b}(\mathscr{X})\) is a subcategory, let \[\operatorname{MF}(\mathbb{M},f)\subset\operatorname{MF}(\mathscr{X},f)\] the subcategory of totalizations of tuples \((E,F,\alpha,\beta)\) with \(E,F\in\mathbb{M}\). The category \(\mathbb{M}\) has a description in terms of categories of singularities [PTa, Subsection 2.6]. In this paper, we consider categories \(\mathbb{M}\) generated by a collection \(\mathcal{C}\) of vector bundles, then \(\operatorname{MF}(\mathbb{M},f)\) is the category of matrix factorizations with summands \(E,F\) which are direct sums of vector bundles in \(\mathcal{C}\), see [PTa, Lemma 2.3]. Assume there exists an extra action of \(\mathbb{C}^{*}\) on \(X\) which commutes with the action of \(G\) on \(X\), and trivial on \(\mathbb{Z}/2\subset\mathbb{C}^{*}\). Assume that \(f\) is weight two with respect to the above \(\mathbb{C}^{*}\)-action. Denote by (1) the twist by the character \(\operatorname{pr}_{2}\colon G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\). Consider the category of graded matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{X},f)\). It has objects pairs \((P,d_{P})\) with \(P\) an equivariant \(G\times\mathbb{C}^{*}\)-sheaf on \(X\) and \(d_{P}\colon P\to P(1)\) a \(G\times\mathbb{C}^{*}\)-equivariant morphism. Note that as the \(\mathbb{C}^{*}\)-action is trivial on \(\mathbb{Z}/2\), we have the induced action of \(\mathbb{C}^{*}=\mathbb{C}^{*}/(\mathbb{Z}/2)\) on \(X\) and \(f\) is weight one with respect to the above \(\mathbb{C}^{*}\)-action. 
The objects of \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{X},f)\) can be alternatively described as tuples \[(E,F,\alpha\colon E\to F(1)^{\prime},\beta\colon F\to E), \tag{2.10}\] where \(E\) and \(F\) are \(G\times\mathbb{C}^{*}\)-equivariant coherent sheaves on \(X\), \((1)^{\prime}\) is the twist by the character \(G\times\mathbb{C}^{*}\to\mathbb{C}^{*}\), and \(\alpha\) and \(\beta\) are \(\mathbb{C}^{*}\)-equivariant morphisms such that \(\alpha\circ\beta\) and \(\beta\circ\alpha\) are multiplication by \(f\). For a subcategory \(\mathbb{M}\subset D^{b}_{\mathbb{C}^{*}}(\mathcal{X})\), we define \[\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\subset\operatorname{MF}^ {\operatorname{gr}}(\mathcal{X},f)\] the subcategory of totalizations of tuples \((E,F,\alpha,\beta)\) with \(E,F\in\mathbb{M}\). Alternatively, if \(\mathbb{M}\) is generated by a collection of \(\mathbb{C}^{*}\)-equivariant vector bundles \(\mathcal{C}\), then \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\) is the category of matrix factorizations with summands \(E,F\) which are direct sums of vector bundles in \(\mathcal{C}\). In this paper, we will consider either ungraded categories of matrix factorizations or graded categories which are Koszul equivalent (see Subsection 2.8) to derived categories of bounded complexes of coherent sheaf on a quasi-smooth stack. When considering the product of two categories of matrix factorizations, which is in the context of the Thom-Sebastiani theorem, we consider the product of dg-categories over \(\mathbb{C}(\!(\beta)\!)\) for \(\beta\) of homological degree \(-2\) in the ungraded case, see [Pre, Theorem 4.1.3], and the product of dg-categories over \(\mathbb{C}\) in the graded case, see [BFK14, Corollary 5.18] (alternatively in the graded case, one can use the Koszul equivalence). ### The Koszul equivalence Let \(\mathcal{X}\) be a smooth quotient stack, let \(\eta\colon\mathcal{E}\to\mathcal{X}\) be a rank \(r\) vector bundle with a section \(s\in\Gamma(\mathcal{X},\mathcal{E})\), let \[j\colon\mathcal{K}:=s^{-1}(0)\hookrightarrow\mathcal{X} \tag{2.11}\] be the derived zero locus of \(s\), and let \[f\colon\mathcal{E}^{\vee}\to\mathbb{C} \tag{2.12}\] be the regular function defined by \(f(x,v_{x})=\langle s(x),v_{x}\rangle\) for \(x\in\mathcal{X}\) and \(v_{x}\in\mathcal{E}^{\vee}|_{x}\). Let \(\mathcal{E}^{\vee}_{0}\) be the derived zero locus of \(f\). We use the following diagram (2.13) Let \(\mathbb{C}^{*}\) act with weight \(2\) on the fibers of \(\mathcal{E}^{\vee}\) and consider the corresponding graded category of matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\). The Koszul equivalence says that [13, 14, 15]: \[\kappa\colon D^{b}(\mathcal{K})\xrightarrow{\sim}\operatorname{MF}^{ \operatorname{gr}}(\mathcal{E}^{\vee},f). \tag{2.14}\] Note that \(\kappa\) restricts to an equivalence: \[\kappa\colon\operatorname{Perf}(\mathcal{K})\xrightarrow{\sim}\operatorname{ MF}^{\operatorname{gr}}_{\mathcal{X}}(\mathcal{E}^{\vee},f).\] Consider the natural closed immersion \(l\colon\mathcal{K}^{\operatorname{cl}}\hookrightarrow\mathcal{K}\). The pushforward map induces a weak equivalence \(l_{*}\colon G^{\operatorname{top}}(\mathcal{K}^{\operatorname{cl}}) \xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{K})\). The functor \(\kappa\) has the following explicit description on complexes from the classical stack, see the formula for \(\kappa\) in [15, Section 2.3.2]. 
**Proposition 2.2**.: _The composition_ \[D^{b}(\mathcal{K}^{\operatorname{cl}})\xrightarrow{l_{*}}D^{b}(\mathcal{K}) \xrightarrow{\kappa}\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\] _is isomorphic to the functor \(j^{\prime}_{*}\eta^{\prime*}l_{*}\) and it induces a weak equivalence_ \[j^{\prime}_{*}\eta^{\prime*}l_{*}\colon G^{\operatorname{top}}(\mathcal{K}^{\operatorname{cl}})\xrightarrow{\sim}K^{\operatorname{top}}\left(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\right).\] _There is thus also a weak equivalence:_ \[j^{\prime}_{*}\eta^{\prime*}\colon G^{\operatorname{top}}(\mathcal{K})\xrightarrow{\sim}K^{\operatorname{top}}\left(\operatorname{MF}^{\operatorname{gr}}(\mathcal{E}^{\vee},f)\right).\] ### Quivers Let \(Q=(I,E)\) be a quiver and let \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) be a dimension vector. Denote by \[\mathcal{X}(d)=R(d)/G(d)\] the stack of representations of \(Q\) of dimension \(d\), alternatively the stack of representations of dimension \(d\) of the path algebra \(\mathbb{C}[Q]\). Here \(R(d)\), \(G(d)\) are given by \[R(d)=\bigoplus_{(i\to j)\in E}\operatorname{Hom}(V^{i},V^{j}),\ G(d)=\prod_{i\in I}GL(V^{i}).\] Consider its good moduli space map: \[\pi_{d}:=\pi_{X,d}\colon\mathcal{X}(d)\to X(d).\] We denote by \(T(d)\) a maximal torus of \(G(d)\), by \(M(d)\) the weight lattice of \(T(d)\), and by \(\mathfrak{g}(d)\) the Lie algebra of \(G(d)\). Let \(M(d)_{\mathbb{R}}=M(d)\otimes_{\mathbb{Z}}\mathbb{R}\). We drop \(d\) from notation when there is no danger of ambiguity. Let \(\mathfrak{S}_{a}\) be the permutation group on \(a\in\mathbb{N}\) letters. Let \(W_{d}:=\times_{i\in I}\mathfrak{S}_{d^{i}}\) be the Weyl group of \(G(d)\). For \(i\in I\) and \(d^{i}\in\mathbb{N}\), denote by \(\beta_{a}^{i}\) for \(1\leqslant a\leqslant d^{i}\) the weights of the standard representation of \(T(d^{i})\). Let \(M(d)_{\mathbb{R}}^{+}\subset M(d)_{\mathbb{R}}\) be the dominant chamber consisting of weights \[\chi=\sum_{i\in I}\sum_{a=1}^{d^{i}}c^{i}_{a}\beta_{a}^{i}\text{ such that }c^{i}_{a}\geqslant c^{i}_{b}\text{ for all }i\in I,d^{i}\geqslant a\geqslant b\geqslant 1.\] For \(\chi\in M(d)^{+}\), we denote by \(\Gamma_{G(d)}(\chi)\) the irreducible representation of \(G(d)\) with highest weight \(\chi\). Let \(\rho_{d}\) be half the sum of positive roots of \(\mathfrak{g}(d)\). We denote by \(1_{d}\) the diagonal cocharacter of \(T(d)\) (which acts on \(\beta_{a}^{i}\) by weight one). For \(d=(d^{i})_{i\in I}\), denote by \(\underline{d}=\sum_{i\in I}d^{i}\). Define the weights \[\sigma_{d}:=\sum_{i\in I,1\leqslant a\leqslant d^{i}}\beta_{a}^{i}\in M(d),\ \tau_{d}:=\frac{\sigma_{d}}{\underline{d}}\in M(d)_{\mathbb{R}}.\] We denote the cocharacter lattice by \(N(d)\). We denote by \(\langle\,,\,\rangle\colon N(d)\times M(d)\to\mathbb{Z}\) the natural pairing, and we use the same notation for its real version. If \(\lambda\) is a cocharacter of \(T(d)\) and \(V\) is a \(T(d)\)-representation, we may abuse notation and write \[\langle\lambda,V\rangle=\langle\lambda,\det(V)\rangle\] to ease notation. ### Framed quivers Let \(Q=(I,E)\) be a quiver. Define the framed quiver \(Q^{f}=(I^{f},E^{f})\) with vertices \(I^{f}=I\sqcup\{\infty\}\) and edges \(E^{f}=E\sqcup\{e_{i}\mid i\in I\}\), where \(e_{i}\) is an edge from \(\infty\) to \(i\in I\). For \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\), let \(V(d)=\bigoplus_{i\in I}V^{i}\), where \(\dim V^{i}=d^{i}\).
Denote by \[R^{f}(d)=R(d)\oplus V(d)\] the affine space of representations of \(Q^{f}\) of dimension \(d\) and consider the moduli stack of framed representations of \(Q\): \[\mathcal{X}^{f}(d):=R^{f}(d)/G(d).\] We consider GIT stability on \(Q^{f}\) given by the character \(\sigma_{\underline{d}}\). It coincides with the King stability condition on \(Q^{f}\) such that the (semi)stable representations of dimension \((1,d)\) are the representations of \(Q^{f}\) with no subrepresentations of dimension \((1,d^{\prime})\) for \(d^{\prime}\) different from \(d\), see [Tod, Lemma 5.1.9]. Consider the smooth variety obtained as a GIT quotient, which is a smooth quasi-projective variety: \[\mathcal{X}^{f}(d)^{\rm ss}:=R^{f}(d)^{\rm ss}/G(d).\] ### Double quivers and preprojective algebras Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. For an edge \(e\) of \(Q\), denote by \(\overline{e}\) the edge with opposite orientation. Consider the multiset \(E^{\circ,d}=\{e,\overline{e}\mid e\in E\}\). Define _the doubled quiver_ \[Q^{\circ,d}=(I,E^{\circ,d}).\] Let \(\mathcal{I}\subset\mathbb{C}[Q^{\circ,d}]\) be the two-sided ideal generated by \(\sum_{e\in E^{\circ}}[e,\overline{e}]\). The preprojective algebra of \(Q^{\circ}\) is \(\Pi_{Q^{\circ}}:=\mathbb{C}[Q^{\circ,d}]/\mathcal{I}\). For \(d\in\mathbb{N}^{I}\), recall the stack of representations of dimension \(d\) of \(Q^{\circ}\): \[\mathcal{X}^{\circ}(d)=R^{\circ}(d)/G(d).\] The stack of representations of \(Q^{\circ,d}\) is: \[\mathcal{Y}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee})/G(d).\] The stack of representations of the preprojective algebra \(\pi_{Q^{\circ}}\) is: \[\mathcal{P}(d):=T^{*}\left(\mathcal{X}^{\circ}(d)\right):=\mu^{-1}(0)/G(d),\] where \[\mu\colon T^{*}R^{\circ}(d)=R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\to \mathfrak{g}(d)^{\vee}\cong\mathfrak{g}(d)\] is the moment map and \(\mu^{-1}(0)\) is the derived zero of \(\mu\). The image of \(\mu\) lies in the traceless Lie subalgebra \(\mathfrak{g}(d)_{0}\subset\mathfrak{g}(d)\), and thus induces a map \(\mu_{0}\colon T^{*}R^{\circ}(d)\to\mathfrak{g}(d)_{0}\). We define the reduced stack to be \[\mathcal{P}(d)^{\rm red}:=\mu_{0}^{-1}(0)/G(d).\] Note that \(\mathcal{P}(d)^{\rm red,cl}=\mathcal{P}(d)^{\rm cl}\). Consider the good moduli space map: \[\pi_{P,d}\colon\mathcal{P}(d)^{\rm cl}\to P(d).\] ### Tripled quivers with potential Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver and consider its doubled quiver \(Q^{\circ,d}=(I,E^{\circ,d})\). _The tripled quiver with potential_ \[(Q,W)\] is defined as follows. The quiver \(Q=(I,E)\) has set of edges \(E=E^{\circ,d}\sqcup\{\omega_{i}\mid i\in I\}\), where \(\omega_{i}\) is a loop at the vertex \(i\in I\). The potential \(W\) is \[W:=\left(\sum_{i\in I}\omega_{i}\right)\left(\sum_{e\in E^{\circ}}[e,\overline {e}]\right)\in\mathbb{C}[Q].\] We say \((Q,W)\) is a tripled quiver with potential if it is obtained as above for some quiver \(Q^{\circ}\). Consider the stack of representations of \(Q\) of dimension \(d\): \[\mathcal{X}(d)=R(d)/G(d)=\left(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)\right)/G (d)=\left(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\oplus\mathfrak{g}(d)\right)/ G(d).\] The potential \(W\) induces a regular function: \[\operatorname{Tr}W\colon\mathcal{X}(d)\to\mathbb{C}.\] Consider the grading on \(\mathcal{X}(d)\) which scales with weight \(2\) the linear maps corresponding to the loops \(\{\omega_{i}\mid i\in I\}\) and fixes the linear maps in \(E^{\circ,d}\). 
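As a quick illustration of the construction above (an example added here for exposition, following directly from the definitions): if \(Q^{\circ}\) is the quiver with one vertex and \(g\) loops \(e_{1},\dots,e_{g}\), then the doubled quiver \(Q^{\circ,d}\) has \(2g\) loops, the tripled quiver \(Q\) has \(2g+1\) loops, and the potential is \[W=\omega\sum_{i=1}^{g}[e_{i},\overline{e}_{i}].\] In this case \(R(d)=\mathfrak{gl}(d)^{\oplus(2g+1)}\) and \(\mathcal{X}(d)=\mathfrak{gl}(d)^{\oplus(2g+1)}/GL(d)\), whose coarse space is the variety of matrix invariants appearing in the computations of Section 8.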
The Koszul equivalence (2.11) says that: \[\kappa\colon D^{b}\left(\mathcal{P}(d)\right)\stackrel{{\sim}}{{ \to}}\operatorname{MF}^{\operatorname{gr}}\left(\mathcal{X}(d),\operatorname {Tr}W\right). \tag{2.15}\] ### Quasi-BPS categories Consider a symmetric quiver \(Q=(I,E)\). Let \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) be a dimension vector and let \(w\in\mathbb{Z}\) be a weight. Consider the multiset of \(T(d)\)-weights on \(R(d)\): \[\mathcal{A}:=\{\beta_{a}^{i}-\beta_{b}^{j}\mid i,j\in I,(i\to j)\in E,1\leqslant a \leqslant d^{i},1\leqslant b\leqslant d^{j}\}.\] Define the polytope \[\mathbf{W}(d):=\frac{1}{2}\mathrm{sum}_{\beta\in\mathcal{A}}[0,\beta]\subset M (d)_{\mathbb{R}}, \tag{2.16}\] where the sums above are Minkowski sums in the space of weights \(M(d)_{\mathbb{R}}\). Let \(\lambda\) be an antidominant cocharacter with associated partition \((d_{a})_{a=1}^{k}\) of \(d\in\mathbb{N}^{I}\), meaning that \[\mathcal{X}(d)^{\lambda}=\times_{a=1}^{k}\mathcal{X}(d_{a}).\] The multiplication for the categorical Hall algebra of \(Q\), or of \((Q,W)\) for a potential \(W\) of \(Q\) and a possible grading, is defined as the functor [10]: \[p_{\lambda*}q_{\lambda}^{*} \colon\ \mathbb{S}_{a=1}^{k}\;D^{b}(\mathcal{X}(d_{a}))\to D^{b}( \mathcal{X}(d)),\] \[p_{\lambda*}q_{\lambda}^{*} \colon\ \mathbb{S}_{a=1}^{k}\;\mathrm{MF}^{\bullet}(\mathcal{X}(d_{a}), \operatorname{Tr}W)\to\mathrm{MF}^{\bullet}(\mathcal{X}(d),\operatorname{Tr}W), \tag{2.17}\] where \(\bullet\in\{\emptyset,\operatorname{gr}\}\) and \(p_{\lambda}\), \(q_{\lambda}\) are the maps \[\mathcal{X}(d)^{\lambda}=\times_{i=1}^{k}\mathcal{X}(d_{i})\stackrel{{ d\lambda}}{{\longleftarrow}}\mathcal{X}(d)^{\lambda\geqslant 0} \stackrel{{ p_{\lambda}}}{{\longrightarrow}}\mathcal{X}(d).\] Define the sets of weights \[\mathcal{A}_{\lambda} :=\{\beta\in\mathcal{A}\mid\langle\lambda,\beta\rangle>0\},\] \[\mathfrak{g}_{\lambda} :=\{\beta_{a}^{i}-\beta_{b}^{i}\mid i\in I,1\leqslant a,b \leqslant d^{i},\langle\lambda,\beta_{a}^{i}-\beta_{b}^{i}\rangle>0\}. \tag{2.18}\] For a weight \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\), let \[\mathbb{M}(d;\delta_{d})\subset D^{b}(\mathcal{X}(d)) \tag{2.19}\] be the full subcategory of \(D^{b}(\mathscr{X}(d))\) generated by vector bundles \(\mathcal{O}_{\mathscr{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) such that \[\chi+\rho-\delta_{d}\in\mathbf{W}(d).\] For \(\lambda\) a cocharacter of \(T(d)\), define \[n_{\lambda}=\big{\langle}\lambda,\det\big{(}\mathbb{L}_{\mathscr{X}(d)}|_{0}^{ \lambda>0}\big{)}\big{\rangle}=\big{\langle}\lambda,\det\big{(}(R(d)^{\vee})^ {\lambda>0}\big{)}\big{\rangle}-\big{\langle}\lambda,\det\big{(}(\mathfrak{g}(d )^{\vee})^{\lambda>0}\big{)}\big{\rangle}. \tag{2.20}\] Note that any complex \(F\in D^{b}(B\mathbb{C}^{*})\) splits as a direct sum \(F=\bigoplus_{w\in\mathbb{Z}}F_{w}\) such that \(\mathbb{C}^{*}\) acts with weight \(w\) on \(F_{w}\). We say \(w\in\mathbb{Z}\)_is a weight of \(F\)_ if \(F_{w}\neq 0\). The category (2.19) has the following alternative description. **Lemma 2.3**.: ([7, Corollary 3.11]) _The category \(\mathbb{M}(d;\delta_{d})\) is the subcategory of \(D^{b}(\mathscr{X}(d))\) of objects \(F\in D^{b}(\mathscr{X}(d))\) such that, for any \(\nu:B\mathbb{C}^{*}\to\mathscr{X}(d)\), the weights of \(\nu^{*}F\) are contained in the set \(\big{[}-\frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle,\frac{1}{2}n_ {\lambda}+\langle\lambda,\delta_{d}\rangle\big{]}\). 
Here \(\nu\) corresponds to a point \(x\in R(d)\) and a cocharacter \(\lambda\colon\mathbb{C}^{*}\to T(d)\) which fixes \(x\)._ Given a potential \(W\) for the quiver \(Q\), and possibly a grading as in Subsection 2.7, we define the quasi-BPS categories: \[\mathbb{S}^{\bullet}(d;\delta_{d}):=\operatorname{MF}^{\bullet}\left(\mathbb{ M}(d;\delta_{d}),\operatorname{Tr}W\right)\text{ for }\bullet\in\{\emptyset,\operatorname{gr}\}. \tag{2.21}\] If \(\delta_{d}=v\tau_{d}\), we use the notations: \[\mathbb{M}(d)_{v}:=\mathbb{M}(d;v\tau_{d})\text{ and }\mathbb{S}(d)_{v}:= \mathbb{S}(d;v\tau_{d}).\] In the setting of Subsection 2.11, there is a subcategory \(\mathbb{T}(d;\delta_{d})\subset D^{b}(\mathscr{P}(d))\) such that, under the Koszul equivalence (2.15), we have that: \[\kappa\colon\mathbb{T}(d;\delta_{d})\xrightarrow{\sim}\mathbb{S}^{ \operatorname{gr}}(d;\delta_{d}), \tag{2.22}\] see also [7, Definition 2.14] for an alternative description of \(\mathbb{T}(d;\delta_{d})\). Let \(\mathscr{X}(d)^{\operatorname{red}}:=(T^{*}R^{\circ}(d)\oplus\mathfrak{g}(d)_ {0})/G(d)\). There is also a Koszul equivalence \[\kappa^{\prime}\colon D^{b}(\mathscr{P}(d)^{\operatorname{red}})\xrightarrow {\sim}\operatorname{MF}^{\operatorname{gr}}\big{(}\mathscr{X}(d)^{ \operatorname{red}},\operatorname{Tr}W\big{)}.\] Define \(\mathbb{M}(d;\delta_{d})^{\operatorname{red}}\subset D^{b}\left(\mathscr{X}(d )^{\operatorname{red}}\right)\) as in (2.19), and let \(\mathbb{T}(d;\delta_{d})^{\operatorname{red}}\subset D^{b}(\mathscr{P}(d)^{ \operatorname{red}})\) be the subcategory such that, under the Koszul equivalence \(\kappa^{\prime}\), we have that: \[\kappa^{\prime}\colon\mathbb{T}(d;\delta_{d})^{\operatorname{red}} \xrightarrow{\sim}\mathbb{S}^{\operatorname{gr}}(d;\delta_{d})^{ \operatorname{red}}.\] We next discuss the compatibility between reduced and non-reduced quasi-BPS categories. For an isomorphism \(\mathfrak{g}(d)\cong\mathfrak{g}(d)_{0}\times\mathbb{C}\) of \(G(d)\)-representation, the projection onto the first factor induces a map \(t\colon\mathscr{X}(d)\to\mathscr{X}(d)^{\operatorname{red}}\). We have \(t\circ\operatorname{Tr}W=\operatorname{Tr}W\). Let \(l^{\prime}\colon\mathscr{P}(d)^{\operatorname{red}}\hookrightarrow\mathscr{P}(d)\) be the natural closed immersion. The next proposition follows from [7, Lemma 2.4.4]: **Proposition 2.4**.: _The following diagram commutes:_ _It induces a commutative diagram:_ ### Semiorthogonal decompositions We recall several semiorthogonal decompositions from [PTe], see [PTe, Subsection 3.3] for the ordering of summands in all the semiorthogonal decompositions. Recall the convention about the product of categories from Subsection 2.7. We will consider quivers satisfying the following: **Assumption 2.1**.: The quiver \(Q=(I,E)\) is symmetric and: * for all \(a,b\in I\) different, the number of edges from \(a\) to \(b\) is even, and * for all \(a\in I\), the number of loops at \(a\) is odd. For \(\alpha\in\mathbb{N}\), we define the quiver \[Q^{\alpha f}=(I^{f},E^{\alpha f}),\] which is a generalization of the framed quiver \(Q^{f}\). The set of vertices is \(I^{f}=I\sqcup\{\infty\}\), and the set of edges \(E^{\alpha f}\) is the disjoint union of \(E\) and \(\alpha\) edges from \(\infty\) to any vertex of \(I\). Consider the moduli of semistable representations \(\mathcal{X}^{\alpha f}(d)^{\text{ss}}\) of the quiver \(Q^{\alpha f}\) for the King stability condition \(\sigma_{d}\), which is a smooth quasi-projective variety. 
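For a concrete example (a standard special case which we do not use later), let \(Q\) be the tripled quiver of the Jordan quiver, with loops \(x,\overline{x},\omega\), and let \(\alpha=1\). By the characterization of (semi)stable framed representations recalled above, a framed representation \((X,\overline{X},Z,v)\in\mathfrak{gl}(d)^{3}\oplus\mathbb{C}^{d}\) is (semi)stable if and only if \(v\) generates \(\mathbb{C}^{d}\) under \(X,\overline{X},Z\), so \[\mathcal{X}^{f}(d)^{\rm ss}=\left\{(X,\overline{X},Z,v)\mid\mathbb{C}\langle X,\overline{X},Z\rangle v=\mathbb{C}^{d}\right\}/GL(d)\] is the non-commutative Hilbert scheme, and the critical locus of \(\operatorname{Tr}W\) on it is the Hilbert scheme of \(d\) points on \(\mathbb{C}^{3}\). 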
**Theorem 2.5**.: ([PTe, Corollary 4.17]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1. Let \(\alpha\in\mathbb{N}\) and \(\mu\in\mathbb{R}\setminus\mathbb{Q}\). There is a \(X(d)\)-linear semiorthogonal decomposition_ \[D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\text{ss}}\right)=\left\langle\bigotimes _{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+v_{i}\tau_{d_{i}}):\mu\leqslant\frac{v_ {1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}<\alpha+\mu \right\rangle. \tag{2.23}\] _Here \((d_{i})_{i=1}^{k}\) is a partition of \(d\), \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\), and \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\) is defined by_ \[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g} (d)^{\lambda>0} \tag{2.24}\] _where \(\lambda\) is an antidominant cocharacter corresponding to the partition \((d_{i})_{i=1}^{k}\). The functor from a summand on the right hand side to \(D^{b}\left(\mathcal{X}^{f}(d)^{\text{ss}}\right)\) is the composition of the Hall product with the pullback along the projection map \(\mathcal{X}^{f}(d)\to\mathcal{X}(d)\)._ **Remark 2.6**.: Note that there are equivalences \[\mathbb{M}(d_{i})_{v_{i}}=\mathbb{M}(d_{i};v_{i}\tau_{d_{i}})\overset{\sim}{ \to}\mathbb{M}(d_{i};\theta_{i}+v_{i}\tau_{d_{i}}) \tag{2.25}\] by taking the tensor product with \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\). Thus the summands in Theorem 2.5 are equivalent to \(\bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\). We next discuss a semiorthogonal decomposition of the stack of representations of \(Q\). **Theorem 2.7**.: ([PTe, Theorem 4.2]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1. There is a \(X(d)\)-linear semiorthogonal decomposition_ \[D^{b}\left(\mathcal{X}(d)\right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{M}( d_{i})_{v_{i}}:\frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}} \right\rangle, \tag{2.26}\] _where \((d_{i})_{i=1}^{k}\) is a partition of \(d\) and \((v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\). The functor from a summand on the right hand side to \(D^{b}(\mathcal{X}(d))\) is given by the Hall product composed with tensoring with the line bundle \(\boxtimes_{i=1}^{k}\theta_{i}\), see Remark 2.6._ Using [Pada, Proposition 2.1], [PTa, Proposition 2.5], there are analogous semiorthogonal decompositions in the non-zero potential case constructed from the semiorthogonal decompositions above. The analogue of Theorem 2.5 is the following: **Theorem 2.8**.: ([PTe, Theorem 4.18]) _Let \(Q\) be a symmetric quiver satisfying Assumption 2.1 and let \(\alpha\geqslant 1\). Let \(\mu\in\mathbb{R}\setminus\mathbb{Q}\). There is a semiorthogonal decomposition_ \[\operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{ss},\operatorname{Tr}W \right)=\left\langle\bigotimes_{i=1}^{k}\mathbb{S}(d_{i})_{v_{i}}:\mu\leqslant \frac{v_{1}}{\underline{d}_{1}}<\cdots<\frac{v_{k}}{\underline{d}_{k}}<\alpha +\mu\right\rangle,\] _where the right hand side is as in (2.23). 
If \((Q,W)\) is a tripled quiver with potential, there is an analogous semiorthogonal decomposition of \(\operatorname{MF}^{\operatorname{gr}}\left(\mathcal{X}^{\alpha f}(d)^{ss}, \operatorname{Tr}W\right)\) for the grading introduced in Subsection 2.12._ We note the following assumption on a quiver \(Q^{\circ}=(I,E^{\circ})\), which says its tripled quiver \(Q\) satisfies Assumption 2.1 and thus Theorems 2.7 and 2.8 can be applied for its tripled quiver with potential: **Assumption 2.2**.: For all \(a,b\in I\), we have that \[\#(a\to b\text{ in }E^{\circ})-\#(b\to a\text{ in }E^{\circ})\in 2 \mathbb{Z}. \tag{2.27}\] We end with a corollary of [Pada, Theorem 1.1]. We will use it only for quivers \(Q^{\circ}\) satisfying Assumption 2.2, and then the corollary can be also deduced from a version of Theorem 2.7 for an arbitrary \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) (see [PTe, Theorem 4.2]) using Koszul equivalence and [PTa, Proposition 2.5]. **Theorem 2.9**.: _Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver, let \(d\in\mathbb{N}^{I}\), let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\) with \(\langle 1_{d},\delta_{d}\rangle=v\). Recall the quasi-BPS categories from Subsection 2.13. The category \(\mathbb{M}(d;\delta_{d})\) is right admissible in \(D^{b}(\mathcal{X}(d))_{v}\), so there is a \(X(d)\)-linear semiorthogonal decomposition:_ \[D^{b}(\mathcal{X}(d))_{v}=\langle\mathbb{B}(d;\delta_{d}),\mathbb{M}(d;\delta _{d})\rangle. \tag{2.28}\] _The category \(\mathbb{M}(d;\delta_{d})^{\operatorname{red}}\) is right admissible in \(D^{b}(\mathcal{X}(d)^{\operatorname{red}})\)._ _Applying matrix factorizations and using the Koszul equivalence, the category \(\mathbb{T}(d;\delta_{d})\) is right admissible in \(D^{b}(\mathcal{P}(d))_{v}\), so there is a semiorthogonal decomposition:_ \[D^{b}(\mathcal{P}(d))_{v}=\langle\mathbb{A}(d;\delta_{d}),\mathbb{T}(d;\delta _{d})\rangle. \tag{2.29}\] _The category \(\mathbb{T}(d;\delta_{d})^{\operatorname{red}}\) is right admissible in \(D^{b}(\mathcal{P}(d)^{\operatorname{red}})_{v}\)._ We note the following: **Corollary 2.10**.: _Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver, let \(d\in\mathbb{N}^{I}\), and let \(\delta_{d}\in M(d)_{\mathbb{R}}^{W_{d}}\). The closed immersion \(l^{\prime}\colon\mathcal{P}(d)^{\operatorname{red}}\hookrightarrow\mathcal{P} (d)\) induces a weak equivalence of spectra:_ \[l^{\prime}_{*}\colon K^{\operatorname{top}}(\mathbb{T}(d;\delta)^{ \operatorname{red}})\xrightarrow{\sim}K^{\operatorname{top}}(\mathbb{T}(d; \delta)).\] Proof.: There is an equivalence of spectra \(l^{\prime}_{*}\colon G^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{ red}})\xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{P}(d))\), see the isomorphism (2.6). The claim follows from Proposition 2.4 and Theorem 2.9. ### Base-change and semiorthogonal decompositions In Section 9, we need to construct semiorthogonal decompositions for etale covers of moduli of representations of a quiver, or for moduli of representations of preprojective algebras. It will be convenient to use the following base-change result for semiorthogonal decompositions, see [10] for the case of derived categories of varieties. For a pretriangulated dg-category \(\mathcal{D}\) and a subcategory \(\mathcal{C}\subset\mathcal{D}\), we say that \(\mathcal{D}\)_is classically generated by \(\mathcal{C}\)_ if the smallest pretriangulated subcategory of \(\mathcal{D}\) which contains \(\mathcal{C}\) and is closed under direct summands is \(\mathcal{D}\). 
**Proposition 2.11**.: _Let \(\mathcal{X}\) be a QCA (quasi-compact with affine stabilizers) derived stack with a morphism \(\pi\colon\mathcal{X}\to S\) to a scheme \(S\). Let_ \[D^{b}(\mathcal{X})=\langle\mathcal{C}_{i}\mid i\in I\rangle\] _be a \(S\)-linear semiorthogonal decomposition. Then, for any etale map \(f\colon T\to S\) and \(f_{T}\colon\mathcal{X}_{T}=\mathcal{X}\times_{S}T\to\mathcal{X}\), there is a semiorthogonal decomposition_ \[D^{b}(\mathcal{X}_{T})=\langle\mathcal{C}_{i,T}\mid i\in I\rangle,\] _where \(\mathcal{C}_{i,T}\subset D^{b}(\mathcal{X}_{T})\) is the subcategory classically generated by \(f_{T}^{*}\mathcal{C}_{i}\)._ Proof.: The image of \(f_{T}^{*}\colon\operatorname{Ind}D^{b}(\mathcal{X})\to\operatorname{Ind}D^{b}( \mathcal{X}_{T})\) classically generates \(\operatorname{Ind}D^{b}(\mathcal{X}_{T})\), as any \(A\in\operatorname{Ind}D^{b}(\mathcal{X}_{T})\) is a direct summand of \(f_{T}^{*}f_{T*}A\). Indeed, consider the diagram: Then \(f_{T}^{*}f_{T*}A=g_{T*}g_{T}^{*}A=A\otimes g_{T*}\mathcal{O}_{\mathcal{X}^{ \prime}}\). The map \(g_{T}\) has a section given by the diagonal map \(\Delta\colon\mathcal{X}_{T}\to\mathcal{X}^{\prime}\), thus \(g_{T*}\mathcal{O}_{\mathcal{X}^{\prime}}\) has \(\mathcal{O}_{\mathcal{X}_{T}}\) as a direct summand, and so \(A\) is indeed a direct summand of \(f_{T}^{*}f_{T*}A\). By the QCA assumption, objects in \(D^{b}(\mathcal{X}_{T})\subset\operatorname{Ind}D^{b}(\mathcal{X}_{T})\) are compact, see [11]. Therefore \(D^{b}(\mathcal{X}_{T})\) is classically generated by \(f_{T}^{*}D^{b}(\mathcal{X})\), thus by \(\mathcal{C}_{i,T}\) for \(i\in I\). To show semiorthogonality, consider \(i,j\in I\) such that \(\operatorname{Hom}(A_{i},A_{j})=0\) for all \(A_{i}\in\mathcal{C}_{i}\) and \(A_{j}\in\mathcal{C}_{j}\). We have \[\operatorname{Hom}_{D^{b}(\mathcal{X}_{T})}(f_{T}^{*}A_{i},f_{T}^ {*}A_{j}) =\operatorname{Hom}_{\operatorname{Ind}D^{b}(\mathcal{X})}(A_{i},f_{T*}f_{T}^{*}A_{j})\] \[=\operatorname{Hom}_{\operatorname{Ind}D^{b}(\mathcal{X})}(A_{i},A_{j}\otimes_{\mathcal{O}_{S}}f_{*}\mathcal{O}_{T}). \tag{2.30}\] Here \(f_{*}\mathcal{O}_{S}\in D_{\operatorname{qc}}(S)=\operatorname{Ind}\operatorname {Perf}(S)\), and the \(S\)-linearity of \(\mathcal{C}_{j}\) implies \(A_{j}\otimes_{\mathcal{O}_{S}}f_{*}\mathcal{O}_{T}\in\operatorname{Ind} \mathcal{C}_{j}\). Then the vanishing of (2.30) follows from the compactness of \(A_{i}\) (see the end of the proof of [12, Lemma 5.5] for how compactness is used). ## 3. Topological K-theory of quotient stacks In this section, we recall the definition of the Chern character maps for quotient stacks and we prove versions of the Atiyah-Hirzebruch theorem for quotient stacks. The main tool we use is the approximation of cohomology theories of quotient stacks by varieties [14, 15]. We also construct a Chern character map for quasi-smooth quotient stacks and discuss versions of the GRR and Atiyah-Hirzebruch theorems for quasi-smooth morphisms. The results are most probably well known to the experts, but we do not know a reference for them. ### The Chern character map for a classical quotient stack Consider a quotient stack \[\mathscr{X}=X/G,\] where \(G\) is a connected reductive group and \(X\) is a classical quasi-projective scheme with an action of \(G\). Let \(M\) be a compact Lie group such that \(G\) is the complexification of \(M\). Let \(EM\) be a contractible CW complex with a free action of \(M\). 
For \(i\in\mathbb{Z}\), consider the Chern character map of the CW complex \(EM\times_{M}X\): \[\operatorname{ch}\colon G_{i}^{\operatorname{top}}(\mathscr{X}) =G_{i}^{\operatorname{top}}(EM\times_{M}X)\] \[\to\widetilde{H}_{i}^{\operatorname{BM}}(EM\times_{M}X)= \widetilde{H}_{i}^{\operatorname{BM}}(\mathscr{X}):=\prod_{j\in\mathbb{Z}}H_{ i+2j}^{\operatorname{BM}}(\mathscr{X}). \tag{3.1}\] The above Chern character map is, in general, neither injective nor surjective. It becomes an isomorphism when the K-theory is completed with respect to an augmentation ideal by theorems of Atiyah-Segal [1], and Edidin-Graham [1] in the algebraic case. The Chern character map for a stack can be approximated by the Chern character map for varieties as follows [1], see also Subsection 2.5. For \(V\) a representation of \(G\), denote by \(S\subset V\) the closed set of points with non-trivial stabilizer. Let \(U:=V\setminus S\). We may choose \(V\) such that \(U/G\) and \((X\times U)/G\) are schemes. Then the following diagram commutes, where the vertical maps are pullback maps and the bottom map is an isomorphism by the Atiyah-Hirzebruch theorem: Choose representations \(V_{n}\twoheadrightarrow V_{n-1}\) and closed subsets \(S_{n}\subset V_{n}\) as in Subsection 2.5. For \(\ell\) fixed and for \(n\) large enough, recall that we have isomorphisms induced by pullbacks: \[H_{\ell}^{\operatorname{BM}}(\mathscr{X})\overset{\sim}{\to}H_{\ell+2 \dim V_{n}}^{\operatorname{BM}}((X\times V_{n})/G)\overset{\sim}{\to}H_{\ell+ 2\dim V_{n}}^{\operatorname{BM}}\left((X\times U_{n})/G\right).\] Then \(\operatorname{ch}(y)\) for \(y\in G_{i}^{\operatorname{top}}(\mathscr{X})\) equals the limit of \(\operatorname{ch}_{V_{n}}(\operatorname{res}_{V_{n}}(y))\). Note that, in the algebraic case, Edidin-Graham show in [1, Proposition 3.1] that the limit of \(\operatorname{ch}_{V_{n}}(\operatorname{res}_{V_{n}}(y))\) is well-defined and use it to define the Chern character. Let \(\mathscr{X}^{\prime}\subset\mathscr{X}\) be a closed quotient stack. There are also Chern character maps with closed supports: \[\operatorname{ch}_{\mathscr{X}^{\prime},\mathscr{X}}\colon G_{i,\mathscr{X}^{ \prime}}^{\operatorname{top}}(\mathscr{X})\to H_{i,\mathscr{X}^{\prime}}^{ \operatorname{BM}}(\mathscr{X}). \tag{3.2}\] There is also a Chern character map: \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathscr{X})\to \widetilde{H}^{i}(\mathscr{X}):=\prod_{j\in\mathbb{Z}}H^{i+2j}(\mathscr{X}), \tag{3.3}\] where \(i\in\mathbb{Z}\). As above, the Chern character map (3.3) can be approximated by Chern character maps of varieties. The Chern character maps (3.1) and (3.3) are compatible as follows, where \(\varepsilon\) and \(\varepsilon^{\prime}\) are the maps induced by intersecting with the fundamental class, see [1, Section 5 and Property 2 from Section 4.1]: (3.4) ### An Atiyah-Hirzeburch type theorem for quotient stacks We assume \(\mathcal{X}\) is a classical quotient stack as in the previous subsection. Let \(i\in\mathbb{Z}\). Consider the increasing filtration \[E_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X}):=\mathrm{ch}^{-1}\left(H_{\leq i+2 \ell}^{\mathrm{BM}}(\mathcal{X})\right)\subset G_{i}^{\mathrm{top}}(\mathcal{ X}). \tag{3.5}\] Note that \(E_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})=G_{i}^{\mathrm{top}}(\mathcal{X})\) for \(\ell\) large enough. 
Denote by \[\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X}):=E_{\ell}G_{i}^{\mathrm{ top}}(\mathcal{X})/E_{\ell-1}G_{i}^{\mathrm{top}}(\mathcal{X}).\] The Chern character induces a map, which we call _the cycle map_: \[\mathrm{c}\colon\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})\to H_{i+2 \ell}^{\mathrm{BM}}(\mathcal{X}). \tag{3.6}\] Note that the cycle map is injective by construction. We prove the following version of the Atiyah-Hirzebruch theorem for quotient stacks. **Proposition 3.1**.: _For \(i,\ell\in\mathbb{Z}\), the map (3.6) is an isomorphism._ Proof.: Let \(i,\ell\in\mathbb{Z}\) and let \(a=i+2\ell\). Let \(x\in H_{a}^{\mathrm{BM}}(\mathcal{X})\). Let \(V\) be a representation of \(G\) such that \(S\subset X\times V\), the locus of points with non-trivial stabilizer, satisfies: \[\mathrm{codim}\left(S\text{ in }X\times V\right)>\dim X-\frac{a}{2}+1.\] Let \(b:=\dim V\), \(U:=V\setminus S\) and let \(t\colon(X\times V)/G\to X/G\) be the natural projection. The map \(t\) induces an isomorphism \[t^{*}\colon H_{i+2\ell}^{\mathrm{BM}}(X/G)\overset{\sim}{\to}H_{i+2\ell+2b}^{ \mathrm{BM}}((X\times V)/G).\] Next, the restriction map \(\alpha\) is an isomorphism for \(\delta\geqslant\ell+b\): \[\alpha\colon H_{i+2\delta}^{\mathrm{BM}}((X\times V)/G)\overset{\sim}{\to}H_ {i+2\delta}^{\mathrm{BM}}\left((X\times U)/G\right). \tag{3.7}\] It suffices to check that \(H_{i+2\delta-\eta,S/G}^{\mathrm{BM}}((X\times V)/G)\cong H_{i+2\delta-\eta}^ {\mathrm{BM}}(S/G)=0\) for \(\eta\in\{0,1\}\). This is true because \(i+2\delta-\eta>2\dim S\). Indeed, it suffices to check that \(\frac{1}{2}a+b>\dim S+1\), alternatively that \(\mathrm{codim}(S\text{ in }X\times V)>\dim X-\frac{1}{2}a+1\), which is true by our assumption on \(V\). There is a commutative diagram whose rows are exact sequences: \[\begin{CD}G_{i}^{\mathrm{top}}((X\times V)/G)@>{}>{}>G_{i}^{\mathrm{top}} \left((X\times U)/G\right)@>{}>{}>G_{i-1,S/G}^{\mathrm{top}}((X\times V)/G)\\ @V{\mathrm{ch}}VV@V{\mathrm{ch}}VV@V{\mathrm{ch}}VV\\ \widetilde{H}_{i}^{\mathrm{BM}}((X\times V)/G)@>{}>{}>\widetilde{H}_{i}^{ \mathrm{BM}}((X\times U)/G)@>{}>{}>\widetilde{H}_{i-1,S/G}^{\mathrm{BM}}((X \times V)/G)\\ @V{\beta}VV@V{\beta}VV@VVV\\ \prod H_{i+2\delta}^{\mathrm{BM}}((X\times V)/G)@>{\alpha}>{}>\prod H_{i+2 \delta}^{\mathrm{BM}}\left((X\times U)/G\right)@>{}>{}>0.\end{CD}\] In the above, the products are over \(\delta>\ell+b\) and the maps \(\beta\) are the natural projections. The kernels of the maps \(\beta\circ\mathrm{ch}\) lie in exact sequences for \(\ell\) and \(\ell-1\), and by taking their quotient we obtain a diagram: \[\begin{CD}\mathrm{gr}_{\ell+b}G_{i}^{\mathrm{top}}((X\times V)/G)@>{\sim}>{}> \mathrm{gr}_{\ell+b}G_{i}^{\mathrm{top}}\left((X\times U)/G\right)\\ @V{\mathrm{c}}VV@V{\mathrm{c}^{\prime}}VV\\ H_{i+2\ell+2b}^{\mathrm{BM}}((X\times V)/G)@>{\alpha}>{\sim}>H_{i+2\ell+2b}^{ \mathrm{BM}}\left((X\times U)/G\right).\end{CD}\] The map \(\mathrm{c}^{\prime}\) is an isomorphism by the Atiyah-Hirzebruch theorem, thus the map \(\mathrm{c}\) is also an isomorphism, and the claim follows. For \(\mathcal{X}\) a quotient stack as above, recall that there is an intersection product \(H_{i}^{\mathrm{BM}}(\mathcal{X})\otimes H^{j}(\mathcal{X})\to H_{i-j}^{ \mathrm{BM}}(\mathcal{X})\) for \(i,j\in\mathbb{Z}\). Note the following immediate statement. 
**Proposition 3.2**.: _Let \(\alpha\in\prod_{i\geq 0}H^{i}(\mathcal{X})\) be such that \(\alpha=1+\alpha^{\prime}\) for \(\alpha^{\prime}\in\prod_{i\geq 1}H^{i}(\mathcal{X})\). Define \(\mathrm{ch}^{\prime}(-):=\mathrm{ch}(-)\cdot\alpha\). Then \(\mathrm{ch}^{\prime}\) induces a map on the associated graded pieces \(\mathrm{gr}_{\ell}G_{i}^{\mathrm{top}}(\mathcal{X})\), and this map equals the cycle map (3.6)._ ### The Chern character map for quasi-smooth stacks Assume \(\mathcal{X}=X/G\) is a quotient stack with \(X\) a quasi-smooth scheme. One can define a Chern character map for \(\mathcal{X}\) using the Chern character map for \(\mathcal{X}^{\mathrm{cl}}\) and the isomorphisms (2.6). However, we define a topological Chern character map which takes into account the derived structure. For a closed immersion \(\mathcal{X}\hookrightarrow\mathcal{Y}\), consider the topological K-theory with closed supports \(G_{\bullet,\mathcal{X}}^{\mathrm{top}}(\mathcal{Y})\cong G_{\bullet}^{ \mathrm{top}}(\mathcal{X})\) and the Borel-Moore homology with closed supports \(H_{\bullet,\mathcal{X}}^{\mathrm{BM}}(\mathcal{Y})\cong H_{\bullet}^{ \mathrm{BM}}(\mathcal{X})\). Let \(\mathcal{X}\) be a quasi-smooth quotient stack. Consider a closed immersion \(i\colon\mathcal{X}\hookrightarrow\mathcal{Y}\), where \(\mathcal{Y}\) is a smooth classical quotient stack. Let \(N\) be the normal bundle of \(i\), which is a vector bundle on \(\mathcal{X}\) and thus has a Todd class \(\mathrm{td}(N)\in\widetilde{H}^{0}(\mathcal{X})\). Consider the local Chern character map \[\mathrm{ch}_{\mathcal{X},\mathcal{Y}}\colon G_{\bullet,\mathcal{X}}^{\mathrm{ top}}(\mathcal{Y})\to\widetilde{H}_{\bullet,\mathcal{X}}^{\mathrm{BM}}(\mathcal{Y}).\] Define \[\mathrm{ch}_{\mathcal{X}}:=\mathrm{ch}_{\mathcal{X},\mathcal{Y}}\cdot\mathrm{td }(N)\colon G_{\bullet}^{\mathrm{top}}(\mathcal{X})\to\widetilde{H}_{\bullet}^{ \mathrm{BM}}(\mathcal{X}). \tag{3.8}\] **Lemma 3.3**.: _The map \(\mathrm{ch}_{\mathcal{X}}\) is independent of the choice of \(\mathcal{Y}\) as above._ Proof.: Let \(i^{\prime}\colon\mathcal{X}\hookrightarrow\mathcal{Y}^{\prime}\) be a different closed immersion. Choose \(\mathcal{Y}^{\prime\prime}\) and closed immersions \(j\colon\mathcal{Y}\hookrightarrow\mathcal{Y}^{\prime\prime}\) and \(j^{\prime}\colon\mathcal{Y}^{\prime}\hookrightarrow\mathcal{Y}^{\prime\prime}\). Note that the Todd classes for the normal bundles of \(ji\) and \(j^{\prime}i^{\prime}\) are the same. The statement then follows from the GRR theorem for the closed immersions \(j\) and \(j^{\prime}\), see Theorem 2.1. **Remark 3.4**.: If \(\mathcal{X}\) is a classical stack, then the Chern character constructed above coincides with the usual Chern character (3.2), as one can see using the GRR theorem for a closed immersion of \(\mathcal{X}\) in a smooth ambient stack. Similarly, one proves a topological GRR theorem for quasi-smooth morphisms using Theorem 2.1. **Proposition 3.5**.: _(i) Let \(f\colon\mathcal{X}\to\mathcal{Y}\) be a quasi-smooth proper map of quasi-smooth quotient stacks. Let \(T_{f}\) be the virtual tangent bundle and consider the Todd class \(\operatorname{td}(T_{f})\in\widetilde{H}^{0}(\mathcal{X})\). Define \(f^{\prime}_{*}(-):=f_{*}(\operatorname{td}(T_{f})\cdot(-))\). 
Then the following diagram commutes:_ (3.9) _(ii) Further, for any smooth morphism \(f\colon\mathcal{X}\to\mathcal{Y}\) between quasi-smooth quotient stacks, the following diagram commutes:_ (3.10) Define a filtration on \(G^{\operatorname{top}}_{\bullet}(\mathcal{X})\) as in (3.5), the associated graded, and a cycle map as in (3.6), which is also an isomorphism by a relative version of Proposition 3.1 and Proposition 3.2: \[\operatorname{c}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}( \mathcal{X})\xrightarrow{\sim}H^{\operatorname{BM}}_{i+2\ell}(\mathcal{X}). \tag{3.11}\] We have that \(\operatorname{td}(T_{f})=1+x\in\widetilde{H}^{0}(\mathcal{X})\) for \(x\in\prod_{i\geqslant 2}H^{i}(\mathcal{X})\). We record the following corollary of the diagrams (3.9) and (3.10), see also Proposition 3.2. **Corollary 3.6**.: _Let \(f\colon\mathcal{X}\to\mathcal{Y}\) be a quasi-smooth morphism of quasi-smooth quotient stacks of relative dimension \(d\). Let \(i,\ell\in\mathbb{Z}\). If \(f\) is smooth, then it induces a pullback map:_ \[f^{*}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{Y}_{0} )\to\operatorname{gr}_{\ell+d}G^{\operatorname{top}}_{i}(\mathcal{X}_{0}).\] _If \(f\) is proper, then it induces a pushforward map:_ \[f_{*}\colon\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{X}_{0} )\to\operatorname{gr}_{\ell}G^{\operatorname{top}}_{i}(\mathcal{Y}_{0}).\] ## 4. Topological K-theory of categories of singularities In this section, we compute the topological K-theory of categories of matrix factorizations in terms of the monodromy invariant cohomology of vanishing cycles. The results and approach are inspired by work of Efimov [1], Blanc-Robalo-Toën-Vezzosi [1], and Brown-Dyckerhoff [1]. Let \(\mathcal{X}\) be a smooth quotient stack, let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function with \(0\) the only singular value, let \(\iota\colon\mathcal{X}_{0}\hookrightarrow\mathcal{X}\) be the (derived) fiber over \(0\). Let \(d:=\dim_{\mathbb{C}}\mathcal{X}\). The category of singularities \[D_{\operatorname{sg}}(\mathcal{X}_{0}):=D^{b}\mathrm{Coh}(\mathcal{X}_{0})/ \mathrm{Perf}(\mathcal{X}_{0}) \tag{4.1}\] is equivalent to the category of matrix factorizations [10, 1]: \[\operatorname{MF}(\mathcal{X},f)\xrightarrow{\sim}D_{\operatorname{sg}}( \mathcal{X}_{0}). \tag{4.2}\] We denote by \(K^{\operatorname{sg}}_{\bullet}(\mathcal{X}_{0})\) the topological K-theory of \(D_{\operatorname{sg}}(\mathcal{X}_{0})\). From (4.1), there is a long exact sequence of \(\mathbb{Q}\)-vector spaces: \[\ldots\to K^{\operatorname{top}}_{i}(\mathcal{X}_{0})\to G^{\operatorname{ top}}_{i}(\mathcal{X}_{0})\to K^{\operatorname{sg}}_{i}(\mathcal{X}_{0})\to K^{ \operatorname{top}}_{i-1}(\mathcal{X}_{0})\to G^{\operatorname{top}}_{i-1}( \mathcal{X}_{0})\to\ldots. \tag{4.3}\] We assume throughout the section that \(f\) is _quasi-homogeneous_, that is, that there exists an action of \(\mathbb{C}^{*}\) on \(\mathcal{X}\) contracting \(\mathcal{X}\) onto \(\mathcal{X}_{0}\) such that \(f\) has positive weight with respect to the action of \(\mathbb{C}^{*}\), or \(f=0\). Note that the function (2.12) is quasi-homogeneous of weight \(1\) with respect to the weight \(1\) scaling action on the fibers. Then \(0\) is the only singular value of \(f\): indeed, if \(v\) denotes the vector field generating the \(\mathbb{C}^{*}\)-action, then \(v(f)\) is a positive multiple of \(f\), so \(f\) vanishes on \(\operatorname{Crit}(f)\). 
Further, there is a weak equivalence induced by restriction: \[K^{\operatorname{top}}(\mathcal{X})\overset{\sim}{\to}K^{\operatorname{top}}( \mathcal{X}_{0}).\] Note that all the results in this section hold as long as the isomorphism \(K^{\operatorname{top}}(\mathcal{X})\overset{\sim}{\to}K^{\operatorname{top}}( \mathcal{X}_{0})\) holds, and this is used only in the proof of Proposition 4.2. ### Vanishing cycle cohomology We begin by recalling two distinguished triangles relating the nearby and vanishing cycle functors applied to the constant sheaf. A reference is [10, Chapter 3], especially [10, pages 24-28]. The results in loc. cit. are stated for varieties, but they also hold for quotient stacks as in [1, Subsection 2.2]. There is an exact triangle in \(D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}[-1]\to\psi_{f}[-1]:=\psi_{f}\mathbb{Q}_{ \mathcal{X}}[-1]\overset{\operatorname{can}}{\longrightarrow}\varphi_{f}[-1] :=\varphi_{f}\mathbb{Q}_{\mathcal{X}}[-1]\to\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}. \tag{4.4}\] By taking the dual of the above triangle, we obtain the distinguished triangle: \[\varphi_{f}[-1]\overset{\operatorname{var}}{\longrightarrow}\psi_{f}[-1]\to \iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{X}}[1]\to\varphi_{f}[1]. \tag{4.5}\] We have that \(\operatorname{var}\circ\operatorname{can}=1-\operatorname{T}\), where \(\operatorname{T}\) is the monodromy operator. Consider the map \[\alpha\colon\mathbb{Q}_{\mathcal{X}_{0}}=\iota^{*}\mathbb{Q}_{\mathcal{X}}\to\iota^ {!}\mathbb{Q}_{\mathcal{X}}[2]\] given by capping with the fundamental class of the quasi-smooth stack \(\mathcal{X}_{0}\). If \(f\) is not the zero map, then this is the usual construction. If \(f\) is the zero map, then \(\mathcal{X}_{0}\cong\mathcal{X}\times r\), where \(r=\operatorname{Spec}\mathbb{C}[\epsilon]\) for \(\epsilon\) in homological degree \(1\). The map \(\alpha\) is then the zero map. Let \(\varphi_{f}^{\operatorname{inv}}\) be the cone of \(1-\operatorname{T}\): \[\varphi_{f}\overset{1-\operatorname{T}}{\longrightarrow}\varphi_{f}\to \varphi_{f}^{\operatorname{inv}}\to\varphi_{f}[1]. \tag{4.6}\] Consider the diagram, where the rows and the columns are distinguished triangles: (4.7) In the above diagram, the second row is (4.4), the second column is (4.5), and the third column is (4.6). We obtain that the first row is also a distinguished triangle: \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}\overset{\alpha}{\to}\iota_{*}\iota^{!} \mathbb{Q}_{\mathcal{X}}[2]\to\varphi_{f}^{\operatorname{inv}}\to\iota_{*} \mathbb{Q}_{\mathcal{X}_{0}}[1]. \tag{4.8}\] We will also use later the notations \(\varphi_{f}^{\operatorname{inv}}\,\mathbb{Q}_{\mathcal{X}}\) and \(\varphi_{f}^{\operatorname{inv}}\,\mathrm{IC}_{\mathcal{X}}\) when it is convenient to indicate the ambient space. 
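As a basic illustration (a standard computation, not needed later), let \(\mathcal{X}=\mathbb{C}\) with coordinate \(x\) and let \(f=x^{2}\), which is quasi-homogeneous for the scaling action. The Milnor fiber of \(f\) at the origin consists of two points, so \(\varphi_{f}\mathbb{Q}_{\mathcal{X}}[-1]\) is a skyscraper at the origin with one-dimensional stalk, on which the monodromy \(\operatorname{T}\) acts by \(-1\). Hence \(1-\operatorname{T}\) is invertible with \(\mathbb{Q}\)-coefficients and \[\varphi_{f}^{\operatorname{inv}}\simeq 0,\] so the monodromy invariant cohomology can be strictly smaller than the full vanishing cycle cohomology. This is consistent with the fact that the Grothendieck group of \(\operatorname{MF}(\mathbb{C},x^{2})\) is torsion, see also Proposition 4.5 below. 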
Denote by \[H^{\bullet}(\mathcal{X},\varphi_{f})^{\operatorname{inv}} :=\ker(1-\operatorname{T})\subset H^{\bullet}(\mathcal{X}, \varphi_{f}),\] \[H^{\bullet}(\mathcal{X},\varphi_{f})_{\operatorname{inv}} :=H^{\bullet}(\mathcal{X},\varphi_{f})/\mathrm{image}(1-\operatorname{ T}).\] There is a long exact sequence: \[\ldots\to H^{2d-i-2}(\mathscr{X}_{0}) \xrightarrow{\alpha}H^{\mathrm{BM}}_{i}(\mathscr{X}_{0})\to H^{2d-i}( \mathscr{X},\varphi^{\mathrm{inv}}_{f}[-2])=H^{2d-i-2}(\mathscr{X},\varphi^{ \mathrm{inv}}_{f})\] \[\to H^{2d-i-1}(\mathscr{X}_{0})\to H^{\mathrm{BM}}_{i-1}(\mathscr{X}_ {0})\to\ldots \tag{4.9}\] and there are short exact sequences: \[0\to H^{i}(\mathscr{X},\varphi_{f})_{\mathrm{inv}}\to H^{i}(\mathscr{X}, \varphi^{\mathrm{inv}}_{f})\to H^{i+1}(\mathscr{X},\varphi_{f})^{\mathrm{inv}} \to 0. \tag{4.10}\] We note the following compatibility between K-theory and cohomology. Let \(\alpha^{\prime}\colon\mathrm{Perf}(\mathscr{X}_{0})\hookrightarrow D^{b}( \mathscr{X}_{0})\) be the inclusion. **Proposition 4.1**.: _The following diagram commutes:_ Proof.: If \(f\) is not the zero map, then the diagram above is the same as the diagram (3.4). If \(f\) is zero, then \(\alpha\) is zero. We show that \(\alpha^{\prime}\) is also the zero map on topological K-theory. Let \(\mathscr{X}_{0}\) be the derived zero locus of \(0\colon\mathscr{X}\to\mathbb{C}\). Let \(r=\mathrm{Spec}\,\mathbb{C}[\epsilon]\) for \(\epsilon\) of homological degree \(1\), then \(\mathscr{X}_{0}\cong\mathscr{X}\times r\). Consider the natural projection \(\pi\colon\mathscr{X}_{0}=\mathscr{X}\times r\to\mathscr{X}\) and let \(l\colon\mathscr{X}_{0}^{\mathrm{cl}}\cong\mathscr{X}\to\mathscr{X}_{0}\). Then \(\pi^{*}\colon K^{\mathrm{top}}_{\bullet}(\mathscr{X})\xrightarrow{\sim}K^{ \mathrm{top}}_{\bullet}(\mathscr{X}_{0})\) and \(l_{*}\colon G^{\mathrm{top}}_{\bullet}(\mathscr{X}_{0})\xrightarrow{\sim}G^{ \mathrm{top}}_{\bullet}(\mathscr{X})\). For any topological vector bundle \(E\) on \(\mathscr{X}\), there is an isomorphism: \[\pi^{*}(E)\cong l_{*}(E)\oplus l_{*}(E)[1]\in G^{\mathrm{top}}_{0}(\mathscr{X} _{0}),\] so the conclusion for \(i=0\) holds. A similar computation holds for the suspension of \(\mathscr{X}\), so the conclusion also holds for \(i=1\). ### Chern character maps for matrix factorizations Let \(X\) be a smooth affine variety with an action of a reductive group \(G\). Consider the quotient stack \(\mathscr{X}=X/G\). Let \(f\colon\mathscr{X}\to\mathbb{C}\) be a regular function. The main result of this subsection is the construction of a Chern character map: \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathrm{MF}(\mathscr{X},f))\to\widetilde {H}^{i}(\mathscr{X},\varphi^{\mathrm{inv}}_{f}).\] We may assume that \(\mathrm{Crit}(f)\subset\mathscr{X}_{0}:=f^{-1}(0)\). Further, replacing \(\mathscr{X}\) with an open neighborhood of \(\mathscr{X}_{0}\), we may also assume that the pull-back gives a weak equivalence of spectra \(K^{\mathrm{top}}(\mathscr{X})\xrightarrow{\sim}K^{\mathrm{top}}(\mathscr{X}_{0})\). Consider the regular function \(\widetilde{f}\colon\mathscr{X}\times\mathbb{C}\to\mathbb{C}\) defined by \(\widetilde{f}(x,t)=t\cdot f(x)\) and set \[F_{\widetilde{f}}=(\widetilde{f})^{-1}(1)\subset\mathscr{X}\times\mathbb{C}^{ *}.\] For a closed substack \(\mathscr{Y}\subset\mathscr{X}\), we denote by \(K^{\mathrm{top}}(\mathscr{X}/\mathscr{Y})\) the relative topological K-theory spectra, i.e. the fiber of the map \(K^{\mathrm{top}}(\mathscr{X})\to K^{\mathrm{top}}(\mathscr{Y})\). 
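As a sanity check of the above construction (a trivial example, not needed later), take \(\mathcal{X}=\mathbb{C}\) and \(f(x)=x\), which satisfies the assumptions above. Then \(\operatorname{Crit}(f)=\emptyset\), so \(\operatorname{MF}(\mathcal{X},f)\simeq 0\). On the other side, \[F_{\widetilde{f}}=\{(x,t)\mid tx=1\}\subset\mathbb{C}\times\mathbb{C}^{*},\] and the second projection restricts to an isomorphism \(F_{\widetilde{f}}\stackrel{{\sim}}{{\to}}\mathbb{C}^{*}\), so the inclusion \(F_{\widetilde{f}}\hookrightarrow\mathbb{C}\times\mathbb{C}^{*}\) is a homotopy equivalence and the relative topological K-theory \(K^{\mathrm{top}}(\mathbb{C}\times\mathbb{C}^{*}/F_{\widetilde{f}})\) vanishes, as predicted by Proposition 4.2 below. 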
**Proposition 4.2**.: _There is a canonical weak equivalence of spectra:_ \[K^{\mathrm{top}}(\mathrm{MF}(\mathscr{X},f))\xrightarrow{\sim}K^{\mathrm{ top}}(\mathscr{X}\times\mathbb{C}^{*}/F_{\widetilde{f}}).\] Proof.: We consider graded categories of matrix factorizations of \(\mathcal{X}\times\mathbb{C}\), where the grading is given by the \(\mathbb{C}^{*}\)-action with weight \((0,2)\). By the Koszul equivalence (2.14) and (4.2), there are equivalences: (4.11) In the above diagram, the horizontal sequences are exact sequences of dg-categories and the vertical arrows are equivalences induced by (2.14). Consider the inclusion \(\iota\colon\mathcal{X}_{0}\hookrightarrow\mathcal{X}\) and the projection \(p\colon\mathcal{X}\times\mathbb{C}\to\mathcal{X}\). Note that \(p|_{F_{\widetilde{f}}}\colon F_{\widetilde{f}}\to\mathcal{X}\setminus \mathcal{X}_{0}\) is an isomorphism. We have the commutative diagram of spectra: The horizontal sequences are exact triangles of spectra, and the vertical arrows are equivalences. Let \(i\colon\mathcal{X}\hookrightarrow\mathcal{X}\times\mathbb{C}\) be the inclusion into \(\mathcal{X}\times\{0\}\). By Lemma 4.3 below together with the isomorphism \(K^{\mathrm{top}}(\mathcal{X})\overset{\sim}{\to}K^{\mathrm{top}}(\mathcal{X}_ {0})\) (this is the only place where we use that \(f\) is quasi-homogeneous), we have the equivalence \[i_{*}\colon K^{\mathrm{top}}(\mathcal{X})\overset{\sim}{\to}K^{\mathrm{top}}( \mathrm{MF}^{\mathrm{gr}}_{\mathcal{X}\times\{0\}}(\mathcal{X}\times\mathbb{C },\widetilde{f})).\] Therefore, by taking cofibers, we obtain the equivalence \[K^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{X}\times\mathbb{C}^{*}, \widetilde{f}))\overset{\sim}{\to}\mathrm{fib}\big{(}K^{\mathrm{top}}( \mathcal{X}\times\mathbb{C}^{*})\to K^{\mathrm{top}}(F_{\widetilde{f}})\big{)}.\] The desired equivalence then follows from the right vertical equivalence in (4.11). We have used the following lemma: **Lemma 4.3**.: _The following diagram commutes:_ Proof.: The equivalence \(\kappa\) is given by \((-)\otimes_{\mathcal{O}_{\mathcal{X}_{0}}}\mathcal{K}\) for the Koszul factorization \(\mathcal{K}\), see [Tod, Section 2.3.3]: \[\mathcal{K}=\mathcal{O}_{\mathcal{X}_{0}}\otimes_{\mathcal{O}_{\mathcal{X}}} \mathcal{O}_{\mathcal{X}\times\mathbb{C}}=\mathcal{O}_{\mathcal{X}}[\varepsilon,t]\] where \(\deg\varepsilon=-1\), \(\deg t=2\), with differential \(d_{\mathcal{K}}(\alpha(t)+\beta(t)\varepsilon)=f\beta(t)+t\alpha(t)\varepsilon\). By construction, it commutes with tensor product from \(D^{b}(\mathcal{X})\). Moreover, as an object of \(\mathrm{MF}^{\mathrm{gr}}(\mathcal{X}\times\mathbb{C},\widetilde{f})\), the object \(\mathcal{K}\) is isomorphic to \(i_{*}\mathcal{O}_{\mathcal{X}}[1]\), see [1, Proposition 3.20] or [1, Equation (2.3.6)]. Therefore the lemma holds. We next relate the relative cohomology to the monodromy invariant cohomology of vanishing cycles: **Proposition 4.4**.: _There are canonical isomorphisms:_ \[H^{\bullet}(\mathcal{X}\times\mathbb{C}^{*}/F_{\widetilde{f}})\cong H^{\bullet }(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]).\] Proof.: Consider the commutative diagram where horizontal sequences are exact triangles. By taking the fibers of the vertical maps, we obtain the exact triangle \[\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}\oplus\iota_{*}\mathbb{Q}_{\mathcal{X}_{ 0}}[-1]\to\psi_{f}^{\mathrm{inv}}[-1]\to\varphi_{f}^{\mathrm{inv}}[-1]\to\iota _{*}\mathbb{Q}_{\mathcal{X}_{0}}[1]\oplus\iota_{*}\mathbb{Q}_{\mathcal{X}_{0}}. 
\tag{4.12}\] Let \(u\colon\mathcal{X}\setminus\mathcal{X}_{0}\hookrightarrow\mathcal{X}\). Then \(\psi_{f}^{\mathrm{inv}}[-1]=\iota_{*}\iota^{*}u_{*}u^{*}\mathbb{Q}_{\mathcal{X}}\), see [12, Equation (17)]. We then have that: \[\mathbb{Q}_{\mathcal{X}_{0}}\oplus\mathbb{Q}_{\mathcal{X}_{0}}[-1]=\iota^{*}p _{*}\mathbb{Q}_{\mathcal{X}\times\mathbb{C}^{*}},\ \psi_{f}^{\mathrm{inv}}[-1]=\iota_{*}\iota^{*}u_{*}u^{*}\mathbb{Q}_{\mathcal{X}}= \iota_{*}\iota^{*}p_{*}\mathbb{Q}_{F_{\widetilde{f}}}.\] The first map in (4.12) is identified with \(\iota_{*}\iota^{*}p_{*}\) of the natural map \(\mathbb{Q}_{\mathcal{X}\times\mathbb{C}^{*}}\to\mathbb{Q}_{F_{\widetilde{f}}}\). Therefore we obtain the desired isomorphism. Consider the Chern character map of relative K-theories: \[\mathrm{ch}\colon K_{i}^{\mathrm{top}}(\mathcal{X}\times\mathbb{C}^{*}/F_{ \widetilde{f}})\to\widetilde{H}^{i}(\mathcal{X}\times\mathbb{C}^{*}/F_{ \widetilde{f}}). \tag{4.13}\] Define the Chern character map \[\mathrm{ch}\colon K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X},f))\to\widetilde {H}^{i}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}) \tag{4.14}\] such that the following diagram commutes, where the horizontal maps are isomorphisms by Propositions 4.2 and 4.4: Recall that \(d:=\dim_{\mathbb{C}}\mathcal{X}\). Define the filtration \[E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X},f)):=\mathrm{ch}^{-1} \left(H^{\geqslant 2d-i-2\ell}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]) \right). \tag{4.15}\] We obtain cycle maps on the associated graded pieces: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{X },f))\to H^{2d-i-2\ell}(\mathcal{X},\varphi_{f}^{\mathrm{inv}}[-2]). \tag{4.16}\] **Proposition 4.5**.: _The maps (4.16) are isomorphisms for all \(i,\ell\in\mathbb{Z}\), and the map (4.14) is an isomorphism if \(\mathcal{X}\) is a variety._ Proof.: Define a filtration: \[E_{\ell}K_{i}^{\rm top}(\mathscr{X}\times\mathbb{C}^{*}/F_{\widetilde{f}}):={\rm ch }^{-1}\left(H^{\geqslant 2d-i-2\ell}(\mathscr{X}\times\mathbb{C}^{*}/F_{ \widetilde{f}})\right)\] and the cycle maps on the associated graded pieces, which are isomorphisms using the long exact sequence for relative K-theory and Proposition 3.1: \[{\rm c}\colon\operatorname{gr}_{\ell}K_{i}^{\rm top}(\mathscr{X}\times \mathbb{C}^{*}/F_{\widetilde{f}})\xrightarrow{\sim}H^{2\dim\mathscr{X}-i-2\ell}( \mathscr{X}\times\mathbb{C}^{*}/F_{\widetilde{f}}).\] The conclusions then follow. Composing with the inverse of the equivalence (4.2), we also obtain a Chern character: \[{\rm ch}\colon K_{i}^{\rm sg}(\mathscr{X}_{0})\to\widetilde{H}^{i}(\mathscr{X },\varphi_{f}^{\rm inv}). \tag{4.17}\] Note the following compatibility of the Chern character maps. **Proposition 4.6**.: _The following diagram commutes, where the top sequence is (4.3) and the bottom sequence is (4.9):_ (4.18) Proof.: By the construction of the Chern character (4.17) and the GRR theorem, it suffices to show the following diagram commutes, which is indeed the case: ### The Grothendieck-Riemann-Roch theorem for matrix factorizations The Grothendieck-Riemann-Roch theorem for relative topological K-theory and cohomology implies the following. **Theorem 4.7**.: _Let \(h\colon\mathscr{X}\to\mathscr{Y}\) be a morphism of smooth quotient stacks. Consider a regular function \(f\colon\mathscr{Y}\to\mathbb{C}\), let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous. Let \(i\in\mathbb{Z}\)._ _(a) The following diagram commutes:_ _(b) Assume \(h\) is proper. 
Let \({\rm td}(T_{h})\in\widetilde{H}^{0}(\mathscr{X}_{0})\) be the Todd class of the virtual tangent bundle \(T_{h}\) of \(h\) and let \(h^{\prime}_{*}(-):=h_{*}({\rm td}(T_{h})\cdot(-))\). Then the following diagram commutes:_ Proof.: We may assume that \(f\) and \(g\) have only \(0\) as a critical value. The equivalence from Proposition 4.2 and the isomorphism from Proposition 4.4 commute with both \(h_{*}\) and \(h^{*}\). The Chern character (4.13) commutes with \(h^{*}\), so part (a) follows. Finally, the topological Grothendieck-Riemann-Roch theorem implies that the following diagram commutes, so part (b) follows as well: We note the following functoriality of graded topological K-theory of categories of singularities. **Proposition 4.8**.: _Let \(h\colon\mathcal{X}\to\mathcal{Y}\) be a morphism of smooth quotient stacks of relative dimension \(d\), let \(f\colon\mathcal{Y}\to\mathbb{C}\) be a regular function, let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous. Let \(\mathcal{X}_{0}\) and \(\mathcal{Y}_{0}\) be the (derived) zero loci of \(g\) and \(f\), respectively. Let \(i,\ell\in\mathbb{Z}\). Then \(h\) induces a pullback map:_ \[h^{*}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{ MF}(\mathcal{Y},f))\to\operatorname{gr}_{\ell+d}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{X},g)).\] _If \(h\) is proper, then there is a pushforward map:_ \[h_{*}\colon\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{ MF}(\mathcal{X},g))\to\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}( \operatorname{MF}(\mathcal{Y},f)).\] Proof.: The claim follows from Theorem 4.7 and Proposition 3.2. For future reference, we also state explicitly the compatibility of the Chern character maps with Knörrer periodicity, which is a particular case of Theorem 4.7. **Corollary 4.9**.: _Let \(X\) be a smooth affine variety with an action of a reductive group, let \(\mathcal{X}:=X/G\) and consider a regular function \(f\colon\mathcal{X}\to\mathbb{C}\) with only \(0\) as a critical value. Let \(U\) be a finite dimensional representation of \(G\) and consider the natural pairing \(w\colon U\times U^{\vee}\to\mathbb{C}\). Let \(\mathcal{Y}:=(X\times U\times U^{\vee})/G\) and consider the regular function \(f+w\colon\mathcal{Y}\to\mathbb{C}\), where \(f\) and \(w\) are pulled-back from \(X\) and \(U\times U^{\vee}\), respectively. Consider the natural maps:_ \[X\overset{v}{\leftarrow}X\times U\overset{s}{\hookrightarrow}X\times U\times U ^{\vee}\] _where \(v\) is the projection and \(s(x,u)=(x,u,0)\). Let \(\operatorname{ch}^{\prime}:=\operatorname{ch}\cdot\operatorname{td}(T_{s})\), where \(T_{s}\) is the relative tangent complex of \(s\). The following diagram commutes:_ \[\begin{CD}K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))@>{s _{*}v^{*}}>{}>K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{Y},f+w)) \\ @V{}V{\operatorname{ch}^{\prime}}V@V{}V{\operatorname{ch}}V\\ \widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{inv}})@>{s_{*}v^{*}}> {}>\widetilde{H}^{i}(\mathcal{Y},\varphi_{f+w}^{\operatorname{inv}}).\end{CD}\] Note that the horizontal maps are isomorphisms by the Thom-Sebastiani theorem, see the proofs of Propositions 6.11 and 6.13. The top horizontal map is called Knörrer periodicity [11, 12]. ### Complements #### 4.4.1. Injectivity of the cycle map The Chern characters (3.1), (3.3), or (4.14) may not be injective when \(\mathcal{X}\) is a stack. However, they are all isomorphisms when \(\mathcal{X}\) is a variety. 
In some cases of interest, we can show that (4.14) is injective for \(\mathcal{X}\) a stack using the following propositions. **Proposition 4.10**.: _Let \(\mathcal{X}\) be a smooth quotient stack and let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function. Let \(\mathbb{S}\) be a subcategory of \(\operatorname{MF}(\mathcal{X},f)\). Assume there exists a smooth variety \(Y\) and a morphism \(r\colon Y\to\mathcal{X}\) such that \(r^{*}\colon\mathbb{S}\to\operatorname{MF}(Y,g)\) is (left or right) admissible, where \(g:=f\circ r\). Let \(i\in\mathbb{Z}\). Then the Chern character_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{S})\to K_{i}^{ \operatorname{top}}(\operatorname{MF}(\mathcal{X},f))\to\widetilde{H}^{i}( \mathcal{X},\varphi_{f}^{\operatorname{inv}})\] _is injective._ Proof.: The pullback map \(r^{*}\colon K_{i}^{\operatorname{top}}(\mathbb{S})\hookrightarrow K_{i}^{ \operatorname{top}}(\operatorname{MF}(Y,g))\) is injective. The claim then follows from the diagram: **Proposition 4.11**.: _Let \(\mathcal{X}\) be a smooth quotient stack and let \(f\colon\mathcal{X}\to\mathbb{C}\) be a regular function. Assume there is a semiorthogonal decomposition \(\operatorname{MF}(\mathcal{X},f)=\langle\mathbb{B}_{i}\mid i\in I\rangle\) and a collection of finite subsets \(I_{n}\subset I\) for \(n\in\mathbb{N}\) with the following two properties:_ * _for any finite subset_ \(S\subset I\)_, there exists_ \(n\in\mathbb{N}\) _such that_ \(S\subset I_{n}\)_,_ * _for all_ \(n\in\mathbb{N}\)_, there exists a smooth variety_ \(Y_{n}\) _and a morphism_ \(r_{n}\colon Y_{n}\to\mathcal{X}\) _such that the category_ \(\mathbb{B}^{n}:=\langle\mathbb{B}_{i}\mid i\in I_{n}\rangle\) _is (left or right) admissible in_ \(\operatorname{MF}(Y_{n},f\circ r_{n})\) _via_ \(r_{n}^{*}\)_._ _Let \(i\in\mathbb{Z}\). Then the Chern character_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\operatorname{MF}( \mathcal{X},f))\to\widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{ inv}})\] _is injective._ Proof.: Let \(x\in K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},f))=\bigoplus_{j \in I}K_{i}^{\operatorname{top}}(\mathbb{B}_{j})\). Let \(S\subset I\) be a finite set such that \(x\in\bigoplus_{j\in S}K_{i}^{\operatorname{top}}(\mathbb{B}_{j})\). Then there exists \(n\) such that \(x\in K^{\operatorname{top}}(\mathbb{B}^{n})\). The Chern character \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{B}^{n})\to \widetilde{H}^{i}(\mathcal{X},\varphi_{f}^{\operatorname{inv}})\] is injective by Proposition 4.10, and the claim follows. #### 4.4.2. Action of exterior algebra on the K-theory of matrix factorizations Denote by \(p:=\operatorname{Spec}\mathbb{C}\). The following computation follows as in Proposition 4.1. **Lemma 4.12**.: _As a \(\mathbb{Z}/2\)-algebra, we have_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(p,0))=\Lambda:=\mathbb{Q}[\epsilon]\] _where \(\epsilon\) has degree one._ Note that, for any regular function on a smooth stack \(h\colon\mathcal{Y}\to\mathbb{C}\), the category \(\operatorname{MF}(\mathcal{Y},h)\) is a module over \(\operatorname{MF}(p,0)\), so \(K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{Y},h))\) is a \(\mathbb{Z}/2\)-graded \(\Lambda\)-module by Lemma 4.12. **Proposition 4.13**.: _Let \(\mathcal{X}\) be a smooth stack. Then_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X},0))\cong K_{\cdot }^{\operatorname{top}}(\mathcal{X})\otimes_{\mathbb{Q}}\Lambda\] _as \(\Lambda\)-modules. 
Then, if \(\mathbb{M}\subset D^{b}(\mathcal{X})\) is an admissible subcategory of \(D^{b}(\mathcal{X})\), there is an isomorphism of \(\Lambda\)-modules:_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}(\mathbb{M},0))\cong K_{\cdot }^{\operatorname{top}}(\mathbb{M})\otimes_{\mathbb{Q}}\Lambda.\] Proof.: It suffices to prove the first isomorphism. Let \(\mathcal{X}_{0}\) be the derived zero locus of \(0\colon\mathcal{X}\to\mathbb{C}\). By the long exact sequence (4.3), it suffices to show that the map \(\alpha^{\prime}\colon\operatorname{Perf}(\mathcal{X}_{0})\to D^{b}(\mathcal{X} _{0})\) induces the zero map: \[\alpha^{\prime}\colon K_{\bullet}^{\operatorname{top}}(\mathcal{X}_{0})\to G _{\bullet}^{\operatorname{top}}(\mathcal{X}_{0}),\] which we showed in the proof of Proposition 4.1. #### 4.4.3. The Chern character for the algebraic K-theory of matrix factorizations Consider the natural transformation \[\gamma\colon K_{0}^{\operatorname{alg}}:=K_{0}\to K_{0}^{\operatorname{top}}\] from algebraic K-theory to topological K-theory [1, Remark 4.14]. For a quotient stack \(\mathcal{X}=X/G\), where \(G\) is a reductive group acting on a smooth affine scheme \(X\), there is a Chern character: \[\operatorname{ch}^{\operatorname{alg}}\colon K_{0}^{\operatorname{alg}}( \operatorname{MF}(\mathcal{X},f))\xrightarrow{\gamma}K_{0}^{\operatorname{ top}}(\operatorname{MF}(\mathcal{X},f))\xrightarrow{\operatorname{ch}}\widetilde{H}^{0}( \mathcal{X},\varphi_{f}^{\operatorname{inv}}).\] We next state an algebraic version of the GRR theorem 4.7. **Theorem 4.14**.: _Let \(h\colon\mathcal{X}\to\mathcal{Y}\) be a morphism of smooth quotient stacks. Consider a regular function \(f\colon\mathcal{Y}\to\mathbb{C}\), let \(g:=f\circ h\), and assume that \(f\) and \(g\) are quasi-homogeneous._ _(a) The following diagram commutes:_ _(b) Assume \(h\) is proper. Let \(\operatorname{td}(T_{h})\in\widetilde{H}^{0}(\mathcal{X}_{0})\) be the Todd class of the virtual tangent bundle \(T_{h}\) of \(h\), and let \(h^{\prime}_{*}(-):=h_{*}(\operatorname{td}(T_{h})\cdot(-))\). Then the following diagram commutes:_ Proof.: Both claims follow from Theorem 4.7 and the commutativity of \(\gamma\) with \(h^{*}\) and \(h_{*}\). #### 4.4.4. Graded and ungraded matrix factorizations One can define graded categories of matrix factorizations in more generality than the one used in Subsection 2.7, see below for one example. It is natural to ask for an analogue of Proposition 4.5 for categories of graded matrix factorizations. We do not know how to answer this question for general graded categories, but we study some examples in Section 5. We mention a theorem of Brown-Dyckerhoff in [1, Theorem 1.3] which computes the topological K-theory for a class of graded matrix factorizations not covered by our methods. Let \(f\colon\mathbb{C}^{n}\to\mathbb{C}\) be a homogeneous polynomial of degree \(d\). Let \(\mathbb{C}^{*}\) act on \(\mathbb{C}^{n}\) with weight \(1\). Consider category \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n},f)\) with objects of the form \[(\alpha\colon\mathcal{F}\rightleftarrows\mathcal{G}\colon\beta),\ \alpha\circ\beta=\beta\circ\alpha=\times f,\] where \(\alpha\) is homogeneous of degree \(d\) and \(\beta\) is homogeneous of degree zero. 
For each \(i\in\mathbb{Z}\), there are isomorphisms: \[K^{\operatorname{top}}_{\mu_{d},i}(\mathbb{C}^{n},f^{-1}(1))\overset{\sim}{ \to}K^{\operatorname{top}}_{i}(\operatorname{MF}^{\operatorname{gr}}(\mathbb{ C}^{n},f)),\] where the left hand side is the \(\mu_{d}\)-equivariant relative topological K-theory space, see loc. cit. for more details. Note that, for a homogeneous polynomial, the vanishing cycle cohomology can be computed in terms of relative cohomology [1, Proposition 6.4]. We do not have an alternative proof of [1, Theorem 1.3]. However, we note the following relation between graded and ungraded matrix factorizations, that may be used in conjunction with excision arguments for computation, but which we do not use later in the paper. **Proposition 4.15**.: _Let \(\mathbb{C}^{*}\) act on \(\mathbb{C}^{n+1}\) with weight \(1\), consider the grading with respect to this weight, and by abuse of notation denote by \(f\colon\mathbb{C}^{n}\times\mathbb{C}^{*}\overset{\pi_{1}}{\to}\mathbb{C}^{n} \overset{f}{\to}\mathbb{C}\). There is an equivalence_ \[\operatorname{MF}(\mathbb{C}^{n},f)\overset{\sim}{\to}\operatorname{MF}^{ \operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f). \tag{4.19}\] Proof.: We have the isomorphism of stacks \[p\colon(\mathbb{C}^{n}\times\mathbb{C}^{*})/\mathbb{C}^{*}\overset{\sim}{\to }\mathbb{C}^{n},\ (x_{1},\dots,x_{n},t)\mapsto(t^{-1}x_{1},\dots,t^{-1}x_{n}).\] For an object \((\alpha\colon\mathcal{F}\rightleftarrows\mathcal{G}\colon\beta)\) in \(\operatorname{MF}(\mathbb{C}^{n},f)\), we associate the object \[(\alpha^{\prime}\colon p^{*}\mathcal{F}\rightleftarrows p^{*}\mathcal{G} \colon\beta^{\prime}),\ \alpha^{\prime}=t^{d}p^{*}\alpha,\ \beta^{\prime}=p^{*}\beta\] in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\). Note that \(\alpha^{\prime}\) is degree \(d\) and \(\beta^{\prime}\) is degree zero. Since \(\alpha\circ\beta=\beta\circ\alpha=\times f\) and \(p^{*}f=t^{-d}f\), we have \(\alpha^{\prime}\circ\beta^{\prime}=\beta^{\prime}\circ\alpha^{\prime}=\times f\), so it determines an object in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\). Conversely, given an object \((\gamma\colon\mathcal{P}\rightleftarrows\mathcal{Q}\colon\delta)\) in \(\operatorname{MF}^{\operatorname{gr}}(\mathbb{C}^{n}\times\mathbb{C}^{*},f)\), we associate the object \[(\gamma^{\prime}\colon\mathcal{P}_{0}\rightleftarrows\mathcal{Q}_{0}\colon \delta^{\prime}),\ \gamma^{\prime}=t^{-d}\gamma_{0},\ \delta^{\prime}=\delta_{0}\] in \(\operatorname{MF}(\mathbb{C}^{n},f)\). In the above, the subscript \(0\) means taking the degree zero part and the morphism \(\gamma^{\prime}\) is \(\mathcal{P}_{0}\stackrel{{\gamma_{0}}}{{\to}}\mathcal{Q}_{d} \stackrel{{ t^{-d}}}{{\to}}\mathcal{Q}_{0}\). It is easy to see that the above correspondences give mutually inverse functors, giving the equivalence (4.19). ## 5. Dimensional reduction In this section, we show that the Koszul equivalence (2.11) and the dimensional reduction in cohomology are compatible via the Chern character map. We will use these compatibilities in Subsection 7 to compute the topological K-theory of preprojective quasi-BPS categories from the topological K-theory of quasi-BPS categories of tripled quivers with potential. ### Dimensional reduction Recall the setting of the Koszul equivalence from Subsection 2.8. We will use the notations of the various maps from the diagram (2.13). 
In this subsection, we review the dimensional reduction theorem in cohomology due to Davison [16, Theorem A.1] (note that, to obtain the maps in loc. cit., one needs to precompose all the following maps by \(l_{*}\), see the isomorphism (2.6)). For \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{E}^{\vee}_{0})\), there is a natural isomorphism: \[\varphi_{f}[-1]\iota_{*}\bullet\stackrel{{\sim}}{{\to}}\iota_{*} \bullet. \tag{5.1}\] For \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\), there is a natural transformation \[\eta^{*}\bullet\to\eta^{*}j_{*}j^{*}\bullet. \tag{5.2}\] The natural transformations (5.2) and (5.1) induce a natural transformation for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\varphi_{f}[-1]\eta^{*}\bullet\to\varphi_{f}[-1](\eta^{*}j_{*}j^{*}\bullet)= \eta^{*}j_{*}j^{*}\bullet.\] The dimensional reduction isomorphism in cohomology [16, Theorem A.1] is the following natural isomorphism for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\): \[\eta_{!}\varphi_{f}[-1]\eta^{*}\bullet\stackrel{{\sim}}{{\to}} \eta_{!}\eta^{*}j_{*}j^{*}\bullet.\] By taking the Verdier dual of the above natural isomorphism, we obtain: \[\eta_{*}j^{\prime}_{*}j^{\prime !}\eta^{!}\bullet=\eta_{*}\eta ^{!}j_{*}j^{!}\bullet\stackrel{{\sim}}{{\to}}\eta_{*}\varphi_{f} [-1]\eta^{!}\bullet, \tag{5.3}\] which alternatively can be described as applying the functor \(\eta_{*}\varphi_{f}[-1]\) to the natural transformation \(j^{\prime}_{*}j^{\prime!}\eta^{!}\bullet\to\eta^{!}\bullet\) for \(\bullet\in D^{b}_{\operatorname{con}}(\mathcal{X})\). By taking the cohomology of the two sides in (5.3), one obtains the _dimensional reduction_ isomorphism: \[j^{\prime}_{*}\eta^{\prime*}\colon H^{\operatorname{BM}}_{i}(\mathcal{K}) \stackrel{{\sim}}{{\to}}H^{\operatorname{BM}}_{i+2r}(\mathcal{E} ^{\vee}|_{\mathcal{K}})\stackrel{{\sim}}{{\to}}H^{2\dim\mathcal{E}-2r-i} (\mathcal{E}^{\vee},\varphi_{f}\mathbb{Q}[-1]). \tag{5.4}\] Further, the monodromy acts trivially on the right hand side of (5.4). The isomorphism (5.3) factors through: \[\eta_{*}j^{\prime}_{*}j^{\prime!}\eta^{!}\bullet\to\eta_{*}\iota_{*}\iota^{! }\eta^{!}\bullet=\eta_{*}\varphi_{f}[-1]\iota_{*}\iota^{!}\eta^{!}\bullet\to \eta_{*}\varphi_{f}[-1]\eta^{!}\bullet.\] Recall that \(\varphi_{f}^{\operatorname{inv}}\) is the cone of the map \(\alpha\colon\iota_{*}\mathbb{Q}_{\mathcal{E}^{\vee}_{0}}\to\iota_{*}\iota^{! }\mathbb{Q}_{\mathcal{E}^{\vee}}[2]\), see (4.8). From the diagram (4.7), the map \(\eta_{*}\iota_{*}\iota^{!}\mathbb{Q}_{\mathcal{E}^{\vee}}[2]\to\eta_{*}\varphi _{f}\mathbb{Q}_{\mathcal{E}^{\vee}}[1]\) factors through \(\eta_{*}\varphi_{f}^{\operatorname{inv}}\), and thus there are maps whose composition is an isomorphism: \[\beta^{\prime}\colon\eta_{*}j^{\prime}_{*}\omega_{\mathcal{E}^{\vee}|_{ \mathcal{K}}}\to\eta_{*}\iota_{*}\iota^{!}\omega_{\mathcal{E}^{\vee}}\to\eta_{* }\varphi_{f}^{\operatorname{inv}}[2\dim\mathcal{E}^{\vee}-2]\stackrel{{ \beta}}{{\to}}\eta_{*}\varphi_{f}\omega_{\mathcal{E}^{\vee}}[-1]. \tag{5.5}\] We let \(\beta^{\diamond}\colon\eta_{*}j^{\prime}_{*}\omega_{\mathcal{E}^{\vee}|_{\mathcal{K }}}\to\eta_{*}\iota_{*}\iota^{!}\omega_{\mathcal{E}^{\vee}}\to\eta_{*}\varphi_{ f}^{\mathrm{inv}}[2\dim\mathcal{E}^{\vee}-2]\). The map \(\beta^{\diamond}\) provides a splitting of the map \(\beta\), thus the triangle (4.6) becomes the natural isomorphism: \[\eta_{*}\varphi_{f}^{\mathrm{inv}}[-2]\cong\eta_{*}\varphi_{f}[-2]\oplus\eta_{* }\varphi_{f}[-1]. 
\tag{5.6}\] By taking global sections of this isomorphism, there is a natural injective map: \[\gamma\colon H^{\bullet}(\mathcal{E}^{\vee},\varphi_{f}[-1])\hookrightarrow H ^{\bullet}(\mathcal{E}^{\vee},\varphi_{f}^{\mathrm{inv}}[-2]). \tag{5.7}\] We also note that, by taking global sections of the complexes in (5.5), we obtain an isomorphism: \[\beta^{\prime}\colon H^{\mathrm{BM}}_{i}(\mathcal{K}) \xrightarrow{\sim}H^{\mathrm{BM}}_{i+2r}(\mathcal{E}^{\vee}|_{ \mathcal{X}})\to H^{\mathrm{BM}}_{i+2r}(\mathcal{E}^{\vee}_{0})\to H^{2\dim \mathcal{E}-2r-i}(\mathcal{E}^{\vee},\varphi_{f}^{\mathrm{inv}}[-2])\] \[\to H^{2\dim\mathcal{E}-2r-i}(\mathcal{E}^{\vee},\varphi_{f} \mathbb{Q}[-1])=H^{2\dim\mathcal{X}-i}(\mathcal{E}^{\vee},\varphi_{f}\mathbb{Q }[-1]). \tag{5.8}\] Note that the composition of the maps on the top row on (5.8) is given by \(\beta^{\diamond}\). ### The Chern character for graded matrix factorizations The purpose of this subsection is to construct a Chern character map: \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{ \vee},f))\to\widetilde{H}^{i}(\mathcal{E}^{\vee},\varphi_{f}[-1]) \tag{5.9}\] compatible with the Chern character map (3.8) for \(\mathcal{K}\) and the Chern character map (4.14) for \(\mathrm{MF}(\mathcal{E}^{\vee},f)\), see Proposition 5.1. We begin with a few preliminaries. Recall the forget-the-grading functor \[\Theta\colon\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f)\to\mathrm{MF}( \mathcal{E}^{\vee},f) \tag{5.10}\] and the equivalence between matrix factorizations and categories of singularities [10, 1]: \[\mathrm{MF}(\mathcal{E}^{\vee},f)\xrightarrow{\sim}D_{\mathrm{sg}}(\mathcal{ E}^{\vee}_{0}).\] The following diagram commutes: By the isomorphism (2.6), we obtain a commutative diagram, and we define \(\Psi\) as the resulting map: (5.11) Recall the Chern character (4.17) and the splitting (5.8). Let \(N\) be the normal bundle of \(\mathcal{K}\hookrightarrow\mathcal{E}^{\vee}\) and let \(M\) be the normal bundle of \(\mathcal{E}^{\vee}_{0}\hookrightarrow\mathcal{E}^{\vee}\). Let \(\mathrm{ch}^{\prime}:=\mathrm{ch}\cdot\mathrm{td}(N)\) and \(\mathrm{ch}^{\prime\prime}:=\mathrm{ch}\cdot\mathrm{td}(M)\). Then the following diagram commutes by Proposition 3.5: (5.12) **Proposition 5.1**.: _There is an injective Chern character (5.9) such that, in the following commutative diagram, the horizontal maps are injective:_ (5.13) _and such that the following diagram commutes as well for the modified Chern character for the immersions of \(\mathcal{E}^{\vee}|_{\mathcal{X}}\) and \(\mathcal{E}^{\vee}_{0}\) in \(\mathcal{E}^{\vee}\):_ (5.14) Proof.: Define (5.9) such that the diagram (5.14) commutes. We have that \(\gamma\circ j_{*}^{\prime}\eta^{\prime*}=\beta^{\circ}\) and \(\Theta\circ j_{*}^{\prime}\eta^{\prime*}=\Psi\), so the diagram (5.13) commutes as well. It remains to show that \(\Theta\) is injective. The map \(\beta^{\circ}\) is injective by (5.8). Then \(\Psi\) is also injective by the commutativity of the diagram (5.12). By the factorization \(\Theta\circ j_{*}^{\prime}\eta^{\prime*}=\Psi\), the map \(\Theta\) is indeed injective. 
We define an increasing filtration \(E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f) )\subset K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))\) by \[E_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f) ):=\mathrm{ch}^{-1}\left(H^{\geqslant 2\dim\mathcal{E}^{\vee}-i-2\ell}( \mathcal{E}^{\vee},\varphi_{f}[-1])\right).\] We obtain cycle maps: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathrm{MF}^{\mathrm{gr }}(\mathcal{E}^{\vee},f))\xrightarrow{\sim}H^{2\dim\mathcal{E}^{\vee}-i-2\ell} (\mathcal{E}^{\vee},\varphi_{f}[-1]).\] The above cycle maps are isomorphisms by the isomoprhim (3.11) together with the commutative diagram (5.14). The following is a corollary of Propositions 4.10, 4.5, and 5.1: **Proposition 5.2**.: _The following diagram commutes, where all the cycle maps are isomorphisms:_ \[\begin{CD}\mathrm{gr}_{\ell}\mathcal{G}_{i}^{\mathrm{top}}(\mathcal{X})@>{j_ {*}^{\prime}\eta^{\prime*}}>{}>\mathrm{gr}_{\ell+r}K_{i}^{\mathrm{top}}( \mathrm{MF}^{\mathrm{gr}}(\mathcal{E}^{\vee},f))@<{}<{}<\mathrm{gr}_{\ell+r}K _{i}^{\mathrm{top}}(\mathrm{MF}(\mathcal{E}^{\vee},f))\\ @V{\downarrow}V{}V@V{\downarrow}V{}V\\ H_{i+2\ell}^{\mathrm{BM}}(\mathcal{X})@>{j_{*}^{\prime}\eta^{\prime*}}>{}> \wedge H^{2\dim\mathcal{X}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}[-1])@<{ \gamma}>{}>H^{2\dim\mathcal{X}-i-2\ell}(\mathcal{E}^{\vee},\varphi_{f}^{ \mathrm{inv}}[-2]).\end{CD}\] Proof.: The modified Chern characters \(\operatorname{ch}^{\prime}\) and \(\operatorname{ch}^{\prime\prime}\) induce the cycle maps \(\operatorname{c}\) on the associated graded, see Proposition 3.2. **Remark 5.3**.: Recall that \(K_{\cdot}^{\operatorname{top}}(\mathcal{E}^{\vee},f)\) is a \(\Lambda=\mathbb{Q}[\epsilon]\)-module, where \(\epsilon\) has degree \(1\). We include the following computation of a \(\Lambda\)-module structure on the topological K-theory of a category of matrix factorizations, see also Proposition 4.13, but note that we do not use it later in the paper. **Proposition 5.4**.: _The forget-the-potential functor induces an isomorphism_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}^{\operatorname{gr}}( \mathcal{E}^{\vee},f))\otimes_{\mathbb{Q}}\Lambda\xrightarrow{\sim}K_{\cdot}^{ \operatorname{top}}(\operatorname{MF}(\mathcal{E}^{\vee},f)) \tag{5.15}\] _of \(\Lambda\)-modules. Thus, if \(\mathbb{M}\) is admissible in \(D^{b}(\mathcal{E}^{\vee})\), there is an isomorphism of \(\Lambda\)-modules:_ \[K_{\cdot}^{\operatorname{top}}(\operatorname{MF}^{\operatorname{gr}}( \mathbb{M},f))\otimes_{\mathbb{Q}}\Lambda\xrightarrow{\sim}K_{\cdot}^{ \operatorname{top}}(\operatorname{MF}(\mathbb{M},f)).\] Proof.: It is enough to prove (5.15). Let \(p:=\operatorname{Spec}\mathbb{C}\) and let \(r:=\operatorname{Spec}\Lambda\). Recall the Koszul equivalence (2.14). Using [11, Proposition 3.24] (also see [13, Proposition 3.9] and note that \(\operatorname{MF}(p,0)\simeq D^{b}(r)/\operatorname{Perf}(r)\)), the equivalence (2.14) induces an equivalence: \[\kappa^{\prime}\colon D^{b}(\mathcal{K})\otimes_{D^{b}(p)}\operatorname{MF}(p,0)\xrightarrow{\sim}\operatorname{MF}(\mathcal{E}^{\vee},f).\] Let \(\mathcal{K}_{0}:=\mathcal{K}\times r\) and let \(\pi\colon\mathcal{K}_{0}\to\mathcal{K}\) and \(t\colon r\to p\) be the natural projections. We have that \(\operatorname{MF}(p,0)\cong D^{b}(r)/t^{*}(D^{b}(p))\). 
Then \[D^{b}(\mathcal{K})\otimes_{D^{b}(p)}\operatorname{MF}(p,0)\cong D^{b}( \mathcal{K}_{0})/\pi^{*}(D^{b}(\mathcal{K})).\] It suffices to show that the map \[\pi^{*}\colon G_{i}^{\operatorname{top}}(\mathcal{K})\to G_{i}^{\operatorname {top}}(\mathcal{K}_{0})\cong G_{i}^{\operatorname{top}}(\mathcal{K})\] is zero, which follows as in the proof of Proposition 4.1. ## 6. Topological K-theory of quasi-BPS categories for quivers with potential In this section, we compute the topological K-theory of quasi-BPS categories for symmetric quivers satisfying Assumption 2.1 with a quasi-homogeneous potential in terms of BPS cohomology, see Theorem 6.2. The main step in the proof of Theorem 6.2 is the construction of the cycle map from topological K-theory of quasi-BPS categories to BPS cohomology, see Theorem 6.3 (which holds for all symmetric quivers). The conclusion then follows by comparing the decomposition of DT invariants in BPS invariants of Meinhardt-Reineke (which also holds for all symmetric quivers) and and the semiorthogonal decomposition of the variety of framed representations from Theorem 2.8. We note that there is a version of Theorem 2.8 for all symmetric quivers, see [PTe]. However, under Assumption 2.1, all quasi-BPS categories appearing in the semiorthogonal decomposition are of the form \(\mathbb{S}(d)_{v}\), which is used crucially in the computation in Subsection 6.3. The construction of the cycle map from Theorem 6.3 holds for all quasi-BPS categories \(\mathbb{S}(d;\delta)\) for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Theorem 6.3 is proved in Subsections 6.6 and is based on the fact that the weight conditions for complexes in \(\mathbb{S}(d;\delta)\) restrict the possible perverse degree of their image under the cycle map, see Proposition 6.15 and Corollaries 6.18 and 6.19. In view of the assumptions in Section 4, we assume throughout this section that the potential \(W\) of \(Q=(I,E)\) is _quasi-homogeneous_, that is, there exists a weight function \(w\colon E\to\mathbb{Z}_{\geqslant 0}\) such that \(W\) is homogeneous of weight \(d>0\) with respect to the function \(w\). ### Statement of the main theorem Before we state Theorem 6.2, we introduce notation related to quasi-BPS categories and BPS sheaves. #### 6.1.1. Quasi-BPS categories Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Consider the category \(\mathbb{M}(d;\delta)\) defined in (2.19) and recall the definition of quasi-BPS categories \(\mathbb{S}(d;\delta)\) from (2.21). For \(\lambda\) a cocharacter of \(T(d)\), recall the definition of \(n_{\lambda}\) from (2.20). For \(\lambda\) a dominant cocharacter of \(T(d)\), define \[\varepsilon_{\lambda,\delta}=\begin{cases}1,\text{ if }\frac{1}{2}n_{\lambda}+ \langle\lambda,\delta\rangle\in\mathbb{Z},\\ 0,\text{ otherwise.}\end{cases} \tag{6.1}\] For a partition \(\mathbf{d}=(d_{i})_{i=1}^{k}\) of \(d\), let \(\varepsilon_{\mathbf{d},\delta}=1\) if \(\varepsilon_{\lambda,\delta}=1\) for all cocharacters \(\lambda\) with associated partition \(\mathbf{d}\) and let \(\varepsilon_{\mathbf{d},\delta}=0\) otherwise. #### 6.1.2. Sets of partitions For a dimension vector \(d=(d^{j})_{j\in I}\in\mathbb{N}^{I}\), recall that \(\underline{d}:=\sum_{j\in I}d^{j}\). Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Denote by \(S_{\delta}^{d}\) the set of partitions \(\mathbf{d}=(d_{i})_{i=1}^{k}\) of \(d\) such that \(\varepsilon_{\mathbf{d},\delta}=1\), where \(\lambda\) is any antidominant cocharacter with associated partition \((d_{i})_{i=1}^{k}\). 
If \(\delta=v\tau_{d}\), we use the notation \(S_{v}^{d}\) instead of \(S_{v\tau_{d}}^{d}\). Consider \((d_{i})_{i=1}^{k}\in S_{\delta}^{d}\), and an antidominant cocharacter with associated partition \((d_{i})_{i=1}^{k}\). Define \(\theta_{i}\in\frac{1}{2}M(d_{i})\) with \[\sum_{i=1}^{k}\theta_{i}=-\frac{1}{2}R(d)^{\lambda>0}+\frac{1}{2}\mathfrak{g} (d)^{\lambda>0}.\] Let \(\delta_{d_{i}}\in M(d_{i})_{\mathbb{R}}\) such that \(\sum_{i=1}^{k}\delta_{d_{i}}=\delta\). Then the Hall product induces a functor \[m=m_{\lambda}\colon\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_ {i}})\to\mathbb{M}(d;\delta)\] and similarly for categories of matrix factorizations, see [10, Propositions 3.5 and 3.6] (in loc. cit. and using the notations used there, [10, Proposition 3.6] is stated that \(r>\frac{1}{2}\), but for \(r=\frac{1}{2}\) it is still true that \(\chi-\sigma_{I}\in\frac{1}{2}\mathbb{W}\)). If we assume that \(Q\) satisfies Assumption 2.1, then \(\theta_{i}\in M(d_{i})^{W_{d_{i}}}\), and so there are functors, see Remark 2.6: \[\bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\delta_{d_{i}})\overset{\sim}{\to} \bigotimes_{i=1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}})\overset{m}{ \to}\mathbb{M}(d;\delta_{d}). \tag{6.2}\] Assume that \(\delta=v\tau_{d}\) and write \(\delta_{i}=v_{i}\tau_{d_{i}}\) for \(1\leqslant i\leqslant k\). Then \[\frac{v}{\underline{d}}=\frac{v_{i}}{\underline{d}_{i}} \tag{6.3}\] for any \(1\leqslant i\leqslant k\). If we assume that \(Q\) satisfies Assumption 2.1, the Hall product then induces functors, see (6.2): \[\bigotimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\overset{\sim}{\to}\bigotimes_{i =1}^{k}\mathbb{M}(d_{i};\theta_{i}+\delta_{d_{i}})\overset{m}{\to}\mathbb{M} (d)_{v}.\] We end this subsection with the following computation: **Proposition 6.1**.: _Let \(Q=(I,E)\) be a quiver satisfying Assumption 2.1. Let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\). The set \(S_{v}^{d}\) contains all partitions \(\textbf{d}=(d_{i})_{i=1}^{k}\) such that_ \[\underline{d}|v\cdot\gcd(\underline{d}_{1},\ldots,\underline{d}_{k}).\] _In particular, if \(\gcd(v,\underline{d})=1\), then \(S_{v}^{d}\) contains only the one term partition of \(d\)._ Proof.: Let \(d=(d^{a})_{a\in I}\in\mathbb{N}^{I}\). Note that \(n_{\lambda}=\langle\lambda,\mathbb{L}_{\mathcal{X}(d)}^{\lambda>0}|_{0}\rangle \in 2\mathbb{Z}\) because \(Q\) satisfies Assumption 2.1. Then \(\varepsilon_{\textbf{d},v\tau_{d}}=1\) if and only if \(\langle\lambda,vr_{d}\rangle\in\mathbb{Z}\) for all cocharacters \(\lambda\) with associated partition \(\textbf{d}\). Write \(\lambda=(\lambda^{a})_{a\in I}\), where \(\lambda^{a}\colon\mathbb{C}^{*}\to T(d^{a})\) is a cocharacter \[\lambda(t)=(t^{m_{1}},\ldots,t^{m_{1}},t^{m_{2}},\ldots,t^{m_{2}},\ldots,t^{m _{k}}),\] where \(m_{i}\) appears \(d_{i}^{a}\)-times, and \(m_{i}\neq m_{j}\) for \(1\leqslant i\neq j\leqslant k\). Then the condition \(\langle\lambda,vr_{d}\rangle\in\mathbb{Z}\) is equivalent to that \[v/\underline{d}\cdot\sum_{i=1}^{k}m_{i}\underline{d}_{i}\in\mathbb{Z}\] for all tuples of pairwise distinct integers \((m_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\), which implies the desired conclusion. #### 6.1.3. BPS sheaves and cohomologies Let \(Q=(I,E)\) be a symmetric quiver, let \(W\) be a potential of \(Q\), and let \(d\in\mathbb{N}^{I}\). 
Consider the stack of dimension \(d\) representations of \(Q\) and its good moduli space: \[\pi_{d}\colon\mathcal{X}(d):=R(d)/G(d)\to X(d):=R(d)/\!\!/G(d).\] We denote by \(\mathrm{IC}:=\mathrm{IC}_{\mathcal{X}(d)}=\mathbb{Q}_{\mathcal{X}(d)}[\dim \mathcal{X}(d)]\) and we may drop \(\mathrm{Tr}\,W\) from the notation of the vanishing cycle functor. Recall that \(\varphi:=\varphi_{\mathrm{Tr}\,W}\mathbb{Q}_{\mathcal{X}(d)}\). Following [1], define the BPS sheaf \[\mathcal{BPS}_{d}:=\begin{cases}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{X(d)}[-1 ],\text{ if }X(d)^{\mathrm{st}}\neq\emptyset,\\ 0,\text{ if }X(d)^{\mathrm{st}}=\emptyset.\end{cases}\] Note that \(\mathcal{BPS}_{d}\in\mathrm{Perv}(X(d))\). Consider a partition \(A=(d_{i})_{i=1}^{k}\) of \(d\). Let \(\ell(A):=k\). Assume the set \(\{d_{1},\ldots,d_{k}\}=\{e_{1},\ldots,e_{s}\}\) has cardinality \(s\) and that, for each \(1\leqslant i\leqslant s\), there are \(m_{i}\) elements in \(\{d_{1},\ldots,d_{k}\}\) equal to \(e_{i}\). Define the addition maps \(\oplus_{i}\colon X(e_{i})^{\times m_{i}}\to X(m_{i}e_{i})\) for \(1\leqslant i\leqslant s\) and \(\oplus^{\prime}\colon\times_{i=1}^{s}X(m_{i}e_{i})\to X(d)\), which are finite. Define the sheaves: \[\mathrm{Sym}^{m_{i}}\big{(}\mathcal{BPS}_{e_{i}}\big{)} :=\oplus_{i,*}\left(\mathcal{BPS}_{e_{i}}^{\boxtimes m_{i}} \right)^{\mathfrak{S}_{m_{i}}}\in\mathrm{Perv}(X(m_{i}e_{i})), \tag{6.4}\] \[\mathcal{BPS}_{A} :=\oplus_{*}^{\prime}\left(\boxtimes_{i=1}^{s}\mathrm{Sym}^{m_{i} }(\mathcal{BPS}_{e_{i}})\right)\in\mathrm{Perv}(X(d)).\] Alternatively, by the Thom-Sebastiani theorem, the sheaf \(\mathcal{BPS}_{A}\) has the following description. Let \(\mathcal{BPS}_{A}^{0}\) be the sheaf defined above for \(W=0\). Then \(\mathcal{BPS}_{A}=\varphi_{\mathrm{Tr}\,W}\mathcal{BPS}_{A}^{0}[-1]\). Define the complexes \[\mathcal{BPS}_{d,\delta} :=\bigoplus_{A\in S_{s}^{d}}\mathcal{BPS}_{A}[-\ell(A)]\in D_{ \mathrm{con}}^{b}(X(d)),\] \[\mathcal{BPS}_{d,v} :=\mathcal{BPS}_{d,vr_{d}}\in D_{\mathrm{con}}^{b}(X(d)). \tag{6.5}\] As we will see in (6.18), the complexes \(\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{d,\delta}\) are direct summands of \(\pi_{*}\varphi\mathrm{IC}[-1]\) preserved by \(1-\mathrm{T}\). Define \(\mathcal{BPS}^{\mathrm{inv}}_{A},\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}\in D ^{b}_{\mathrm{con}}(X(d))\) by the exact triangles: \[\mathcal{BPS}^{\mathrm{inv}}_{A}[-1] \to\mathcal{BPS}_{A}\xrightarrow{1-\mathrm{T}}\mathcal{BPS}_{A} \to\mathcal{BPS}^{\mathrm{inv}}_{A},\] \[\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}[-1] \to\mathcal{BPS}_{d,\delta}\xrightarrow{1-\mathrm{T}}\mathcal{BPS }_{d,\delta}\to\mathcal{BPS}^{\mathrm{inv}}_{d,\delta}.\] #### 6.1.4. Statement of the main theorem Let \(Q=(I,E)\) be a symmetric quiver and let \(W\) be a quasi-homogeneous potential. Consider the Chern character map (4.14): \[\mathrm{ch}\colon K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v})\hookrightarrow K^{ \mathrm{top}}_{i}(\mathrm{MF}(\mathcal{X}(d),\mathrm{Tr}\,W))\to\widetilde{H} ^{i}(\mathcal{X}(d),\varphi^{\mathrm{inv}}_{\mathrm{Tr}\,W}). 
\tag{6.6}\] Recall (4.15) and define the filtration: \[E_{\ell}K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v}):=K^{\mathrm{top}}_{i}(\mathbb{ S}(d)_{v})\cap E_{\ell}K^{\mathrm{top}}_{i}(\mathrm{MF}(\mathcal{X}(d),\mathrm{ Tr}\,W))\subset K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v}).\] There is an injective cycle map on the associated graded pieces: \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}(\mathbb{S} (d)_{v}) \to H^{2\dim\mathcal{X}(d)-2\ell-i}(\mathcal{X}(d),\varphi^{\mathrm{ inv}}_{\mathrm{Tr}\,W}[-2])\] \[\xrightarrow{\sim}H^{\dim\mathcal{X}(d)-2\ell-i}(\mathcal{X}(d), \varphi^{\mathrm{inv}}_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-2]), \tag{6.7}\] where we used \(\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}=\varphi_{\mathrm{Tr}\,W}[ \dim\mathcal{X}(d)]\) for computing the cohomological degree. The following is the main result of this section: **Theorem 6.2**.: _Assume the quiver \(Q\) satisfies Assumption 2.1 and let \(W\) be a quasi-homogeneous potential of \(Q\). Then the cycle map (6.7) induces an isomorphisms for \(i,\ell\in\mathbb{Z}\):_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}\left(\mathbb{S}(d)_{v} \right)\xrightarrow{\sim}H^{\dim\mathcal{X}(d)-2\ell-i}(X(d),\mathcal{BPS}^{ \mathrm{inv}}_{d,v}[-1]). \tag{6.8}\] The main part of proving Theorem 6.2 is the construction of a cycle map from the topological K-theory of quasi-BPS categories to BPS cohomology, which applies to all symmetric quivers \(Q\). **Theorem 6.3**.: _Let \(Q\) be an arbitrary symmetric quiver and let \(W\) be a quasi-homogeneous potential of \(Q\). Let \(d\in\mathbb{N}^{I}\), \(\delta\in M(d)^{W_{d}}_{\mathbb{R}}\), and \(i,\ell\in\mathbb{Z}\). The cycle map (6.7) induces a map:_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K^{\mathrm{top}}_{i}\left(\mathbb{S}(d; \delta)\right)\to H^{\dim\mathcal{X}(d)-2\ell-i}\left(X(d),\mathcal{BPS}^{ \mathrm{inv}}_{d,\delta}[-1]\right). \tag{6.9}\] We mention the following numerical corollary of Theorems 6.2 and 6.3. **Corollary 6.4**.: _Let \(Q\) be an arbitrary symmetric quiver and let \(W\) be a quasi-homogeneous potential. Let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\) and let \(i\in\mathbb{Z}\). Then:_ \[\dim_{\mathbb{Q}}K^{\mathrm{top}}_{i}(\mathbb{S}(d)_{v})\leqslant\dim_{ \mathbb{Q}}H^{\cdot}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}. \tag{6.10}\] _If \(Q\) satisfies Assumption 2.1, then equality holds in (6.10)._ When \(\gcd(\underline{d},v)=1\) and \(Q\) satisfies Assumption 2.1, we regard \(\mathbb{S}(d)_{v}\) as a categorification of the monodromy invariant BPS cohomology of \((Q,W)\). Before we prove Corollary 6.4, note the following: **Proposition 6.5**.: _Let \(Q\) be a quiver satisfying Assumption 2.1. The Chern character map (6.6) is injective._ Proof.: It follows from Proposition 4.10 and Theorem 2.8. Proof of Corollary 6.4 from Theorem 6.3.: Note that there is a (non-canonical) isomorphism \[H^{\bullet}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}\cong H^{\bullet}(X(d), \mathcal{BPS}_{d,v})_{\mathrm{inv}}. \tag{6.11}\] The cycle map (6.9) is injective because (6.8) is injective. Then, by Theorem 6.3, we have that: \[\dim_{\mathbb{Q}}\operatorname{gr}.K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v}) \leqslant\dim_{\mathbb{Q}}H^{\cdot}(X(d),\mathcal{BPS}_{d,v})^{\mathrm{inv}}. \tag{6.12}\] If \(Q\) satisfies Assumption 2.1, then (6.12) is an equality. 
It suffices to show that \(\dim_{\mathbb{Q}}K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})=\dim_{\mathbb{Q}} \operatorname{gr}.K_{i}^{\mathrm{top}}(\mathbb{S}(d)_{v})\), equivalently that (6.6) is injective, which is Proposition 6.5. Corollary 1.5 follows easily from Theorem 6.2. Proof of Corollary 1.5.: Note that Proposition 6.1 implies that \(\mathbf{d}=(d_{i})_{i=1}^{k}\in S_{v}^{d}\) if and only if \(\underline{d}/\gcd(\underline{d},v)\) divides \(\underline{d}_{i}\) for \(1\leqslant i\leqslant k\). Then \(S_{v}^{d}=S_{v^{\prime}}^{d}\) for \(v,v^{\prime}\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=\gcd(\underline{d},v^{\prime})\). The statement then follows from Theorem 6.2. In Section 7, we compute the topological K-theory of quasi-BPS categories of preprojective algebras of quivers satisfying Assumption 2.2 using Theorem 6.2, see Theorem 7.6. In [PTd], we further use Theorem 7.6 to compute the topological K-theory of quasi-BPS categories of K3 surfaces. In particular, we obtain categorifications of the BPS cohomology of a large class of preprojective algebras and of K3 surfaces. We end this subsection by discussing the zero potential case of Theorem 6.2. Then \(\mathcal{BPS}_{d}=\mathrm{IC}_{X(d)}\). Denote by \(\mathrm{IH}^{\bullet}(X(d)):=H^{\bullet}\big{(}X(d),\mathrm{IC}_{X(d)}\big{)}\). Note that \(H^{\mathrm{odd}}(\mathfrak{X}(d))=H^{\mathrm{odd}}(\mathfrak{X}(d),\mathrm{IC }_{\mathfrak{X}(d)})=0\). We then have that \(\mathrm{IH}^{\mathrm{even}}(X(d))=0\) because \(\mathrm{IC}_{X(d)}[-1]\) is a direct summand of \(R\pi_{*}\mathrm{IC}_{\mathfrak{X}(d)}\), see (6.16); alternatively, the vanishing \(\mathrm{IH}^{\mathrm{even}}(X(d))=0\) follows from Kirwan surjectivity. By Theorem 6.2 and Proposition 4.13, we obtain the following: **Theorem 6.6**.: _Let \(Q\) be a quiver satisfying Assumption 2.1, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\) such that \(\gcd\,(\underline{d},v)=1\). For \(\ell\in\mathbb{Z}\), the cycle map induces an isomorphism:_ \[\mathrm{c}\colon\operatorname{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v })\xrightarrow{\sim}\mathrm{IH}^{\dim\mathfrak{X}(d)-2\ell-1}(X(d)).\] We note a consequence of Corollary 6.4, alternatively a numerical corollary of Theorem 6.6. **Corollary 6.7**.: _Let \(Q\) be a quiver satisfying Assumption 2.1, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\) such that \(\gcd\,(\underline{d},v)=1\). Then_ \[\dim_{\mathbb{Q}}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})=\dim_{\mathbb{Q}} \mathrm{IH}^{\cdot}(X(d))\] _for any \(i\in\mathbb{Z}\)._ For \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\) and \(Q\) as in the statement of Corollary (6.7), we regard \(\mathbb{M}(d)_{v}\) as a categorification of the intersection cohomology of \(X(d)\). Note that, in general, \(X(d)\) is a singular scheme. ### The decomposition theorem Let \(\alpha\in\mathbb{N}\) and recall the construction of framed quivers \(Q^{\alpha f}\) from Subsection 2.14. We review the explicit computation of summands in the BBDG decomposition theorem [1] for the pushforwards of the constant sheaves along the maps: \[\pi_{\alpha f,d}\colon\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}\to X(d),\ \pi_{d}\colon\mathcal{X}(d)\to X(d)\] due to Meinhardt-Reineke [11] and Davison-Meinhardt [15]. The maps \(\pi_{\alpha f,d}\) "approximate" the map \(\pi_{d}\), see [15, Subsection 4.1]. The computation of \(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\) is deduced from the computation of \(\pi_{\alpha f,d*}\mathbb{Q}_{\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}}[\dim \mathcal{X}(d)]\). 
We introduce some constructible sheaves on \(X(d)\). Let \(A\) be a tuplet \((e_{i},m_{i,a})\) for \(1\leqslant i\leqslant s\) and for \(a\geqslant 0\), with \((e_{i})_{i=1}^{s}\in\mathbb{Z}_{\geqslant 1}^{s}\) pairwise distinct and \(m_{i,a}\geqslant 0\) such that \(\sum_{i=1}^{s}\sum_{a\geqslant 0}e_{i}m_{i,a}=d\). Let \(\mathcal{P}\) be the set of all such tuplets \(A\) and let \(\mathcal{P}_{\alpha}\subset\mathcal{P}\) be the subset of such tuplets with \(m_{i,a}=0\) for \(a\geqslant\alpha e_{i}\). Note that each \(A\) has a corresponding partition with terms \(e_{i}\) with multiplicity \(\sum_{a\geqslant 0}m_{i,a}\) for \(1\leqslant i\leqslant s\). Consider the addition maps: \[\oplus_{i,a}\colon X(e_{i})^{\times m_{i,a}}\to X(m_{i,a}e_{i}),\ \oplus^{\prime}\colon\ \times_{i=1}^{s}\times_{a\geqslant 0}X(m_{i,a}e_{i}) \to X(d). \tag{6.13}\] Define the constructible complexes: \[\mathrm{Sym}^{m_{i,a}}\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right) :=\oplus_{i,a}\left(\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right)^{ \boxtimes m_{i,a}}\right)^{\mathfrak{S}m_{i,a}},\] \[\mathrm{P}_{A} :=\oplus_{*}^{\prime}\left(\boxtimes_{1\leqslant i\leqslant s,a \geqslant 0}\mathrm{Sym}^{m_{i,a}}\left(\mathrm{IC}_{X(e_{i})}[-2a-1]\right) \right).\] Then \(\mathrm{P}_{A}\) is supported on the image of \(\oplus^{\prime}\) and is a shifted perverse sheaf of degree \[p_{A}:=\sum_{i=1}^{k}\sum_{a\geqslant 0}m_{i,a}(2a+1),\] meaning that \(\mathrm{P}_{A}[p_{A}]\in\mathrm{Perv}(X(d))\). Define analogously \[\mathrm{Q}_{A}:=\oplus_{*}^{\prime}\left(\boxtimes_{1\leqslant i\leqslant s,a \geqslant 0}\mathrm{Sym}^{m_{i,a}}\left(\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{X(e _{i})}[-2a-2]\right)\right). \tag{6.14}\] Then one can show, using the Thom-Sebastiani theorem, that \[\mathrm{Q}_{A}=\varphi_{\mathrm{Tr}\,W}\mathrm{P}_{A}[-1].\] Let \(\alpha\) be an even positive natural number. The following explicit form of the BBDG decomposition theorem for \(\pi_{\alpha f,d}\) was determined by Meinhardt-Reineke [11, Proposition 4.3]: \[\pi_{\alpha f,d*}\left(\mathbb{Q}_{\mathcal{X}^{\alpha f}(d)^{\mathrm{ss}}}[ \dim\mathcal{X}(d)]\right)=\bigoplus_{A\in\mathcal{P}_{\alpha}}P_{A}. \tag{6.15}\] The result in loc. cit. is stated as an equality in the Grothendieck group of constructible sheaves, but the above stronger statement holds by the argument in [15, Proof of Theorem 4.10]. Using the above, one can obtain, see [15, Theorem C], the following decomposition: \[\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}=\bigoplus_{A\in\mathcal{P}}\mathrm{P}_{A}. \tag{6.16}\] The proper pushforward commutes with the vanishing cycle functor, so applying the vanishing cycle functor to (6.15) one obtains the following decomposition, which is also called the DT/ BPS wall-crossing: \[\pi_{\alpha f,d*}\varphi_{\mathrm{Tr}\,W}\left(\mathbb{Q}_{\mathcal{X}^{ \alpha f}(d)^{\mathrm{ss}}}[\dim\mathcal{X}(d)-1]\right)=\bigoplus_{A\in \mathcal{P}_{\alpha}}Q_{A}. \tag{6.17}\] The map \(\pi_{d}\) can be approximated by the proper maps \(\pi_{\alpha f,d}\), thus \(\pi_{d*}\) also commutes with the vanishing cycle functor. From (6.16), we obtain: \[\pi_{d*}\varphi_{\operatorname{Tr}W}\mathrm{IC}_{\mathfrak{X}(d)}[-1]=\bigoplus _{A\in\mathcal{P}}Q_{A}. \tag{6.18}\] The summands in all the above decompositions are induced via the Hall product. We now state a corollary of (6.17). **Proposition 6.8**.: _Let \(\alpha\) be an even positive integer and let \(i\in\mathbb{Z}\). 
Then there is an isomorphism of \(\mathbb{N}^{I}\times\mathbb{N}\)-graded vector spaces, where the second grading is the cohomological grading:_ \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{ \alpha f}(d)^{\operatorname{ss}},\varphi\left[\dim\mathfrak{X}(d)-1\right] \right)^{\operatorname{inv}}\cong\\ \left(\operatorname{Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{ \bullet}\left(X(d),\mathcal{BPS}_{d}[-1]\right)\otimes H^{\bullet}(\mathbb{P} ^{\alpha d-1})\right)\right)^{\operatorname{inv}}. \tag{6.19}\] _By forgetting the cohomological grading, there is an isomorphism of \(\mathbb{N}^{I}\)-graded vector spaces:_ \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{\alpha f}(d)^{ \operatorname{ss}},\varphi\right)^{\operatorname{inv}}\cong\left(\operatorname {Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot}\left(X(d),\mathcal{BPS}_{ d}\right)^{\oplus\alpha d}\right)\right)^{\operatorname{inv}}.\] Proof.: By taking global sections of the two sides of (6.17), we obtain an isomorphism: \[\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet}\left(\mathfrak{X}^{ \alpha f}(d)^{\operatorname{ss}},\varphi\left[\dim\mathfrak{X}(d)-1\right] \right)\cong\\ \operatorname{Sym}\left(\bigoplus_{d\in\mathbb{N}^{I}}H^{\bullet }\left(X(d),\mathcal{BPS}_{d}[-1]\right)\otimes H^{\bullet}(\mathbb{P}^{ \alpha d-1})\right). \tag{6.20}\] The isomorphism (6.19) follows by taking the monodromy invariant parts on the two sides of the isomorphism (6.20). ### Semiorthogonal decompositions and the BBDG decomposition theorem In this section, we prove Theorem 6.2 assuming Theorem 6.3. The proof follows from a comparison of the pieces in the semiorthogonal decomposition from Theorem 2.8 with the summands of the DT/ BPS wall-crossing (6.17). Actually, the proof is based on a comparison of dimensions of certain vector spaces. In the rest of this subsection, we will use certain non-canonical maps, but they suffice for comparing dimensions of vector spaces. We rewrite the Chern character isomorphism (4.17) for \(\mathfrak{X}\) a smooth variety with a regular function \(f\colon\mathfrak{X}\to\mathbb{C}\). Observe that there is a (non-canonical) isomorphism \(H^{i}(\mathfrak{X},\varphi_{f})^{\operatorname{inv}}\cong H^{i}(\mathfrak{X}, \varphi_{f})_{\operatorname{inv}}\) of \(\mathbb{Q}\)-vector spaces. Rewrite (4.17) as the following (non-canonical) isomorphism of \(\mathbb{Q}\)-vector spaces for every \(i\in\mathbb{Z}\): \[\operatorname{ch}\colon K_{i}^{\operatorname{sg}}(\mathfrak{X}_{0})\xrightarrow {\sim}H^{\cdot}(\mathfrak{X},\varphi_{f})^{\operatorname{inv}}. \tag{6.21}\] Recall the notations \(\operatorname{gr}.K_{i}^{\operatorname{top}}:=\bigoplus_{a\in\mathbb{Z}} \operatorname{gr}_{a}K_{i}^{\operatorname{top}}\) and \(H^{\cdot}:=\bigoplus_{a\in\mathbb{Z}}H^{a}.\) Given a vector space \(V\) with a linear map \(T\colon V\to V\), we denote by \(V^{\operatorname{inv}}\) the kernel of \((1-T)\). For a set \(A\) of pairs \((V_{a},T_{a})_{a\in A}\) we denote by \((\otimes_{a\in A}V_{a})^{\operatorname{inv}}\) the kernel of \(1-\otimes_{a\in A}T_{a}\). Note that \(\otimes_{a\in A}V_{a}^{\rm inv}\subset(\otimes_{a\in A}V_{a})^{\rm inv}\). The same notation is also used for symmetric products. We will apply the above notation when \(T_{a}\) are monodromy operators on vanishing cycle cohomologies. Note the following corollary of Theorem 6.3, which follows because the cycle map (6.8) is injective: **Corollary 6.9**.: _Assume Theorem 6.3 holds. 
Then the cycle map (6.9) is injective._ Proof of Theorem 6.2 assuming Theorem 6.3.: Let \(\alpha\) be an even positive integer and fix \(i\in\mathbb{Z}\). By Theorem 2.8, there is a semiorthogonal decomposition: \[\operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname{Tr}W \right)=\left\langle\bigotimes_{j=1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right\rangle,\] where the right hand side is after all partitions \(\sum_{j=1}^{k}d_{j}=d\) and all weights \(v_{j}\in\mathbb{Z}\) with \(0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/\underline{d}_{k}<\alpha\). There is thus an isomorphism of \(\mathbb{N}^{I}\)-graded vector spaces: \[\bigoplus_{d\in\mathbb{N}^{I}}\operatorname{gr}.K_{i}^{\rm top}\left( \operatorname{MF}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname{Tr}W \right)\right)\cong\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/ \underline{d}_{k}<\alpha}\operatorname{gr}.K_{i}^{\rm top}\left(\bigotimes_{j= 1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right).\] Recall the isomorphism of \(\mathbb{Q}\)-vector spaces (6.11). By Corollary 6.9, there is an injective (non-canonical) map: \[\operatorname{gr}.K_{i}^{\rm top}(\mathbb{S}(d)_{v})\hookrightarrow H^{\cdot} (X(d),\mathcal{BPS}_{d,v})^{\rm inv}. \tag{6.22}\] Then we have injective maps \[\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}<\ldots<v_{k}/ \underline{d}_{k}<\alpha}\operatorname{gr}.K_{i}^{\rm top}\left(\bigotimes_{j =1}^{k}\mathbb{S}(d_{j})_{v_{j}}\right)\] \[\hookrightarrow\bigoplus_{0\leqslant v_{1}/\underline{d}_{1}< \ldots<v_{k}/\underline{d}_{k}<\alpha}H^{\cdot}\left(\times_{j=1}^{k}X(d_{j}),\boxtimes_{j=1}^{k}\mathcal{BPS}_{d,v}\right)^{\rm inv}\] \[\hookrightarrow\left(\bigotimes_{\begin{subarray}{c}\mu\in \mathbb{Q}\\ 0\leqslant\mu<\alpha\end{subarray}}\left(\operatorname{Sym}\left(\bigoplus_{ \begin{subarray}{c}d\in\mathbb{N}^{I}\\ \exists v\operatorname{s.t.}\mu=v/\underline{d}\end{subarray}}H^{\cdot}(X(d), \mathcal{BPS}_{d})\right)\right)\right)^{\rm inv}\] \[\xrightarrow{\sim}\left(\operatorname{Sym}\left(\bigoplus_{d\in \mathbb{N}^{I}}H^{\cdot}(X(d),\mathcal{BPS}_{d})^{\oplus\underline{d}}\right) \right)^{\rm inv}\xrightarrow{\sim}\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot} \left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\varphi\right)^{\rm inv}\] where the first inclusion follows from Corollary 6.9 (applied to disjoint union of \(k\)-copies of \(Q\), the \(k=1\) case is (6.22)), the second inclusion follows from the definition of \(\mathcal{BPS}_{d,v}\), Proposition 6.1, and the fact that the Thom-Sebastiani isomorphism is natural with respect to the monodromy actions, the first isomorphism follows from a combinatorial observation, and the second isomorphism follows from Proposition 6.8. We thus obtain an injective map of \(\mathbb{N}^{I}\)-graded vector spaces: \[\bigoplus_{d\in\mathbb{N}^{I}}\operatorname{gr}.K_{i}^{\rm top}\left( \operatorname{MF}^{\rm gr}\left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\operatorname {Tr}W\right)\right)\hookrightarrow\bigoplus_{d\in\mathbb{N}^{I}}H^{\cdot} \left(\mathcal{X}^{\alpha f}(d)^{\rm ss},\varphi\right)^{\rm inv}.\] By the isomorphism (6.21) together with the exact sequence (4.10), the \(\mathbb{N}^{I}\)-graded piece of both sides of the above map has the same (finite) dimension, hence the map above is an isomorphism. The map (6.22) is then also an isomorphism, thus also the maps (6.8) are isomorphisms. It thus remains to prove Theorem 6.3. In Subsection 6.10, we reduce the proof for a general symmetric quiver to that of a quiver with at least two loops at every vertex. 
In Subsection 6.5, we prove a restriction statement of the image under the cycle map of an object in a quasi-BPS category. In Subsection 6.6, we combine the above restriction with the decomposition theorems (6.16) to prove Theorem 6.3. ### Reduction to quivers with enough edges Consider an arbitrary symmetric quiver with potential \((Q,W)\). Let \(Q=(I,E)\). For \(i\in I\), let \(\omega_{i},\omega_{i}^{\prime}\) be two loops at \(i\). Let \(E^{\mathfrak{Z}}:=E\sqcup\{\omega_{i},\omega_{i}^{\prime}\mid i\in I\}\) and consider the quadratic potential \(W^{q}:=\sum_{I\in I}\omega_{i}\omega_{i}^{\prime}\). Define the quiver with potential: \[Q^{\mathfrak{Z}}:=(I,E^{\mathfrak{Z}}),\,W^{\mathfrak{Z}}:=W+W^{q}.\] **Proposition 6.10**.: _Assume Theorem 6.3 holds for \((Q^{\mathfrak{Z}},W^{\mathfrak{Z}})\). Then Theorem 6.3 holds for \((Q,W)\)._ Recall the stack of representations \(\mathscr{X}(d)=R(d)/G(d)\) of \(Q\). For the quiver \(Q^{\mathfrak{Z}}\) and for \(d\in\mathbb{N}^{I}\), we consider the following: the stack of representations with its good moduli space \[\pi_{d}^{\mathfrak{Z}}\colon\mathscr{X}^{\mathfrak{Z}}(d):=\left(R(d)\oplus \mathfrak{g}(d)^{\oplus 2}\right)/G(d)\to X^{\mathfrak{Z}}(d),\] the BPS sheaves \(\mathcal{BPS}^{\mathfrak{Z}}_{d,v}\) as defined in (6.5), the polytope \(\mathbb{W}^{\mathfrak{Z}}(d)\) as in (2.16), the integers \(n_{\lambda}^{\mathfrak{Z}}\) as in (2.20), the quasi-BPS categories \(\mathbb{M}^{\mathfrak{Z}}(d;\delta)\) from (2.19) and \(\mathbb{S}^{\mathfrak{Z}}(d;\delta)\) from (2.21). Let \[\mathscr{S}(d):=(R(d)\oplus\mathfrak{g}(d))/G(d)\] and consider the maps, where \(v,t\) are the natural projections and \(s\) is the natural inclusion: Let \(G:=G(d)\) and \(\mathfrak{g}:=\mathfrak{g}(d)\). We discuss two preliminary propositions. **Proposition 6.11**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(i\in\mathbb{Z}\). There is an isomorphism:_ \[s_{*}v^{*}\colon H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W} \mathrm{IC}_{\mathscr{X}(d)}[-1]) \xrightarrow{\sim}H^{i}(\mathscr{X}^{\mathfrak{Z}}(d),\varphi_{ \mathrm{Tr}\,W^{\mathfrak{Z}}}\mathrm{IC}_{\mathscr{X}^{\mathfrak{Z}}(d)}[-1]),\] \[H^{i}(X(d),\mathcal{BPS}_{d,\delta}) \xrightarrow{\sim}H^{i}(X^{\mathfrak{Z}}(d),\mathcal{BPS}^{ \mathfrak{Z}}_{d,\delta}).\] Proof.: Consider the diagram: (6.23) We first show there is an isomorphism of sheaves on \(D^{b}_{\mathrm{con}}(X^{\mathfrak{Z}}(d))\): \[s_{*}v^{*}\colon u_{*}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X} (d)}[-1]\xrightarrow{\sim}\pi_{d*}^{\mathfrak{Z}}\varphi_{\mathrm{Tr}\,W^{ \mathfrak{Z}}}\mathrm{IC}_{\mathscr{X}^{\mathfrak{Z}}(d)}[-1]. \tag{6.24}\] First, there is an isomorphism of sheaves on \(D^{b}_{\mathrm{con}}(X^{\mathfrak{Z}}(d))\): \[s_{*}v^{*}\colon u_{*}\pi_{d*}\mathrm{IC}_{\mathscr{X}(d)}\xrightarrow{\sim} \pi_{d*}^{\mathfrak{Z}}\varphi_{\mathrm{Tr}\,W^{\mathfrak{Z}}}\mathrm{IC}_{ \mathscr{X}^{\mathfrak{Z}}(d)}[-1]. \tag{6.25}\] The above map is obtained from base-change from the the map for \(\mathscr{X}(d)=\operatorname{pt}\), that is, from the map \[s_{0*}v_{0}^{*}\colon\pi_{d,0*}u_{0*}\mathrm{IC}_{BG}\xrightarrow{\sim}\pi_{d,0* }\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathfrak{g}^{\oplus 2}/G}[-1], \tag{6.26}\] where \(s_{0},v_{0},u_{0}\) are the maps as in (6.23) for \(\mathscr{X}(d)\) replaced by \(\operatorname{Spec}\mathbb{C}=\operatorname{pt}\), and where \(\pi_{d,0}\colon\mathfrak{g}^{\oplus 2}/G\to\mathfrak{g}^{\oplus 2}/\!\!/G\). 
By a direct computation, we have that \[\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathfrak{g}^{\oplus 2}/G}[-1]= \mathrm{IC}_{BG}\] because \(W^{q}\) is a Morse function with critical locus \(BG\), the origin in \(\mathfrak{g}^{\oplus 2}/G\). Further, (6.26) is an isomorphism for global sections by dimensional reduction, see Subsection 5.1, so (6.26) is an isomorphism. Then (6.25) is also an isomorphism. Abuse notation and write \(\operatorname{Tr}W\colon\mathscr{X}^{\mathsf{I}}(d)\xrightarrow{\operatorname {proj}}\mathscr{X}(d)\xrightarrow{\operatorname{Tr}W}\mathbb{C}.\) Note that \(\pi_{d*}\) commutes with \(\varphi_{\operatorname{Tr}W}\) because \(\pi_{d}\) can be approximated with the proper maps \(\pi_{\alpha f,d}\), see Subsection 6.2. Further, \(\varphi_{\operatorname{Tr}W}\) commutes with proper pushforward and smooth pullback. Apply \(\varphi_{\operatorname{Tr}W}\) to both sides of (6.25) and use the Thom-Sebastiani theorem for vanishing cycles to obtain: \[s_{*}v^{*}\colon u_{*}\pi_{d*}\varphi_{\operatorname{Tr}W}\mathrm{IC}_{ \mathscr{X}(d)}[-1] \xrightarrow{\sim}\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W }\left(\varphi_{\operatorname{Tr}W^{q}}\mathrm{IC}_{\mathscr{X}(d)}[-1]\right)\] \[\cong\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W^{2}} \mathrm{IC}_{\mathscr{X}^{\mathsf{I}}(d)}[-1].\] We now explain that the isomorphism (6.24) induces an isomorphism of sheaves in \(D^{b}_{\operatorname{con}}(X^{\mathsf{I}}(d))\): \[u_{*}\mathcal{BPS}_{d,\delta}\xrightarrow{\sim}\mathcal{BPS}_{d,\delta}^{ \mathsf{I}}.\] First, we explain that \(S_{\delta}^{d}(Q)=S_{\delta}^{d}(Q^{\mathsf{I}})\). Let \(\lambda\) be a cocharacter of \(T(d)\). Let \(n_{\lambda}\) and \(n_{\lambda}^{\mathsf{I}}\) be the integers (2.20) for \(Q\) and \(Q^{\mathsf{I}}\), respectively. Let \(\varepsilon_{\lambda,\delta}\) and \(\varepsilon_{\lambda,\delta}^{\mathsf{I}}\) be the integers (6.1) for \(Q\) and \(Q^{\mathsf{I}}\), respectively. Then \[n_{\lambda}^{\mathsf{I}}-n_{\lambda}=2\langle\lambda,\mathfrak{g}(d)^{\lambda >0}\rangle,\] thus \(\varepsilon_{\lambda,\delta}=\varepsilon_{\lambda,\delta}^{\mathsf{I}}\), so indeed \(S_{\delta}^{d}(Q)=S_{\delta}^{d}(Q^{\mathsf{I}})=:S_{\delta}^{d}\). It suffices to check that (6.24) induces isomorphisms: \[u_{*}\mathcal{BPS}_{A}\xrightarrow{\sim}\mathcal{BPS}_{A}^{\mathsf{I}} \tag{6.27}\] for any \(A\in S_{\delta}^{d}\). The isomorphism (6.24) is obtained by applying the functor \(\varphi_{\operatorname{Tr}W}\) to the isomorphism (6.24) for \(W=0\), that is, from the isomorphism (6.25). Therefore it suffices to check (6.27) when \(W=0\), so we assume that \(W=0\) in the rest of the proof. Assume \(A\) has a corresponding partition \((d_{i})_{i=1}^{k}\) of \(d\). Let \(X_{A}\) be the image of \(\oplus\colon\times_{i=1}^{k}X(d_{i})\to X(d)\). There is an isomorphism: \[u_{*}{}^{p}\mathcal{H}^{k}(\pi_{d*}\mathrm{IC}_{\mathscr{X}(d)})\xrightarrow{ \sim}{}^{p}\mathcal{H}^{k}(\pi_{d*}^{\mathsf{I}}\varphi_{\operatorname{Tr}W^{ q}}\mathrm{IC}_{\mathscr{X}^{\mathsf{I}}(d)}[-1]).\] There are either no summands of support \(X_{A}\) on both sides, case in which both \(u_{*}\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{A}^{\mathsf{I}}\) are zero, or there are unique summands of support \(X_{A}\) on both sides, namely \(u_{*}\mathcal{BPS}_{A}\) and \(\mathcal{BPS}_{A}^{\mathsf{I}}\), and thus (6.27) follows. We note the following corollary of Proposition 6.11. **Corollary 6.12**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and let \(i\in\mathbb{Z}\). 
There is an isomorphism:_ \[s_{*}v^{*}\colon H^{i}(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}^{\mathrm{ inv}}\mathrm{IC}_{\mathscr{X}(d)}[-1]) \xrightarrow{\sim}H^{i}(\mathscr{X}^{\mathsf{I}}(d),\varphi_{ \operatorname{Tr}W^{\mathsf{I}}}^{\mathrm{inv}}\mathrm{IC}_{\mathscr{X}^{ \mathsf{I}}(d)}[-1]),\] \[H^{i}(X(d),\mathcal{BPS}_{d,\delta}^{\mathrm{inv}}) \xrightarrow{\sim}H^{i}(X^{\mathsf{I}}(d),\mathcal{BPS}_{d, \delta}^{\mathsf{I},\mathrm{inv}}).\] We also relate quasi-BPS categories under Knorrer periodicity: **Proposition 6.13**.: _There is an equivalence:_ \[s_{*}v^{*}\colon\operatorname{MF}(\mathscr{X}(d),\operatorname{Tr}W) \xrightarrow{\sim}\operatorname{MF}(\mathscr{X}^{\underline{1}}(d), \operatorname{Tr}W^{\underline{1}}),\] \[\mathbb{S}(d;\delta) \xrightarrow{\sim}\mathbb{S}^{\underline{1}}(d;\delta).\] Proof.: (cf. [PTe, Proposition 2.14]) Consider the Koszul complex \[\mathscr{K}:=s_{*}v^{*}\mathscr{O}_{\mathscr{X}}\in\operatorname{MF}(\mathscr{ X}^{\underline{1}}(d),\operatorname{Tr}W^{q}),\] where \(s_{*}v^{*}\colon\operatorname{MF}(\mathscr{X}(d),0)\xrightarrow{\sim} \operatorname{MF}(\mathscr{X}^{\underline{1}}(d),\operatorname{Tr}W^{q})\) is an equivalence by Knorrer periodicity. By the Thom-Sebastiani theorem for matrix factorizations [Pre], there is then an equivalence: \[t^{*}(-)\otimes\mathscr{K}\colon\operatorname{MF}(\mathscr{X}(d),\operatorname{ Tr}W)\xrightarrow{\sim}\operatorname{MF}(\mathscr{X}^{\underline{1}}(d), \operatorname{Tr}W^{\underline{1}}).\] Note that \(t^{*}(-)\otimes\mathscr{K}=s_{*}v^{*}(-)\). It remains to show that \[t^{*}(-)\otimes\mathscr{K}\colon\mathbb{S}(d;\delta)\xrightarrow{\sim} \mathbb{S}^{\underline{1}}(d;\delta).\] It suffices to show that, for \(F\in D^{b}(\mathscr{X}(d))\), we have that \(F\in\mathbb{M}(d;\delta)\) if and only if \(t^{*}(F)\otimes\mathscr{K}\in\mathbb{M}^{\underline{1}}(d;\delta)\). We use Lemma 2.3. Let \(\nu\colon B\mathbb{C}^{*}\to\mathscr{X}(d)\), let \(F\in D^{b}(\mathscr{X}(d))\), and let \(A_{F}\subset\mathbb{Z}\) be the set of weights of \(\nu^{*}(F)\). Note that for any \(\nu^{\underline{1}}\colon B\mathbb{C}^{*}\to\mathscr{X}^{\underline{1}}(d)\) such that \(t\circ\nu^{\underline{1}}=\nu\), we have that the weights of \((\nu^{\underline{1}})^{*}(t^{*}(F)\otimes\mathscr{K})\) are the Minkowski sum \(A_{F}+[-\langle\nu,\mathfrak{g}\rangle,\langle\nu,\mathfrak{g}\rangle]\). We have that \[A_{F}\subset\left[-\frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle, \frac{1}{2}n_{\lambda}+\langle\lambda,\delta_{d}\rangle\right]\] if and only if \[A_{F}+[-\langle\nu,\mathfrak{g}\rangle,\langle\nu,\mathfrak{g}\rangle]\subset \left[-\frac{1}{2}n_{\lambda}^{\underline{1}}+\langle\lambda,\delta_{d}\rangle,\frac{1}{2}n_{\lambda}^{\underline{1}}+\langle\lambda,\delta_{d}\rangle\right].\] The conclusion then follows. Proof of Proposition 6.10.: Let \(i,\ell\in\mathbb{Z}\). 
By Corollary 4.9, there is a commutative diagram, where \(b=\dim\mathscr{X}(d)-i-2\ell=\dim\mathscr{X}^{\underline{1}}(d)-i-2(\ell+\dim \mathfrak{g})\): \[\operatorname{gr}_{\ell}K_{i}^{\operatorname{top}}(\operatorname{MF}( \mathscr{X}(d),\operatorname{Tr}W))\xrightarrow{s_{*}v^{*}}\operatorname{gr}_ {\ell+\dim\mathfrak{g}}K_{i}^{\operatorname{top}}(\operatorname{MF}(\mathscr{X }^{\underline{1}}(d),\operatorname{Tr}W^{\underline{1}}))\] \[\operatorname{\downarrow}\operatorname{c}\] \[H^{b}(\mathscr{X}(d),\varphi_{\operatorname{Tr}W}^{\operatorname{ inv}}\operatorname{IC}_{\mathscr{X}(d)}[-2])\xrightarrow{s_{*}v^{*}}H^{b}(\mathscr{X}^{ \underline{1}}(d),\varphi_{\operatorname{Tr}W^{\underline{1}}}^{\operatorname {inv}}\operatorname{IC}_{\mathscr{X}^{\underline{1}}(d)}[-2]).\] The conclusion follows from Corollary 6.12 and Proposition 6.13. It will be convenient to make the following assumption on a quiver: **Assumption 6.1**.: Assume the quiver \(Q\) is symmetric and has at least two loops at any vertex. We introduce some notation. For any cocharacter \(\lambda\) with associated partition \(\mathbf{d}\), define \(c_{\mathbf{d}}:=c_{\lambda}:=\dim\mathscr{X}(d)-\dim\mathscr{X}(d)^{\lambda \geq 0}\). **Lemma 6.14**.: _Let \(Q=(I,E)\) be a quiver which satisfies Assumption 6.1 and let \(d\in\mathbb{N}^{I}\) be non-zero._ _(a) For any cocharacter \(\lambda\) of \(T(d)\), we have \(c_{\lambda}\geqslant 0\), and the inequality is strict if \(\lambda\) has an associated partition with at least two terms. Moreover, we have_ \[\dim\mathscr{X}(d)^{\lambda\geq 0}-\dim\mathscr{X}(d)^{\lambda}=c_{\lambda}.\] _(b) The map \(\pi_{d}\colon\mathscr{X}(d)\to X(d)\) is generically a \(\mathbb{C}^{*}\)-gerbe, in particular there exists a stable representation of dimension \(d\)._ Proof.: We only discuss the first claim of part (a), the second claim and part (b) are similar. Let \(S(d)\) be the affine space of dimension \(d\) representations of the quiver obtained from \(Q\) by deleting one loop at every vertex in \(I\). Then \[\mathscr{X}(d)=\left(S(d)\oplus\mathfrak{g}(d)\right)/G(d).\] We have that \[\dim\mathscr{X}(d)=\dim S(d)\text{ and }\dim\mathscr{X}(d)^{\lambda\geqslant 0 }=\dim\left(S(d)\right)^{\lambda\geqslant 0},\] so \(c_{\lambda}=\dim\left(S(d)\right)^{\lambda<0}\), and the first claim follows. ### Coproduct-like maps in K-theory In this subsection,we assume that \(Q\) satisfies Assumption 6.1. Consider an antidominant cocharacter \(\lambda\) of \(T(d)\) and let \(a_{\lambda}\colon\mathscr{X}(d)^{\lambda}\to\mathscr{X}(d)\) be the natural morphism inducing pullback maps for any \(i,\ell\in\mathbb{Z}\): \[a_{\lambda}^{*}\colon K_{i}^{\text{top}}\left(\operatorname{MF} \left(\mathscr{X}(d),\operatorname{Tr}W\right)\right)\to K_{i}^{\text{top}} \left(\operatorname{MF}\left(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W \right)\right),\] \[a_{\lambda}^{*}\colon\operatorname{gr}_{\ell}K_{i}^{\text{top} }(\operatorname{MF}\left(\mathscr{X}(d),\operatorname{Tr}W\right))\to \operatorname{gr}_{\ell+2d_{\lambda}}K_{i}^{\text{top}}\left(\operatorname{MF }\left(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W\right)\right),\] where \(d_{\lambda}:=\dim\mathscr{X}(d)^{\lambda}-\dim\mathscr{X}(d)\) is the relative dimension of \(a_{\lambda}\). Consider the quotient \(G(d)^{\prime}:=G(d)^{\lambda}/\text{image}(\lambda)\) and the stack \(\mathscr{X}(d)^{\prime\lambda}:=R(d)^{\lambda}/G(d)^{\prime}\). 
There is an isomorphism: \[K_{i}^{\text{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\lambda}, \operatorname{Tr}W\right)\right)\cong K_{i}^{\text{top}}\left(\operatorname{ MF}\left(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W\right)\right)[q^{ \pm 1}].\] There is an analogous isomorphism for graded K-theory. There are also maps in cohomology: \[a_{\lambda}^{*}\colon H^{{}^{\prime}}\left(\mathscr{X}(d),\varphi_{ \operatorname{Tr}W}\right) \to H^{{}^{\prime}}\left(\mathscr{X}(d)^{\prime\lambda},\varphi_{ \operatorname{Tr}W}\right)[h],\] \[a_{\lambda}^{*}\colon H^{{}^{\prime}}\left(\mathscr{X}(d),\varphi_{ \operatorname{Tr}W}^{\text{inv}}\right) \to H^{{}^{\prime}}\left(\mathscr{X}(d)^{\prime\lambda},\varphi_{ \operatorname{Tr}W}^{\text{inv}}\right)[h].\] Assume the associated partition of \(\lambda\) is \(\mathbf{d}=\left(d_{i}\right)_{i=1}^{k}\). Recall that \(c_{\lambda}:=c_{\mathbf{d}}:=\dim\mathscr{X}(d)-\dim\mathscr{X}(d)^{\lambda \geqslant 0}\). We define the following integers (which we call _widths_ of magic or quasi-BPS categories in this paper): \[c_{\lambda,\delta}:=c_{\lambda}+\varepsilon_{\lambda,\delta},\,c_{\mathbf{d}, \delta}:=c_{\mathbf{d}}+\varepsilon_{\mathbf{d},\delta}. \tag{6.28}\] **Proposition 6.15**.: _Let \(\lambda\) be an antidominant cocharacter of \(G(d)\) and let \(i,\ell\in\mathbb{Z}\). Consider the diagram:_ _Then the image of \(c\,a_{\lambda}^{*}\mathfrak{gr}_{\bullet}K_{\bullet}^{\text{top}}\left( \mathbb{S}(d;\delta)\right)\) lies in the subspace_ \[\bigoplus_{j=0}^{c_{\lambda,\delta}-1}H^{2\dim\mathscr{X}(d)-2\ell-i-2j} \left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\text{inv}}[-2]\right)h^{j} \subset H^{{}^{\prime}}\left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\text{inv }}[-2]\right)[h].\] _Note that \(a_{\lambda}\) only depends on the partition **d**, so we obtain that the image of \(\operatorname{c}a_{\lambda}^{*}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left( \mathbb{S}(d;\delta)\right)\) lies in the subspace_ \[\bigoplus_{j=0}^{c_{d,\delta}-1}H^{2\dim\mathscr{X}(d)-2\ell-i-2j}\left(\mathscr{ X}(d)^{\prime\lambda},\varphi^{\mathrm{inv}}[-2]\right)h^{j}\subset H^{\cdot} \left(\mathscr{X}(d)^{\prime\lambda},\varphi^{\mathrm{inv}}[-2]\right)[h].\] Proof.: Consider a complex \(A\) in \(\mathbb{S}(d;\delta)\). Then \(a_{\lambda}^{*}(A)\) is in the subcategory of \(\operatorname{MF}(\mathscr{X}(d)^{\lambda},\operatorname{Tr}W)\) generated by \(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W)_{v}\) for \[v\in S_{\lambda,\delta}:=\left[-\frac{1}{2}\langle\lambda,\mathbb{L}_{ \mathscr{X}(d)}^{\lambda>0}\rangle+\langle\lambda,\delta\rangle,\frac{1}{2} \langle\lambda,\mathbb{L}_{\mathscr{X}(d)}^{\lambda>0}\rangle+\langle\lambda, \delta\rangle\right]\cap\mathbb{Z}.\] Thus \[a_{\lambda}^{*}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d;\delta)\right)\subset K _{i}^{\mathrm{top}}\left(\operatorname{MF}\left(\mathscr{X}(d)^{\prime\lambda },\operatorname{Tr}W\right)\right)\otimes\mathscr{A}, \tag{6.29}\] where \(\mathscr{A}:=\bigoplus_{j\in S_{\lambda,\delta}}\mathbb{Q}\cdot q^{j}\). 
There are filtrations pulled back from cohomology by the Chern character for both \(K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda}, \operatorname{Tr}W)\right)\) and \(K_{0}^{\mathrm{top}}\left(B\mathbb{C}^{*}\right)\), and there is an isomorphism obtained by the Kunneth formula: \[\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d) ^{\lambda},\operatorname{Tr}W)\right)\cong\bigoplus_{a+b=\ell}\mathrm{gr}_{a} K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{X}(d)^{\prime\lambda}, \operatorname{Tr}W)\right)\otimes\mathrm{gr}_{b}K_{0}^{\mathrm{top}}\left(B \mathbb{C}^{*}\right). \tag{6.30}\] The filtration \(E_{b}K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\) on \(K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\) induces a filtration \[E_{b}\mathscr{A}:=\mathscr{A}\cap E_{b}K_{0}^{\mathrm{top}}(B\mathbb{C}^{*})\] on \(\mathscr{A}\). There are natural inclusions \(\mathrm{gr}_{b}\mathscr{A}\hookrightarrow\mathrm{gr}_{b}K_{0}^{\mathrm{top}}( B\mathbb{C}^{*})\). We obtain a Kunneth formula: \[\mathrm{gr}_{\ell}\left(K_{i}^{\mathrm{top}}\left(\operatorname{MF}(\mathscr{ X}(d)^{\prime\lambda},\operatorname{Tr}W)\right)\otimes\mathscr{A}\right)\cong \bigoplus_{a+b=\ell}\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\operatorname{MF} (\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W)\right)\otimes\mathrm{gr }_{b}\mathscr{A}. \tag{6.31}\] By (6.29) and (6.31), we have \[a_{\lambda}^{*}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}\left(\mathbb{S}(d; \delta)\right)\subset\mathrm{gr}.K_{i}^{\mathrm{top}}\left(\operatorname{MF} \left(\mathscr{X}(d)^{\prime\lambda},\operatorname{Tr}W\right)\right)\otimes \mathrm{gr}.\mathscr{A}.\] It suffices to show that: \[\operatorname{c}\left(\mathrm{gr}.\mathscr{A}\right)\subset\bigoplus_{i=0}^{c_ {\lambda,\delta}-1}\mathbb{Q}\cdot h^{i}. \tag{6.32}\] For any \(1\leqslant i\leqslant k\), let \(F_{i}\) be a stable representation of dimension \(d_{i}\) (which exists by Lemma 6.14, and note that this is the only place where we use that \(Q\) satisfies Assumption 6.1) and let \(F:=\bigoplus_{i=1}^{k}F_{i}\). Let \(V/(\mathbb{C}^{*})^{k}\) be the moduli stack of representations of the Ext quiver of \(F\) and dimension vector \((1,\dots,1)\in\mathbb{N}^{k}\). Note that there is an etale map \[V/(\mathbb{C}^{*})^{k}\to\mathscr{X}(d),\,0\mapsto F.\] We have the equality of sets \[S_{\lambda,\delta}=\Big{[}-\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+ \langle\lambda,\delta\rangle,\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+ \langle\lambda,\delta\rangle\Big{]}\cap\mathbb{Z}.\] We denote the image of \(\lambda\) in \((\mathbb{C}^{*})^{k}\) by \(\mathbb{C}^{*}\). Consider the maps: \[V^{\lambda}\stackrel{{ q^{\prime}}}{{\longrightarrow}}V^{\lambda \geqslant 0}\stackrel{{ p^{\prime}}}{{\longrightarrow}}V,\] Let \(\ell\) be a generic linearization of \(\mathbb{C}^{*}\). By [12, Theorem 2.10], see also [16, Equation (3)], the subcategory of \(D^{b}(V/\mathbb{C}^{*})\) generated by \(\mathcal{O}_{V}(v)\) for weights \(v\in S_{\lambda,\delta}\) is equivalent to \(D^{b}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\) if \(\varepsilon_{\lambda,\delta}=0\), and has a semiorthogonal decomposition with pieces \(D^{b}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\) and \(p^{\prime}_{*}q^{\prime*}D^{b}(V^{\lambda})\) if \(\varepsilon_{\lambda,\delta}=1\). 
Define the map \[\text{s}\colon K_{0}^{\text{top}}(B\mathbb{C}^{*})\cong K_{0}^{\text{top}}(V/ \mathbb{C}^{*})\to K_{0}^{\text{top}}(V^{\ell\text{-ss}}/\mathbb{C}^{*})\oplus p ^{\prime}_{*}q^{\prime*}K_{0}^{\text{top}}(V^{\lambda})^{\oplus\varepsilon_{ \lambda,\delta}}\] as the direct sum of the restriction onto \(V^{\ell\text{-ss}}/\mathbb{C}^{*}\) and the inverse of the inclusion: \[p^{\prime}_{*}q^{\prime*}\colon K_{0}^{\text{top}}\left(D^{b}(V^{\lambda})_{a }\right)^{\oplus\varepsilon_{\lambda,\delta}}\cong K_{0}^{\text{top}}(V^{ \lambda})^{\oplus\varepsilon_{\lambda,\delta}}\to K_{0}^{\text{top}}(V/ \mathbb{C}^{*})\] for a weight \(a=\left\lfloor\frac{1}{2}\langle\lambda,V^{\lambda>0}\rangle+\langle\lambda, \delta\rangle\right\rfloor\in\mathbb{Z}\) of \(\lambda\), constructed by the semiorthogonal decomposition [12]. The following composition is an isomorphism: \[\mathcal{A}\hookrightarrow K_{0}^{\text{top}}(B\mathbb{C}^{*})\cong K_{0}^{ \text{top}}(V/\mathbb{C}^{*})\overset{\text{s}}{\to}K_{0}^{\text{top}}(V^{\ell \text{-ss}}/\mathbb{C}^{*})\oplus p^{\prime}_{*}q^{\prime*}K_{0}^{\text{top}}( V^{\lambda})^{\oplus\varepsilon_{\lambda,\delta}}.\] Note that the Hall product \(p^{\prime}_{*}q^{\prime*}\colon H^{\cdot}(V^{\lambda})\to H^{\cdot}(V/\mathbb{ C}^{*})\) has image \(\mathbb{Q}\cdot h^{c_{\lambda}}\), and thus it has a natural inverse \(H^{\cdot}(V/\mathbb{C}^{*})\to p^{\prime}_{*}q^{\prime*}H^{\cdot}(V^{\lambda})\). Let \(\mathrm{t}\) be the direct sum of this inverse and the restriction map: \[\mathrm{t}\colon H_{\cdot\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Consider the projection map \(t_{\lambda}\colon\mathcal{X}(d)^{\lambda}\to\mathcal{X}(d)^{\prime\lambda}\). The maps introduced fit in the following diagram: Consider the following perverse truncation \[\mathrm{S}\colon t_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[c_{ \lambda}] \cong\mathrm{IC}_{\mathcal{X}(d)^{\prime\lambda}}[c_{\lambda}-1] \otimes\mathbb{Q}[h]\] \[\to{}^{p}{}_{\tau}{}^{\geqslant c_{\lambda}+1}\left(\mathrm{IC}_ {\mathcal{X}(d)^{\prime\lambda}}[c_{\lambda}-1]\otimes\mathbb{Q}[h]\right)\] \[\cong\bigoplus_{j\geqslant 0}\mathrm{IC}_{\mathcal{X}(d)^{\prime \lambda}}[-c_{\lambda}-1-2j]\cong t_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{ \lambda}}[-c_{\lambda}]. \tag{6.35}\] Define the map \(\Delta_{\lambda}\) as the composition: \[\Delta_{\lambda}\colon\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\to\pi_{d*}a_{\lambda *}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[2c_{\lambda}]=i_{\lambda*}\pi_{ \lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}[2c_{\lambda}]\xrightarrow{ \mathrm{S}}i_{\lambda*}\pi_{\lambda*}\mathrm{IC}_{\mathcal{X}(d)^{\lambda}}. \tag{6.36}\] Recall the notations from Subsection 6.2 and the decomposition theorem (6.16). 
Consider the total perverse cohomology \[\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right):=\bigoplus_{i \in\mathbb{Z}}{}^{p}\mathcal{H}^{i}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)} \right)[-i].\] For \(A\in\mathcal{P}\) as in Subsection 6.2, there are then natural maps \[\mathrm{P}_{A}\to\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right) \to\mathrm{P}_{A}.\] **Proposition 6.16**.: _Let \(A,B\in\mathcal{P}\) with corresponding sheaves \(\mathrm{P}_{A}\) and \(\mathrm{P}_{B}\) of different support. Assume that \(p_{B}\leqslant p_{A}\)._ _(a) The map (6.36) induces an isomorphism_ \[\Delta_{\lambda}\colon\mathrm{P}_{A}\xrightarrow{\sim}\mathrm{P}_{A}. \tag{6.37}\] _(b) The map \(\Delta_{\lambda}\colon\mathrm{P}_{B}\to\mathrm{P}_{A}\) is zero._ Proof.: (a) Assume \(\lambda\) has associated partition \((d_{i})_{i=1}^{k}\). Assume further that the set \(\{d_{1},\dots,d_{k}\}=\{e_{1},\dots,e_{s}\}\) has cardinality \(s\) and that each \(e_{i}\) appears \(m_{i}\) times among the \(d_{j}\) for \(1\leqslant j\leqslant k\). Let \(A^{\circ}\in\mathcal{P}\) be the tuplet \((e_{i},m_{i,a})\) with \(m_{i,0}=m_{i}\) and \(m_{i,a}=0\) for \(a\geqslant 1\). For \(d\in\mathbb{N}^{I}\), let \(\hbar_{d}:=c_{1}(\mathcal{O}(\sigma_{d}))\in H^{2}(\mathcal{X}(d))\). By [1, Theorem C], the summand \(\mathrm{P}_{A}\) of \(\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right)\) is obtained from \(\boxtimes_{i=1}^{k}(\mathrm{IC}_{X(d_{i})}[-1])\) by multiplication with the equivariant parameters \(\hbar_{d_{i}}\) for \(1\leqslant i\leqslant k\). The map \(\Delta_{\lambda}\colon\mathrm{P}_{A}\to\mathrm{P}_{A}\) is thus obtained by multiplication by equivariant parameters \(\hbar_{d_{i}}\) for \(1\leqslant i\leqslant k\) from the map \[\Delta_{\lambda}m_{\lambda}\colon i_{\lambda*}\boxtimes_{i=1}^{k}\left( \mathrm{IC}_{X(d_{i})}[-1]\right)\to i_{\lambda*}\boxtimes_{i=1}^{k}\left( \mathrm{IC}_{X(d_{i})}[-1]\right).\] By [1, Theorem C], the image of \(m_{\lambda}\left(i_{\lambda*}\boxtimes_{i=1}^{k}\left(\mathrm{IC}_{X(d_{i})}[ -1]\right)\right)\) in \(\mathcal{H}\left(\pi_{d*}\mathrm{IC}_{\mathcal{X}(d)}\right)\) is \(\mathrm{P}_{A^{\circ}}\). Thus the map (6.37) is obtained by multiplication by equivariant parameters from the map \[\Delta_{\lambda}\colon\mathrm{P}_{A^{\circ}}\to\mathrm{P}_{A^{\circ}}.\] We may thus assume that \(A=A^{\circ}\). The Hall product is induced by a map \[m_{\lambda}\colon i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d) ^{\lambda}}\to\pi_{d\star}\mathrm{IC}_{\mathfrak{X}(d)}.\] The lowest non-zero piece of the perverse filtration on \(i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}\) is given by \[{}^{p}{}_{\tau}{}^{\leqslant k}i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{ \mathfrak{X}(d)^{\lambda}}=i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC} _{X(e_{i})}[-1]\right)^{\boxtimes m_{i}}.\] The (shifted) perverse sheaf \(i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{ \boxtimes m_{i}}\) splits as a direct sum of simple sheaves, and one such sheaf is \(\mathrm{P}_{A}\). There is thus a natural inclusion \(\mathrm{P}_{A}\subset i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{ \mathfrak{X}(d)^{\lambda}}\). 
The map \[\Delta_{\lambda}m_{\lambda}\colon\mathrm{P}_{A}\to i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}} \tag{6.38}\] has image in the lowest non-zero perverse truncation of \(i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}\), and thus (6.38) induces a map: \[\Delta_{\lambda}m_{\lambda}\colon\mathrm{P}_{A}\to{}^{p}\tau^{\leqslant s}i_{\lambda\star}\pi_{\lambda\star}\mathrm{IC}_{\mathfrak{X}(d)^{\lambda}}=\boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{\boxtimes m_{i}}. \tag{6.39}\] The (shifted) perverse sheaf \(i_{\lambda\star}\boxtimes_{i=1}^{s}\left(\mathrm{IC}_{X(e_{i})}[-1]\right)^{\boxtimes m_{i}}\) has only one summand isomorphic to \(\mathrm{P}_{A}\), which is a simple (shifted) perverse sheaf. Thus the map (6.39) restricts to a map \[\mathrm{P}_{A}\to\mathrm{P}_{A}. \tag{6.40}\] All such maps are given by multiplication by scalars. It is thus an isomorphism if it is not the zero map. It suffices to show that the maps (6.38) or (6.39) are not zero. We will show this after passing to a convenient open set of \(X(d)^{\lambda}\). For any non-zero \(e\in\mathbb{N}^{I}\), by the same argument used to prove (6.14), there exists a stable point in \(R(e)\); equivalently, the map \(\pi_{e}\colon\mathfrak{X}(e)\to X(e)\) is generically a \(\mathbb{C}^{*}\)-gerbe. For \(1\leqslant i\leqslant k\), let \(R_{i}\) be a simple representation of \(Q\) of dimension \(d_{i}\) such that \(R_{i}\) and \(R_{j}\) are not isomorphic for \(1\leqslant i<j\leqslant k\). Let \(R:=\bigoplus_{i=1}^{k}R_{i}\). Note that the stabilizer of \(R\) is \(T=(\mathbb{C}^{*})^{k}\). By the etale slice theorem, there is an analytic smooth open substack \(R\in\mathcal{U}/T\subset\mathfrak{X}(d)\) such that \[\mathcal{U}/\!/T\to X(d)\text{ and }\mathcal{U}^{\lambda}\to X(d)^{\lambda}\] are analytic neighborhoods of \(\pi_{d}(R)\) and \(\times_{i=1}^{k}\pi_{d_{i}}(R_{i})=\pi_{\lambda}(R)\), respectively. After possibly shrinking \(\mathcal{U}\), we may assume that \(\mathcal{U}\) and \(\mathcal{U}^{\lambda}\) are contractible. The maps are, analytically locally over \(\pi_{d}(R)\in X(d)\), isomorphic to the following: (6.41) Note that the maps \(p_{\lambda}\) and \(a_{\lambda}\) in (6.41) are closed immersions. To show that the map (6.40) is non-zero, it suffices to check that the map \[\Delta_{\lambda}m_{\lambda}|_{\mathcal{U}^{\lambda}}\colon\mathrm{P}_{A}|_{\mathcal{U}^{\lambda}}\to\mathrm{P}_{A}|_{\mathcal{U}^{\lambda}} \tag{6.42}\] is non-zero. It suffices to check that the map is non-zero after passing to global sections. We drop the restriction to \(\mathcal{U}^{\lambda}\) from the notation from now on. The element \(1\in H^{0}(\mathcal{U}^{\lambda}/T)\) is in \(P^{\leqslant s}H\left(\mathcal{U}^{\lambda}/T\right)\). We check by a direct computation that \[\Delta_{\lambda}m_{\lambda}(1)=1\in H^{0}(\mathcal{U}^{\lambda}/T). \tag{6.43}\] Note that the computation (6.43) shows that the map (6.42) is non-zero, and thus the conclusion follows. It suffices to check the computation in (6.43) for \(\mathcal{U}^{\lambda}/\mathbb{C}^{*}\), where by \(\mathbb{C}^{*}\) we denote the image of \(\lambda\), because \(H^{0}(\mathcal{U}^{\lambda}/\mathbb{C}^{*})\cong H^{0}(\mathcal{U}^{T}/T)\cong\mathbb{Q}\). Observe that \(H^{\cdot}(\mathcal{U}^{\lambda}/\mathbb{C}^{*})\cong\mathbb{Q}[h]\) and that \[m_{\lambda}(1)=p_{\lambda*}q_{\lambda}^{*}(1)=h^{c_{\lambda}}\] because \(p_{\lambda}\) has relative dimension \(-c_{\lambda}\). 
Note that \(\Delta_{\lambda}(h^{c_{\lambda}})=1\) from the construction of \(\Delta_{\lambda}\), and thus the conclusion follows. (b) If \(p_{B}<p_{A}\), the map \(\Delta_{\lambda}\colon\mathrm{P}_{B}\to\mathrm{P}_{A}\) is zero by considering the perverse degree. If \(p_{B}=p_{A}\), then the map is zero because, after a shift, it is a map of simple perverse sheaves with different support. We next prove the analogue of Proposition 6.16 for a non-zero potential. Let \(W\) be an arbitrary potential of \(Q\). Recall the sheaves \(\mathrm{Q}_{A}\) from Subsection 6.2. Let \(\mathcal{H}(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1])\) be the total perverse cohomology of \(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1]\). There are natural maps: \[\mathrm{Q}_{A}\to\mathcal{H}(\pi_{*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{ \mathcal{X}(d)}[-1])\to\mathrm{Q}_{A}.\] Apply the vanishing cycle functor to the maps (6.36) to obtain: \[\Delta_{\lambda}\colon\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC }_{\mathcal{X}(d)}[-1] \to i_{\lambda*}\pi_{\lambda*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{ \mathcal{X}(d)^{\lambda}}[-1]\] \[=i_{\lambda*}\boxtimes_{i=1}^{k}\left(\pi_{d*}\varphi_{\mathrm{Tr }\,W}\mathrm{IC}_{\mathcal{X}(d_{i})}[-1]\right). \tag{6.44}\] Let \(\mathrm{Q}_{A}^{\mathrm{inv}}\) be defined by the exact triangle \[\mathrm{Q}_{A}^{\mathrm{inv}}[-1]\to\mathrm{Q}_{A}\xrightarrow{1-\mathrm{T}} \mathrm{Q}_{A}\to\mathrm{Q}_{A}^{\mathrm{inv}}.\] **Proposition 6.17**.: _Let \(A,B\in\mathcal{P}\) with corresponding sheaves \(\mathrm{Q}_{A}\) and \(\mathrm{Q}_{B}\) of different support. Assume that \(p_{B}\leqslant p_{A}\)._ _(a) The map (6.44) induces isomorphisms_ \[\Delta_{\lambda}\colon\mathrm{Q}_{A}\xrightarrow{\sim}\mathrm{Q}_{A},\,\Delta _{\lambda}\colon\mathrm{Q}_{A}^{\mathrm{inv}}\xrightarrow{\sim}\mathrm{Q}_{A} ^{\mathrm{inv}}. \tag{6.45}\] _(b) The maps \(\Delta_{\lambda}\colon\mathrm{Q}_{B}\to\mathrm{Q}_{A}\) and \(\Delta_{\lambda}\colon\mathrm{Q}_{B}^{\mathrm{inv}}\to\mathrm{Q}_{A}^{\mathrm{ inv}}\) are zero._ Proof.: The maps above are induced from the map (6.36), thus the conclusion follows from Proposition 6.16. We now record corollaries to be used in the proof of Theorem 6.3. Fix a splitting \[H^{\bullet}\left(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathcal{X}(d)}[-1]\right)=\bigoplus_{A\in\mathcal{P}}H^{\bullet}( X(d),\mathrm{Q}_{A}^{\mathrm{inv}}). \tag{6.46}\] Let \(x\in H^{\bullet}\left(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathcal{X}(d)}[-1]\right)\). Use the decomposition above to write \[x=\sum_{A\in\mathcal{P}}x_{A} \tag{6.47}\] with \(x_{A}\in H^{\bullet}(X(d),\mathrm{Q}_{A}^{\mathrm{inv}})\). **Corollary 6.18**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Let \(\lambda\) be an antidominant cocharacter of \(T(d)\) with associated partition **d** such that \(\varepsilon_{\lambda,\delta}=\varepsilon_{\textbf{d},\delta}\). 
Let \(x\in H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)}[-1])\) and assume that_ \[a_{\lambda}^{*}(x)\in\bigoplus_{j=0}^{c_{\textbf{d},\delta}-1}H^{i-2j}( \mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{ IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])h^{j}.\] _(a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(\Delta_{\lambda}(x)=0\)._ _(b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(\Delta_{\lambda}(x)\) is in the image of_ \[H^{i}(\mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}} \mathrm{IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])\hookrightarrow H^{i}( \mathscr{X}(d)^{\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)^{\lambda}}[-1]).\] Proof.: Recall the definition of S from (6.35). (a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(\mathrm{Sp}_{\lambda}^{*}(x)=0\), so \(\Delta_{\lambda}(x)=0\). (b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(\mathrm{Sp}_{\lambda}^{*}(x)\in H^{\cdot}\left(X(d)^{\lambda},\varphi_{ \mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{\mathscr{X}(d)^{\prime\lambda}}[-c_{ \lambda}-2]\right)\). The conclusion follows from the definition of \(\Delta_{\lambda}\) in (6.44). **Corollary 6.19**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\). Let \(\lambda\) be an antidominant cocharacter of \(T(d)\) with associated partition **d** such that \(\varepsilon_{\lambda,\delta}=\varepsilon_{\textbf{d},\delta}\). Let \(x\in H^{i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{IC}_{ \mathscr{X}(d)}[-1])\) and assume that_ \[a_{\lambda}^{*}(x)\in\bigoplus_{j=0}^{c_{\textbf{d},\delta}-1}H^{i-2j}( \mathscr{X}(d)^{\prime\lambda},\varphi_{\mathrm{Tr}\,W}^{\mathrm{inv}}\mathrm{ IC}_{\mathscr{X}(d)^{\prime\lambda}}[-2])h^{j}.\] _Recall the decomposition (6.47)._ _(a) If \(\varepsilon_{\textbf{d},\delta}=1\), then \(x_{A}=0\) for all tuples \(A\in\mathscr{P}\) with corresponding cocharacter \(\lambda\)._ _(b) If \(\varepsilon_{\textbf{d},\delta}=0\), then \(x_{A}=0\) for all tuples \(A\in\mathscr{P}\) with corresponding cocharacter \(\lambda\) and different from \(A^{\circ}\)._ Proof.: Both claims follow from Proposition 6.17 and Corollary 6.18. Proof of Theorem 6.3.: Recall the cycle map in (6.7) \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}(\mathbb{S}(d;\delta)) \to H^{\dim\mathscr{X}(d)-2a-i}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}^{ \mathrm{inv}}\mathrm{IC}_{\mathscr{X}(d)}[-2]).\] By Proposition 6.10, we may assume that \(Q\) has at least two loops at every vertex. Let \(y\) be in the image of the above map. By Proposition 6.15 and Corollary 6.19, we have that \(y_{A}=0\) unless \(A=A^{\circ}\) for some partition \(\textbf{d}=(d_{i},m_{i})_{i=1}^{k}\) of \(d\) with \(m_{i}\geqslant 1\) and \(d_{i}\) pairwise distinct with \(\varepsilon_{\textbf{d},\delta}=0\). The statement thus follows. ## 7. Topological K-theory of quasi-BPS categories for preprojective algebras In this section, we use the results of Sections 5 and 6 to compute the topological K-theory of preprojective algebras of quivers satisfying Assumption 2.1 in terms of BPS cohomology, see Theorem 7.6. ### The preprojective BPS sheaf Let \(Q^{\circ}=(I,E^{\circ})\) be a quiver. 
Recall the moduli stack of dimension \(d\) representations of the tripled quiver \(Q\) of \(Q^{\circ}\) and its good moduli space: \[\pi_{X,d}:=\pi_{d}\colon\mathscr{X}(d)\to X(d).\] Recall also the moduli stack of dimension \(d\) representations of the preprojective algebra of \(Q^{\circ}\) and its good moduli space: \[\pi_{P,d}\colon\mathcal{P}(d)^{\mathrm{cl}}\to P(d).\] Consider the moduli stack of dimension \(d\) representations of the double quiver of \(Q^{\circ}\) and its good moduli space: \[\pi_{Y,d}\colon\mathcal{Y}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee})/G(d)\to Y(d).\] Consider the diagram: Here \(\eta\colon\mathcal{X}(d)\to\mathcal{Y}(d)\) is the projection which forgets the \(\mathfrak{g}(d)\)-component and the bottom horizontal arrows are induced maps on good moduli spaces. Let \(\mathbb{C}\hookrightarrow\mathfrak{g}(d)\) be the diagonal embedding, which induces the closed immersion \[\gamma\colon X^{\prime}(d):=(R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\oplus\mathbb{C})/\!\!/G(d)\hookrightarrow X(d).\] Let \(\eta^{\prime}:=\eta|_{X^{\prime}(d)}\). By [Dava, Theorem/Definition 4.1], there exists a _preprojective BPS sheaf_ \[\mathcal{BPS}^{p}_{d}\in\operatorname{Perv}(P(d))\] such that the BPS sheaf of the tripled quiver with potential \((Q,W)\) associated to \(Q^{\circ}\) is \[\mathcal{BPS}_{d}=\gamma_{*}\eta^{\prime*}j_{*}(\mathcal{BPS}^{p}_{d})[1]\in\operatorname{Perv}(X(d)). \tag{7.1}\] For a partition \(A=(d_{i})_{i=1}^{k}\) of \(d\), define \(\mathcal{BPS}^{p}_{A}\in\operatorname{Perv}(P(d))\) as in (6.4). For \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), define the following perverse sheaves on \(P(d)\): \[\mathcal{BPS}^{p}_{\delta}:=\bigoplus_{A\in S^{d}_{\delta}}\mathcal{BPS}^{p}_{A},\,\mathcal{BPS}^{p}_{d,v}:=\mathcal{BPS}^{p}_{v\tau_{d}}, \tag{7.2}\] where the set of partitions \(S^{d}_{\delta}\) is defined from the tripled quiver \(Q\), see Subsection 6.1.2. Then \(\mathcal{BPS}^{p}_{d,v}\) is a direct summand of \(\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\), see [Dava, Theorem A], and so \(H^{-a}(P(d),\mathcal{BPS}^{p}_{d,v})\) is a direct summand of \[H^{\mathrm{BM}}_{a}(\mathcal{P}(d)^{\mathrm{cl}})=H^{-a}\big{(}P(d),\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\big{)}.\] Recall the maps \[\mathcal{P}(d)\xleftarrow{\eta^{\prime}}\eta^{-1}(\mathcal{P}(d))\xrightarrow{j^{\prime}}\mathcal{X}(d).\] The dimension of \(\mathcal{P}(d)\) as a quasi-smooth stack is \(\dim\mathcal{P}(d):=\dim\mathcal{Y}(d)-\dim\mathfrak{g}(d)\). Recall the dimensional reduction isomorphism from Subsection 5.1: \[j^{\prime}_{*}\eta^{\prime*}\colon H^{\mathrm{BM}}_{a}(\mathcal{P}(d)^{\mathrm{cl}})\cong H^{\mathrm{BM}}_{a}(\mathcal{P}(d))\xrightarrow{\sim}H^{2\dim\mathcal{Y}(d)-a}(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}\mathbb{Q}_{\mathcal{X}(d)}[-1])=H^{\dim\mathcal{Y}(d)-a}(\mathcal{X}(d),\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathcal{X}(d)}[-1]).\] By the construction of the PBW isomorphism for preprojective Hall algebras [Dava, Equation (31)], the above isomorphism preserves the BPS cohomologies: \[j^{\prime}_{*}\eta^{\prime*}\colon H^{-a}(P(d),\mathcal{BPS}^{p}_{d,v})\xrightarrow{\sim}H^{\dim\mathcal{Y}(d)-a}(X(d),\mathcal{BPS}_{d,v}). \tag{7.3}\] ### Computations Recall the categories \[\mathbb{T}(d)_{v}\subset D^{b}(\mathcal{P}(d))\text{ and }\mathbb{T}(d)_{v}^{\operatorname{red}}\subset D^{b}(\mathcal{P}(d)^{\operatorname{red}})\] from Subsection 2.13. 
Consider the natural closed immersion \(l^{\prime}\colon\mathcal{P}(d)^{\operatorname{red}}\hookrightarrow\mathcal{P} (d)\). The closed immersion \(l\colon\mathcal{P}(d)^{\operatorname{cl}}\hookrightarrow\mathcal{P}(d)\) factors through \(\mathcal{P}(d)^{\operatorname{cl}}\hookrightarrow\mathcal{P}(d)^{ \operatorname{red}}\stackrel{{ l^{\prime}}}{{\hookrightarrow}} \mathcal{P}(d)\). **Proposition 7.1**.: _Let \(Q\) be a symmetric quiver. Then there is a weak equivalence of spectra \(l^{\prime}_{*}\colon K^{\operatorname{top}}(\mathbb{T}(d)_{v}^{ \operatorname{red}})\to K^{\operatorname{top}}(\mathbb{T}(d)_{v})\)._ Proof.: There is a weak equivalence of spectra \(l^{\prime}_{*}\colon G^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{ red}})\xrightarrow{\sim}G^{\operatorname{top}}(\mathcal{P}(d))\). The claim then follows from Theorem 2.9. For \(i\in\mathbb{Z}\), consider the Chern character map (3.8) for the quasi-smooth stack \(\mathcal{P}(d)\): \[\operatorname{ch}\colon G_{i}^{\operatorname{top}}(\mathcal{P}(d))\to\widetilde {H}_{i}^{\operatorname{BM}}(\mathcal{P}(d)). \tag{7.4}\] It induces a Chern character map \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{T}(d)_{v}) \hookrightarrow G_{i}^{\operatorname{top}}(\mathcal{P}(d))\to\widetilde{H}_{i} ^{\operatorname{BM}}(\mathcal{P}(d)). \tag{7.5}\] **Corollary 7.2**.: _The maps (7.4) and (7.5) are injective._ Proof.: It suffices to check that (7.4) is injective. This follows from Proposition 4.10, Theorem 2.8 (applied to a fixed \(\mu\) and all \(\alpha\in\mathbb{Z}_{\geqslant 1}\)), and the Koszul equivalence (2.14). **Corollary 7.3**.: _We have that \(G_{1}^{\operatorname{top}}(\mathcal{P}(d))=0\). Thus also \(K_{1}^{\operatorname{top}}(\mathbb{T}(d)_{v})=0\)._ Proof.: We have that \(H_{\operatorname{odd}}^{\operatorname{BM}}(\mathcal{P}(d)^{\operatorname{cl}} )=0\) by [11]. The conclusion follows by Proposition 7.2. Recall the filtration \(E_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) of \(G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) from Subsection 3.3. Define the filtration: \[E_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}):=E_{\ell}G_{0}^{ \operatorname{top}}(\mathcal{P}(d))\cap K_{0}^{\operatorname{top}}(\mathbb{T }(d)_{v})\subset K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}).\] We denote by \(\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v})\) the associated graded piece, and note that it is a direct summand of \(\operatorname{gr}_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d))\) by Theorem 2.9. Define similarly a filtration \(E_{\ell}G_{0}^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{red}}) \subset G_{0}^{\operatorname{top}}(\mathcal{P}(d)^{\operatorname{red}})\) and a filtration \(E_{\ell}K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}^{\operatorname{red}}) \subset K_{0}^{\operatorname{top}}(\mathbb{T}(d)_{v}^{\operatorname{red}})\). **Corollary 7.4**.: _The forget-the-potential functor \(\Theta\) induces an isomorphism:_ \[\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\operatorname{MF}^{ \operatorname{gr}}(\mathcal{X}(d),\operatorname{Tr}W)\right)\xrightarrow{\sim} \operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\operatorname{MF}( \mathcal{X}(d),\operatorname{Tr}W)\right). 
\tag{7.6}\] _There are also isomorphisms:_ \[\operatorname{gr}_{\ell}K_{0}^{\operatorname{top}}\left(\mathbb{T}(d)_{v} \right)\xrightarrow{\sim}\operatorname{gr}_{\ell+\dim\mathfrak{g}(d)}K_{0}^{ \operatorname{top}}\left(\mathbb{S}^{\operatorname{gr}}(d)_{v}\right) \xrightarrow{\sim}\operatorname{gr}_{\ell+\dim\mathfrak{g}(d)}K_{0}^{ \operatorname{top}}\left(\mathbb{S}(d)_{v}\right).\] Proof.: The isomorphism (7.6) follows from Corollaries 5.2 and 7.3. The other isomorphism follow from the Koszul equivalence, see Proposition 5.2 for an explanation of the degree of the graded pieces. **Corollary 7.5**.: _There is a commutative diagram, where the vertical maps are cycle maps and the left horizontal maps are the dimensional reduction maps \(i^{\prime}_{*}p^{\prime*}\)._ \[\begin{CD}\operatorname{gr}.G_{0}^{\operatorname{top}}(\mathcal{P}(d))@>{ \sim}>{}>\operatorname{gr}.K_{0}^{\operatorname{top}}(\operatorname{MF}^{ \operatorname{gr}}(\mathcal{X}(d),\operatorname{Tr}W))@>{\sim}>{}> \operatorname{gr}.K_{0}^{\operatorname{top}}(\operatorname{MF}(\mathcal{X}(d), \operatorname{Tr}W))\\ @V{\downarrow}V{}V@V{\downarrow}V{}V\\ \widetilde{H}_{0}^{\operatorname{BM}}(\mathcal{P}(d))@>{\sim}>{}>\widetilde{H}^{0}( \mathcal{X}(d),\varphi_{\operatorname{Tr}W}[-1])@>{\sim}>{}>\widetilde{H}^{0}( \mathcal{X}(d),\varphi_{\operatorname{Tr}W}^{\operatorname{inv}}[-2]).\end{CD}\] _Here we have suppressed the cohomological degrees to make the diagram simpler._ Proof.: The claim follows from Proposition 5.2 and Corollary 7.4. **Theorem 7.6**.: _For an arbitrary quiver \(Q^{\circ}\), the cycle map (7.4) for \(\mathcal{P}(d)\) induces a cycle map_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}) \cong\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}^{\mathrm{red}}) \to H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p}). \tag{7.7}\] _If \(Q^{\circ}\) satisfies Assumption 2.2, then (7.7) is an isomorphism._ Proof.: The isomorphism \(\mathrm{gr}_{\ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})\cong\mathrm{gr}_{ \ell}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v}^{\mathrm{red}})\) follows from Proposition 7.1. Consider the diagram, whose lower square commutes from Corollary 7.5 and the top horizontal map is an isomorphism by Corollary 7.4: By Theorem 6.3, the map \(\beta\) has image in \[H^{\dim\mathcal{P}(d)-2\ell}(\mathscr{X}(d),\mathcal{BPS}_{d,v})\subset H^{2 \dim\mathcal{Y}(d)-2\ell}(\mathscr{X}(d),\varphi_{\mathrm{Tr}\,W}[-1]).\] If \(Q^{\circ}\) satisfies Assumption 2.2, it is an isomorphism onto \(H^{\dim\mathcal{P}(d)-2\ell}(\mathscr{X}(d),\mathcal{BPS}_{d,v})\) by Theorem 6.2. By (7.3), the map \(\alpha\) has image in \(H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p})\), and, if \(Q^{\circ}\) satisfies Assumption 2.2, it is an isomorphism onto \(H^{-2\ell}(P(d),\mathcal{BPS}_{d,v}^{p})\). **Remark 7.7**.: There are two perverse filtrations on \(H^{\mathrm{BM}}(\mathcal{P}(d))\) for any quiver \(Q^{\circ}\). One of them is induced from the tripled quiver with potential \((Q,W)\) and studied in [13]; the first non-zero piece in the perverse filtration is \({}^{p}\tau^{\leqslant 1}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}= \mathcal{BPS}_{d}\). Another filtration is induced from the map \(\pi_{P,d}\) and studied in [11], where it is called the "less perverse filtration"; the first non-zero piece in the perverse filtration is \({}^{p}\tau^{\leqslant 0}\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\). 
Note that, for any \(v\in\mathbb{Z}\), \({}^{p}\tau^{\leqslant 1}\pi_{d*}\varphi_{\mathrm{Tr}\,W}\mathrm{IC}_{\mathscr{X}(d)}\) is a direct summand of \(\mathcal{BPS}_{d,v}\), which itself is a direct summand of \({}^{p}\tau^{\leqslant 0}\pi_{P,d*}\omega_{\mathcal{P}(d)^{\mathrm{cl}}}\). Thus the topological K-theory of quasi-BPS categories (for \(Q^{\circ}\) satisfying Assumption 2.2, and for any \(v\in\mathbb{Z}\)) lies between the first non-zero pieces of these two perverse filtrations. **Remark 7.8**.: Davison-Hennecart-Schlegel Mejia [15, 16] computed the preprojective BPS sheaves in terms of the intersection complexes of the varieties \(P(d)\). We note the following numerical corollary of Theorem 7.6. **Corollary 7.9**.: _Let \(Q^{\circ}\) be a quiver satisfying Assumption 2.2 and let \((d,v)\in\mathbb{N}^{I}\times\mathbb{Z}\). Then_ \[\dim_{\mathbb{Q}}K_{0}^{\mathrm{top}}(\mathbb{T}(d)_{v})=\dim_{\mathbb{Q}}H^{\cdot}(P(d),\mathcal{BPS}_{d,v}).\] Proof.: The map (7.5) is injective by Corollary 7.2. The conclusion then follows from Theorem 7.6. ## 8. Examples In this section, we discuss some explicit examples of computations of the topological K-theory of quasi-BPS categories. All vector spaces considered in this section are \(\mathbb{Q}\)-vector spaces. We first note a preliminary proposition. **Proposition 8.1**.: _Let \(Q=(I,E)\) be a symmetric quiver, let \(d\in\mathbb{N}^{I}\), and let \(v\in\mathbb{Z}\). Then_ \[\dim K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})=\#\left(M(d)^{+}\cap(\mathbf{W}(d)+v\tau_{d}-\rho)\right).\] Proof.: There is a natural isomorphism \[K_{0}(\mathcal{X}(d))\xrightarrow{\sim}K_{0}^{\mathrm{top}}(\mathcal{X}(d))\cong K_{0}(BG(d)).\] The category \(\mathbb{M}(d)_{v}\) is admissible in \(D^{b}(\mathcal{X}(d))\), so the above isomorphism restricts to the isomorphism \[K_{0}(\mathbb{M}(d)_{v})\xrightarrow{\sim}K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v}).\] The generators of \(K_{0}(\mathcal{X}(d))\) are the classes of the vector bundles \(\mathcal{O}_{\mathcal{X}(d)}\otimes\Gamma_{G(d)}(\chi)\), where \(\chi\) is a dominant weight of \(G(d)\) and \(\Gamma_{G(d)}(\chi)\) is the irreducible representation of \(G(d)\) of highest weight \(\chi\). The computation \[\dim K_{0}(\mathbb{M}(d)_{v})=\#\left(M(d)^{+}\cap(\mathbf{W}(d)+v\tau_{d}-\rho)\right)\] follows then from the definition of \(\mathbb{M}(d)_{v}\). **Remark 8.2**.: In view of Proposition 8.1 and Theorem 6.6, the total intersection cohomology of the spaces \(X(d)\) can be determined by counting lattice points inside the polytope \((\mathbf{W}(d)+v\tau_{d}-\rho)\). ### Toric examples Let \(g\in\mathbb{N}\). Consider the quiver \(Q=(I,E)\), where \(I=\{0,1\}\) and \(E\) has one loop at \(0\), one loop at \(1\), \(2g+1\) edges \(\{e_{1},\ldots,e_{2g+1}\}\) from \(0\) to \(1\) and \(2g+1\) edges \(\{\overline{e}_{1},\ldots,\overline{e}_{2g+1}\}\) from \(1\) to \(0\). The following is a figure for \(g=1\). Fix \(d=(1,1)\in\mathbb{N}^{I}\). Then \[\mathcal{X}(d)=\left(\mathbb{C}^{2}\oplus\mathbb{C}^{2(2g+1)}\right)/(\mathbb{C}^{*})^{2}.\] The diagonal \(\mathbb{C}^{*}\hookrightarrow(\mathbb{C}^{*})^{2}\) acts trivially on \(\mathbb{C}^{2}\oplus\mathbb{C}^{2(2g+1)}\). The factor \(\mathbb{C}^{*}\) corresponding to the vertex \(1\) acts with weight \(0\) on \(\mathbb{C}^{2}\), weight \(1\) on \(\mathbb{C}^{2g+1}\), and weight \(-1\) on \(\mathbb{C}^{2g+1}\). 
We consider the stack, which is the \(\mathbb{C}^{*}\)-rigidification of \(\mathcal{X}(d)\): \[\mathcal{X}^{\prime}(d)=\left(\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1} \oplus\mathbb{C}^{2g+1}_{-1}\right)/\mathbb{C}^{*}.\] The GIT quotient for any non-trivial stability condition provides a small resolution of singularities: \[\tau\colon Y:=\left(\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1}\oplus \mathbb{C}^{2g+1}_{-1}\right)^{\mathrm{ss}}/\mathbb{C}^{*}=\mathbb{C}^{2} \times\operatorname{Tot}_{\mathbb{P}^{2g}}\left(\mathcal{O}(-1)^{2g+1}\right) \to X(d).\] Here, _small_ means that \(\dim Y\times_{X(d)}Y=\dim X(d)\) and \(Y\times_{X(d)}Y\) has a unique irreducible component of maximal dimension. Then, by the BBDG decomposition theorem, we have that \(\tau_{*}\mathrm{IC}_{Y}=\mathrm{IC}_{X(d)}\). We decorate the BPS sheaves with a superscript zero to indicate that the potential is zero. We obtain that: \[\mathcal{BPS}^{0}_{d}=\tau_{*}\mathrm{IC}_{Y}=\mathrm{IC}_{X(d)}\text{ and }\mathcal{BPS}^{0}_{(1,0)}=\mathcal{BPS}^{0}_{(0,1)}=\mathrm{IC}_{\mathbb{C}}. \tag{8.1}\] **Proposition 8.3**.: _If \(v\) is odd, then \(\mathbb{M}(d)_{v}\cong D^{b}(Y)\) and \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\)._ _If \(v\) is even, then \(\mathbb{M}(d)_{v}\) has a semiorthogonal decomposition with summands equivalent to \(D^{b}(Y)\) and \(D^{b}(\mathbb{C}^{2})\), and \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\oplus\mathcal{BPS}^{0}_{(1,0)} \boxtimes\mathcal{BPS}^{0}_{(0,1)}\)._ Proof.: The category \(\mathbb{M}(d)_{v}\) is the subcategory of \(D^{b}(\mathscr{X}(d))\) generated by the line bundles \(\mathcal{O}_{\mathscr{X}(d)}(w\beta_{2}+(v-w)\beta_{1})\) for \(w\in\mathbb{Z}\) such that \[\frac{v}{2}\leqslant w\leqslant 2g+1+\frac{v}{2}. \tag{8.2}\] One can show that \(\mathbb{M}(d)_{v}\) is equivalent to the "window subcategory" (in the sense of [11]) of \(D^{b}(\mathscr{X}^{\prime}(d))\) containing objects \(F\) such that the weights of \(\mathbb{C}^{*}\) on \(F|_{0}\) are in \(\left[\frac{v}{2},\frac{v}{2}+2g+1\right]\cap\mathbb{Z}\). If \(v\) is odd, then \(\mathbb{M}(d)_{v}\cong D^{b}(Y)\) by [11, Theorem 2.10]. The boundary points \(\frac{v}{2}\) and \(\frac{v}{2}+2g+1\) are not integers, so \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\). If \(v\) is even, then \(\mathcal{BPS}^{0}_{d,v}=\mathcal{BPS}^{0}_{d}\oplus\mathcal{BPS}^{0}_{(1,0)} \boxtimes\mathcal{BPS}^{0}_{(0,1)}\). The fixed locus of the unique Kempf-Ness locus in the construction of \(Y\) is \((\mathbb{C}^{2}_{0}\oplus\mathbb{C}^{2g+1}_{1}\oplus\mathbb{C}^{2g+1}_{-1})^{ \mathbb{C}^{*}}=\mathbb{C}^{2}\). As a corollary of [11, Theorem 2.10], see the remark in [12, Equation (3)], the category \(\mathbb{M}(d)_{v}\) has a semiorthogonal decomposition with summands \(D^{b}(Y)\) and \(D^{b}(\mathbb{C}^{2})\). As a corollary of the above proposition and of the computations (8.1), we obtain the following: \[\dim K^{\mathrm{top}}_{0}(\mathbb{M}(d)_{v})\overset{(*)}{=}\dim H^{\cdot}(X( d),\mathcal{BPS}^{0}_{d,v})=\begin{cases}2g+1,\text{ if }v\text{ is odd},\\ 2g+2,\text{ if }v\text{ is even}.\end{cases}\] The equality \((*)\) is also the consequence (6.4) of Theorem 6.2. Note that the dimensions of \(K^{\mathrm{top}}(\mathbb{M}(d)_{v})\) can be computed immediately using Proposition 8.1 and (8.2), and then \((*)\) can be checked without using window categories. However, by Proposition 8.3, the equality \((*)\) is obtained as a corollary of the Atiyah-Hirzebruch theorem for the smooth varieties \(Y\) and \(\mathbb{C}^{2}\). 
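For the reader's convenience, here is the direct lattice count referred to above; it uses only Proposition 8.1 and the window (8.2) describing the generators of \(\mathbb{M}(d)_{v}\):
\[\dim K_{0}^{\mathrm{top}}(\mathbb{M}(d)_{v})=\#\Big\{w\in\mathbb{Z}\;\Big|\;\tfrac{v}{2}\leqslant w\leqslant\tfrac{v}{2}+2g+1\Big\}=\begin{cases}2g+1,&v\text{ odd},\\ 2g+2,&v\text{ even},\end{cases}\]
since the interval has length \(2g+1\), with half-integer endpoints when \(v\) is odd and integer endpoints when \(v\) is even.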
Further, Proposition 8.3 is useful when considering a non-zero potential for \(Q\). For example, consider the potential \[W:=\sum_{i=1}^{2g+1}e_{i}\overline{e}_{i}.\] Note that \(W\colon Y\to\mathbb{C}\) is smooth. The computation (8.1) implies that: \[\mathcal{BPS}_{d}=\varphi_{W}\mathrm{IC}_{X(d)}=\tau_{*}\varphi_{W}\mathrm{IC} _{Y}=0\text{ and }\mathcal{BPS}^{0}_{(1,0)}=\mathcal{BPS}^{0}_{(0,1)}=\mathrm{IC}_{\mathbb{C}}. \tag{8.3}\] The BPS sheaves have trivial monodromy. Further, Proposition 8.3 implies that: \[\mathbb{S}(d)_{v} \simeq\mathrm{MF}(Y,W)=0\text{ if }v\text{ is odd},\] \[\mathbb{S}(d)_{v} =\langle\mathrm{MF}(Y,W),\mathrm{MF}(\mathbb{C}^{2},0)\rangle \simeq\mathrm{MF}(\mathbb{C}^{2},0)\text{ if }v\text{ is even}.\] Let \(i\in\mathbb{Z}\). The following equality (which also follows by (6.4)) holds by a direct computation: \[\dim K_{i}^{\text{top}}(\mathbb{S}(d)_{v})=\dim H^{\cdot}(X(d),\mathcal{BPS}_{d,v })=\begin{cases}0,\text{ if }v\text{ is odd},\\ 1,\text{ if }v\text{ is even}.\end{cases}\] **Remark 8.4**.: A similar analysis can be done for any symmetric quiver \(Q=(I,E)\) (not necessarily satisfying Assumption 2.1) and a dimension vector \(d=(d^{i})_{i\in I}\in\mathbb{N}^{I}\) such that \(d^{i}\in\{0,1\}\) for every \(i\in I\). We do not give the details for the proofs. Let \(v\in\mathbb{Z}\) such that \(\gcd(\underline{d},v)=1\). Assume \(W=0\). One can show that, for a generic GIT stability \(\ell\in M(d)_{\mathbb{R}}^{W_{d}}\cong M(d)_{\mathbb{R}}\), the GIT quotient \(Y:=R(d)^{\ell\text{-ss}}/G(d)\cong R(d)^{\ell\text{-ss}}/\!\!/G(d)\) is a small resolution \[\tau\colon Y\to X(d).\] Then \(\mathcal{BPS}_{d}^{0}=\tau_{*}\text{IC}_{Y}.\) By [10], there is an equivalence: \[\mathbb{M}(d)_{v}\cong D^{b}(Y).\] The following equality (which is a corollary of Theorem 6.2) follows then by the Atiyah-Hirzebruch theorem for the smooth variety \(Y\): \[\dim K_{0}^{\text{top}}(\mathbb{M}(d)_{v})=\dim K_{0}^{\text{top}}(Y)=\dim H^{ \cdot}(Y)=\dim H^{\cdot}(X(d),\mathcal{BPS}_{d}^{0}).\] Similar computations can be done also for a general \(v\in\mathbb{Z}\). ### Quivers with one vertex and an odd number of loops Let \(g\in\mathbb{N}\). Consider \(Q\) the quiver with one vertex and \(2g+1\) loops. The following is a picture for \(g=1\). (8.4) For \(d\in\mathbb{N}\), recall the good moduli space map: \[\mathcal{X}(d):=\mathfrak{gl}(d)^{\oplus(2g+1)}/GL(d)\to X(d):=\mathfrak{gl}(d )^{\oplus(2g+1)}/\!\!/GL(d).\] For \(g>0\), the variety \(X(d)\) is singular. For every stability condition \(\ell\in M(d)_{\mathbb{R}}^{W_{d}}\), we have that \(\mathcal{X}(d)^{\ell\text{-ss}}=\mathcal{X}(d)\), so we do not obtain resolutions of singularities of \(X(d)\) as in the previous example. There are no known crepant geometric resolutions (in particular, small resolutions) of \(X(d)\). For \(\gcd(d,v)=1\), Spenko-Van den Bergh [14] proved that \(\mathbb{M}(d)_{v}\) is a twisted noncommutative crepant resolution of \(X(d)\). In view of Theorem 6.6, we regard \(\mathbb{M}(d)_{v}\) as the categorical analogue of a small resolution of \(X(d)\). Reineke [13] and Meinhardt-Reineke [12] provided multiple combinatorial formulas for the dimensions of the individual intersection cohomology vector spaces \(\operatorname{IH}^{\bullet}(X(d))\). As noted in Remark 8.2, Theorem 6.6 also provides combinatorial formulas for the total intersection cohomology of \(X(d)\). We explain that our formula recovers a formula already appearing in the work of Reineke [12, Theorem 7.1]. Fix \(v\in\mathbb{Z}\). 
By Proposition 8.1, we need to determine the number of (integral, dominant) weights \(\chi=\sum_{i=1}^{d}c_{i}\beta_{i}\in M(d)^{+}\) with \(\sum_{i=1}^{d}c_{i}=v\) and \(c_{i}\geqslant c_{i-1}\) for every \(2\leqslant i\leqslant d\), such that \[\chi+\rho-v\tau_{d}\in\frac{2g+1}{2}\text{sum}[0,\beta_{i}-\beta_{j}], \tag{8.5}\] where the Minkowski sum is over all \(1\leqslant i,j\leqslant d\). Define \(\widetilde{\chi}\in M(d)\) and \(\widetilde{c}_{i}\in\mathbb{Z}\) for \(1\leqslant i\leqslant d\) as follows: \[\widetilde{\chi}:=\chi-g\cdot(2\rho)=\sum_{i=1}^{d}\widetilde{c}_{i}\beta_{i}.\] Note that, for every \(2\leqslant i\leqslant d\), the inequality \(c_{i}\geqslant c_{i-1}\) becomes: \[\widetilde{c}_{i}-\widetilde{c}_{i-1}+2g\geqslant 0. \tag{8.6}\] A dominant weight \(\chi\) satisfies (8.5) if and only if, for all dominant cocharacters \(\lambda\) of \(T(d)\subset GL(d)\), we have: \[\langle\lambda,\chi+\rho-v\tau_{d}\rangle\leqslant\frac{2g+1}{2}\langle\lambda,\mathfrak{g}^{\lambda>0}\rangle=\frac{2g+1}{2}\langle\lambda,\rho\rangle. \tag{8.7}\] **Proposition 8.5**.: _The inequalities (8.7) hold for all dominant cocharacters \(\lambda\) if and only if they hold for the cocharacters \(\lambda_{k}(z)=(\overbrace{1,\dots,1}^{d-k},\overbrace{z,\dots,z}^{k})\in T(d)\) for \(1\leqslant k\leqslant d\)._ Proof.: In the cocharacter lattice, any dominant cocharacter \(\lambda\) is a linear combination with nonnegative coefficients of \(\lambda_{k}\) for \(1\leqslant k\leqslant d\). Then, if (8.7) holds for all \(\lambda_{k}\), it also holds for all dominant \(\lambda\). We rewrite the conditions (8.7) for \(\lambda_{k}\) using the weight \(\widetilde{\chi}\): \[\langle\lambda_{k},\widetilde{\chi}\rangle\leqslant\langle\lambda_{k},v\tau_{d}\rangle.\] Alternatively, the condition above can be written as: \[\sum_{i=d-k+1}^{d}\widetilde{c}_{i}\leqslant\frac{vk}{d}. \tag{8.8}\] **Definition 8.6**.: Let \(\mathcal{H}_{d,v}^{2g+1}\) be the set of tuples of integers \((\widetilde{c}_{i})_{i=1}^{d}\in\mathbb{Z}^{d}\) satisfying the inequality (8.6) for every \(2\leqslant i\leqslant d\), the inequality (8.8) for every \(1\leqslant k\leqslant d\), and such that \(\sum_{i=1}^{d}\widetilde{c}_{i}=v\). Let \(H_{d,v}^{2g+1}:=\#\mathcal{H}_{d,v}^{2g+1}\). **Remark 8.7**.: The numbers \(H_{d,0}^{2g+1}\) appear in combinatorics as "score sequences of complete tournaments", and in the study of certain \(\mathbb{C}^{*}\)-fixed points in the moduli of \(SL(n)\)-Higgs bundles, see [12, Section 7]. By Proposition 8.1, we have that: \[\dim K_{0}^{\text{top}}(\mathbb{M}(d)_{v})=H_{d,v}^{2g+1}.\] By Theorem 6.6, for any \(v\in\mathbb{Z}\) such that \(\gcd(d,v)=1\), we obtain that: \[\dim\text{IH}^{\cdot}(X(d))=H_{d,v}^{2g+1}. \tag{8.9}\] The above statement was already proved (by different methods) by Reineke and Meinhardt-Reineke by combining [12, Theorem 7.1] and [19, Theorem 4.6], see also [19, Section 4.3]. Note that we assume that the number of loops is odd in order to apply Theorem 6.2. In loc. cit., Reineke also provided combinatorial formulas for \(m\)-loop quivers for \(m\) even. 
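Since Definition 8.6 is purely combinatorial, the numbers \(H_{d,v}^{2g+1}\) can be enumerated directly. The following script is an illustrative brute-force sketch (not taken from the paper): it imposes (8.6) for \(2\leqslant i\leqslant d\), (8.8) for \(1\leqslant k\leqslant d\), and the sum condition, and it searches a box \(\lfloor v/d\rfloor-2g(d-1)\leqslant\widetilde{c}_{i}\leqslant\lceil v/d\rceil+2g(d-1)\), which contains all solutions because (8.8) with \(k=d-1\) forces \(\widetilde{c}_{1}\geqslant v/d\), (8.8) with \(k=1\) forces \(\widetilde{c}_{d}\leqslant v/d\), and by (8.6) consecutive entries can drop by at most \(2g\).

```python
# Brute-force enumeration of the sets H_{d,v}^{2g+1} from Definition 8.6.
# Illustrative sketch only: it imposes (8.6) for 2 <= i <= d, (8.8) for
# 1 <= k <= d, and sum(c) = v, searching a box that contains all solutions.
import math
from itertools import product

def H(d, v, g):
    """Return H_{d,v}^{2g+1}, the number of tuples (c_1, ..., c_d) in Z^d."""
    lo = math.floor(v / d) - 2 * g * (d - 1)   # lower bound for every entry
    hi = math.ceil(v / d) + 2 * g * (d - 1)    # upper bound for every entry
    count = 0
    for c in product(range(lo, hi + 1), repeat=d):
        if sum(c) != v:
            continue
        # (8.6): c_i - c_{i-1} + 2g >= 0 for 2 <= i <= d
        if any(c[i] - c[i - 1] + 2 * g < 0 for i in range(1, d)):
            continue
        # (8.8): sum_{i=d-k+1}^{d} c_i <= v*k/d for 1 <= k <= d
        if any(d * sum(c[d - k:]) > v * k for k in range(1, d + 1)):
            continue
        count += 1
    return count

# Small sanity checks, done by hand from the definition:
# H(1, v, g) = 1 for any v and g; H(2, 1, 1) = 1; H(2, 0, 1) = 2.
print(H(1, 5, 1), H(2, 1, 1), H(2, 0, 1))
```

For instance, under this reading of Definition 8.6 the script returns \(H_{2,1}^{3}=1\) and \(H_{2,0}^{3}=2\), matching a direct check of the two inequalities by hand.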
For \(\gcd(d,v)=1\) and \(n\in\mathbb{Z}_{\geqslant 1}\), the topological K-theory \(K^{\operatorname{top}}_{i}(\mathbb{M}(nd)_{nv})\) is computed from the intersection cohomology of \(X(e)\) for \(e\in\mathbb{N}\), and the set \(S^{nd}_{nv}\). The following is a corollary of Proposition 6.1: **Corollary 8.9**.: _For \(\gcd(d,v)=1\) and \(n\in\mathbb{Z}_{\geqslant 1}\), the set \(S^{nd}_{nv}\) consists of partitions \((d_{i})^{k}_{i=1}\) of \(nd\) such that \(d_{i}=n_{i}d\) for \((n_{i})^{k}_{i=1}\in\mathbb{N}^{k}\) a partition of \(n\)._ **Example 8.10**.: Suppose that \(g=0\). In this case, the variety \(X(d)\) is smooth: \[X(d)=\mathfrak{gl}(d)/\!\!/GL(d)\xrightarrow{\sim}\operatorname{Sym}^{d}(\mathbb{C})\cong\mathbb{C}^{d}.\] The above isomorphism is given by sending an element of \(\mathfrak{gl}(d)\) to the set of generalized eigenvalues. However, \(X(d)^{\operatorname{st}}=\emptyset\) if \(d>1\); thus \(\mathcal{BPS}_{d}=\operatorname{IC}_{\mathbb{C}}\) if \(d=1\), and \(\mathcal{BPS}_{d}=0\) for \(d>1\). Then by Corollary 8.9, we have \(\mathcal{BPS}_{d,v}=0\) unless \(d|v\), in which case \(\mathcal{BPS}_{d,v}=\operatorname{Sym}^{d}(\mathcal{BPS}_{1})=\operatorname{IC}_{X(d)}\). Thus for \(g=0\), we have \[\dim H^{\cdot}\left(X(d),\mathcal{BPS}_{d,v}\right)=\begin{cases}1,\text{ if }d|v,\\ 0,\text{ otherwise}.\end{cases} \tag{8.10}\] On the other hand, by [10, Lemma 3.2] we have that \(\mathbb{M}(d)_{v}=0\) unless \(d|v\), in which case it is the subcategory of \(D^{b}(\mathfrak{X}(d))\) generated by \(\mathcal{O}_{\mathfrak{X}(d)}(v\tau_{d})\), and thus equivalent to \(D^{b}(X(d))\), see [10, Lemma 3.3]. Then: \[\dim K^{\operatorname{top}}_{0}(\mathbb{M}(d)_{v})=\begin{cases}1,\text{ if }d|v,\\ 0,\text{ otherwise}.\end{cases} \tag{8.11}\] For \(g=0\), we can thus verify (6.4) by the direct computations (8.10), (8.11). ### The three loop quiver In this subsection, we make explicit the corollary of Theorem 6.2 for the three loop quiver (8.4) with loops \(\{x,y,z\}\) and with potential \(W=x[y,z]\). The quasi-BPS categories \(\mathbb{S}(d)_{v}\) are the quasi-BPS categories of \(\mathbb{C}^{3}\) and are admissible in the DT category \(\mathcal{DT}(d)\) studied in [PTa]. The quasi-BPS categories \(\mathbb{T}(d)_{v}\) are the quasi-BPS categories of \(\mathbb{C}^{2}\) and are admissible in \(D^{b}(\mathcal{C}(d))\), where \(\mathcal{C}(d)\) is the commuting stack of matrices of size \(d\). For \(n\in\mathbb{N}\), we denote by \(p_{2}(n)\) the number of partitions of \(n\). **Proposition 8.11**.: _Let \((d,v)\in\mathbb{N}\times\mathbb{Z}\) be coprime, let \(n\in\mathbb{N}\) and \(i\in\mathbb{Z}\). Then:_ \[\dim K^{\operatorname{top}}_{i}(\mathbb{S}(nd)_{nv})=\dim K^{\operatorname{top}}_{0}(\mathbb{T}(nd)_{nv})=p_{2}(n).\] Proof.: By a theorem of Davison [10, Theorem 5.1], we have that \[\mathcal{BPS}_{e}=\operatorname{IC}_{\mathbb{C}^{3}}\] for every \(e\in\mathbb{N}\), where \(\mathbb{C}^{3}\hookrightarrow X(e)\) is the subvariety parameterizing three diagonal matrices. Then \(\dim H^{\cdot}(X(e),\mathcal{BPS}_{e})=1\), and so \(\dim\operatorname{Sym}^{k}\left(H^{\cdot}(X(e),\mathcal{BPS}_{e})\right)=1\) for all positive integers \(e\) and \(k\). Then \(H^{\cdot}\left(X(nd),\mathcal{BPS}_{A}\right)\) is also one-dimensional for every \(A\in S^{nd}_{nv}\). Note that \(\#S^{nd}_{nv}=p_{2}(n)\) by Corollary 8.9. 
Then \[H^{\cdot}(X(nd),\mathcal{BPS}_{nd,nv})=\bigoplus_{A\in S^{nd}_{nv}}H^{\cdot}(X(nd),\mathcal{BPS}_{A})=\mathbb{Q}^{\oplus p_{2}(n)}.\] The monodromy is trivial on \(H^{\cdot}(X(nd),\mathcal{BPS}_{nd,nv})\). By Theorems 6.2 and 7.6, we obtain the desired computations. **Remark 8.12**.: By Theorem 6.2, the topological K-theory of quasi-BPS categories may be determined whenever one can compute the BPS cohomology and the set \(S^{d}_{v}\). Proposition 8.11 is an example of such a computation. We mention two other computations for the three loop quiver with potentials \(W^{\prime}:=x[y,z]+z^{a}\) (for \(a\geqslant 2\)) and \(W^{\prime\prime}:=x[y,z]+yz^{2}\). Let \(\mathcal{BPS}^{\prime}_{d}\) and \(\mathbb{S}^{\prime}(d)_{v}\) be the BPS sheaves and the quasi-BPS categories of \((Q,W^{\prime})\). Denote similarly the BPS sheaves and the quasi-BPS categories of \((Q,W^{\prime\prime})\). By [10, Theorem 1.5], we have that \(H^{\cdot}(X(d),\mathcal{BPS}^{\prime}_{d})^{\mathrm{inv}}=0\) because \(H^{\cdot}(\mathbb{C},\varphi_{t^{a}})^{\mathrm{inv}}=0\). Then Theorem 6.2 implies that, for every \(i,v\in\mathbb{Z}\) with \(\gcd(d,v)=1\): \[K^{\mathrm{top}}_{i}(\mathbb{S}^{\prime}(d)_{v})=0.\] By [10, Corollary 7.2], we have that \(H^{\cdot}(X(d),\mathcal{BPS}^{\prime\prime}_{d})^{\mathrm{inv}}=H^{\cdot}(X(d),\mathcal{BPS}^{\prime\prime}_{d})\) is one-dimensional. As in Proposition 8.11, we have that, for every \(i,v\in\mathbb{Z}\): \[\dim K^{\mathrm{top}}_{i}(\mathbb{S}^{\prime\prime}(d)_{v})=p_{2}(\gcd(d,v)).\] ## 9. Etale covers of preprojective algebras In this section, we prove an extension of Theorem 7.6 to etale covers of preprojective stacks, which we use to compute the topological K-theory of quasi-BPS categories of K3 surfaces in [26]. We first define quasi-BPS categories and BPS cohomology for etale covers of preprojective stacks. BPS sheaves are defined by base-change from the BPS sheaves of preprojective algebras. Quasi-BPS categories are defined via graded matrix factorizations and the Koszul equivalence, see Subsection 9.2. Recall that Theorem 7.6 follows, via dimensional reduction and the Koszul equivalence, from Theorem 6.2. The two main ingredients in the proof of Theorem 6.2 are the semiorthogonal decomposition from Theorem 2.5 and the construction of the cycle map (6.9) from Theorem 6.3. The analogous statements for etale covers are Propositions 9.5 and 9.6, respectively. We will use the notations and constructions from Subsection 7.1. Throughout this section, we fix a quiver \(Q^{\circ}\) satisfying Assumption 2.2 and a dimension vector \(d\in\mathbb{N}^{I}\). We begin by discussing the setting and by stating Theorem 9.2, the main result of this section. ### Preliminaries Let \(E\) be an affine variety with an action of \(G:=G(d)\) and with a \(G\)-equivariant etale map \[e\colon E\to R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}.\] Consider the quotient stacks with good moduli spaces \[\pi_{F}\colon\mathcal{F}:=(E\oplus\mathfrak{g})/G\to F:=(E\oplus\mathfrak{g})/\!\!/G.\] Consider the moment map \(\mu\colon E\to\mathfrak{g}^{\vee}\) and the induced function \(f\colon\mathcal{F}\to\mathbb{C}\), where \(f(x,v)=\langle\mu(x),v\rangle\) for \(x\in E\) and \(v\in\mathfrak{g}\). 
Consider the quotient stack with good moduli space \[\pi_{L}\colon\mathcal{L}:=\mu^{-1}(0)/G\to L:=\mu^{-1}(0)/\!\!/G.\] There are maps: \[\begin{CD}\mathcal{L}^{\mathrm{cl}}@>{e}>{}>\mathcal{P}(d)^{\mathrm{cl}}\\ @V{\pi_{L}}VV@VV{\pi_{P,d}}V\\ L@>{e}>{}>P(d).\end{CD}\] Throughout this section, we assume that both horizontal maps in the above diagram are etale. Note that the moment map \(\mu\) has image in the traceless subalgebra \(\mathfrak{g}_{0}^{\vee}\cong\mathfrak{g}_{0}\subset\mathfrak{g}\). Let \(\mu_{0}\colon E\to\mathfrak{g}_{0}^{\vee}\) and let \(\mathcal{L}^{\mathrm{red}}:=\mu_{0}^{-1}(0)/G\). **Definition 9.1**.: Let \(\mathcal{BPS}_{d}^{p}\in\mathrm{Perv}(P(d))\) be the preprojective BPS sheaf and let \[\mathcal{BPS}^{L}:=e^{*}(\mathcal{BPS}_{d}^{p})\in\mathrm{Perv}(L). \tag{9.1}\] One defines \(\mathcal{BPS}_{\delta}^{L}\) for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and \(\mathcal{BPS}_{d,v}^{L}:=\mathcal{BPS}_{v\tau_{d}}^{L}\) as in (7.2). By Theorem 2.9, there is a semiorthogonal decomposition: \[D^{b}\left(\mathcal{P}(d)\right)_{v}=\big{\langle}\mathbb{A}(d)_{v},\mathbb{T}(d)_{v}\big{\rangle}\] for any \(v\in\mathbb{Z}\). The purpose of this section is to prove the following: **Theorem 9.2**.: _Let \(v\in\mathbb{Z}\). There are subcategories \(\mathbb{T}=\mathbb{T}(L)_{v}\) and \(\mathbb{A}=\mathbb{A}(L)_{v}\) of \(D^{b}(\mathcal{L})_{v}\) such that:_ 1. _there is a semiorthogonal decomposition_ \(D^{b}(\mathcal{L})_{v}=\langle\mathbb{A},\mathbb{T}\rangle\)_,_ 2. _if_ \(e\) _is the identity, then_ \(\mathbb{T}=\mathbb{T}(d)_{v}\) _and_ \(\mathbb{A}=\mathbb{A}(d)_{v}\)_,_ 3. _if_ \(h\colon E^{\prime}\to E\) _is an etale map inducing_ \(e^{\prime}:=e\circ h\colon E^{\prime}\to R^{\circ}(d)\oplus R^{\circ}(d)^{\vee}\)_, and if we consider_ \(\pi_{L^{\prime}}\colon\mathcal{L}^{\prime}\to L^{\prime}\) _and the categories_ \(\mathbb{A}(L^{\prime}),\mathbb{T}(L^{\prime})\subset D^{b}(\mathcal{L}^{\prime})\) _for_ \(E^{\prime}\)_, then_ \(h\) _induces functors_ \(h^{*}\colon\mathbb{T}(L)_{v}\to\mathbb{T}(L^{\prime})_{v}\) _and_ \(h^{*}\colon\mathbb{A}(L)_{v}\to\mathbb{A}(L^{\prime})_{v}\)_,_ 4. _for any_ \(i,\ell\in\mathbb{Z}\)_, the cycle map (_3.6_) for_ \(\mathcal{L}\) _induces isomorphisms_ \[\mathrm{c}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2\ell-i}(L,\mathcal{BPS}_{d,v}^{L}).\] _Further, one can also define categories \(\mathbb{T}^{\mathrm{red}},\mathbb{A}^{\mathrm{red}}\subset D^{b}(\mathcal{L}^{\mathrm{red}})_{v}\) which satisfy the analogous conditions to (1)-(4) above. In particular, the map \(l^{\prime}\colon\mathcal{L}^{\mathrm{red}}\to\mathcal{L}\) induces an isomorphism_ \[\mathrm{c}\circ l^{\prime}_{*}\colon\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T}^{\mathrm{red}})\xrightarrow{\sim}\mathrm{gr}_{\ell}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2\ell-i}(L,\mathcal{BPS}_{d,v}^{L}). \tag{9.2}\] We will only explain the constructions for \(\mathcal{L}\); the case of \(\mathcal{L}^{\mathrm{red}}\) is similar. In Subsection 9.2, we define the categories \(\mathbb{T}\) and \(\mathbb{A}\) using graded matrix factorizations and the Koszul equivalence. In Subsection 9.3, we prove the fourth claim in Theorem 9.2. ### Quasi-BPS categories for etale covers We will use the setting from Subsection 9.1. 
There is a Cartesian diagram, where the maps \(e\) are etale maps: \[\begin{CD}\mathcal{F}@>{e}>{}>\mathcal{X}(d)\\ @V{\pi_{F}}VV@VV{\pi_{X,d}}V\\ F@>{e}>{}>X(d).\end{CD}\] By Theorem 2.9, there is a semiorthogonal decomposition \[D^{b}\big{(}\mathcal{X}(d)\big{)}_{v}=\big{\langle}\mathbb{B}(d)_{v},\mathbb{M}(d)_{v}\big{\rangle}. \tag{9.3}\] Define subcategories \(\mathbb{B}=\mathbb{B}(F)\), \(\mathbb{M}=\mathbb{M}(F)\) of \(D^{b}(\mathcal{F})_{v}\) to be classically generated (see Subsection 2.15) by \(e^{*}\mathbb{B}(d)_{v}\), \(e^{*}\mathbb{M}(d)_{v}\) respectively. Note that, for \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), we can define analogously \[\mathbb{M}(\delta)=\mathbb{M}(F;\delta)\subset D^{b}(\mathcal{F}). \tag{9.4}\] Lemma 2.11 implies that: **Corollary 9.3**.: _There is a semiorthogonal decomposition_ \[D^{b}(\mathcal{F})_{v}=\langle\mathbb{B},\mathbb{M}\rangle.\] _If \(e\) is the identity, then \(\mathbb{M}=\mathbb{M}(d)_{v}\) and \(\mathbb{B}=\mathbb{B}(d)_{v}\)._ Consider the category of graded matrix factorizations \(\operatorname{MF}^{\operatorname{gr}}(\mathcal{F},f)\), where the grading has weight \(2\) on the summand \(\mathfrak{g}\) and weight \(0\) on \(E\). By the Koszul equivalence, we have that: \[\kappa_{L}\colon D^{b}(\mathcal{L})\xrightarrow{\sim}\operatorname{MF}^{\operatorname{gr}}(\mathcal{F},f). \tag{9.5}\] Define the subcategories of \(D^{b}(\mathcal{L})\): \[\mathbb{T}=\mathbb{T}(L):=\kappa_{L}^{-1}\left(\operatorname{MF}^{\operatorname{gr}}(\mathbb{M},f)\right),\ \mathbb{A}=\mathbb{A}(L):=\kappa_{L}^{-1}\left(\operatorname{MF}^{\operatorname{gr}}(\mathbb{B},f)\right).\] By [PTa, Proposition 2.5], we obtain: **Corollary 9.4**.: _The properties (1), (2), and (3) in the statement of Theorem 9.2 hold for the categories \(\mathbb{A}\) and \(\mathbb{T}\) of \(D^{b}(\mathcal{L})\)._ We also need a version of Theorem 2.8 for etale covers. Consider the forget-the-framing map \(\tau_{\alpha}\colon\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\to\mathcal{X}(d)\). Let \(\mathcal{F}^{\alpha f}\) be such that the following diagram is Cartesian: (9.6) Recall the semiorthogonal decomposition of Theorem 2.5: \[D^{b}\left(\mathcal{X}^{\alpha f}(d)^{\operatorname{ss}}\right)=\Big{\langle}\tau_{\alpha}^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right)\Big{\rangle}.\] For a partition \(\underline{d}:=(d_{i})_{i=1}^{k}\) of \(d\in\mathbb{N}^{I}\) and for integer weights \(\underline{v}:=(v_{i})_{i=1}^{k}\), define \(\mathbb{M}(\underline{d},\underline{v})\subset D^{b}\left(\mathcal{F}^{\alpha f}\right)\) to be classically generated by \(e^{*}\tau_{\alpha}^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right)\). By Lemma 2.11, we obtain that: **Proposition 9.5**.: _There is a semiorthogonal decomposition_ \[D^{b}\left(\mathcal{F}^{\alpha f}\right)=\Big{\langle}\mathbb{M}(\underline{d},\underline{v})\Big{\rangle},\] _where the left hand side is as in Theorem 2.5._ The category \(\mathbb{M}(\underline{d},\underline{v})\) can be described via the Hall product, same as in Theorem 2.5. Let \(\lambda\) be an antidominant cocharacter associated to \((d_{i})_{i=1}^{k}\). Consider the diagram: \[\mathcal{F}^{\lambda}\xleftarrow{q_{F}}\mathcal{F}^{\lambda\geqslant 0}\xrightarrow{p_{F}}\mathcal{F}.\] There is an etale map \(e\colon\mathcal{F}^{\lambda}\to\mathcal{X}(d)^{\lambda}\cong\times_{i=1}^{k}\mathcal{X}(d_{i})\). 
Then the Hall product \[*_{F}=p_{F*}q_{F}^{*}\colon D^{b}(\mathcal{F}^{\lambda})\to D^{b}(\mathcal{F})\] is base-change of the categorical Hall product \(D^{b}\big{(}\times_{i=1}^{k}\mathcal{X}(d_{i})\big{)}\cong\otimes_{i=1}^{k}D^{ b}(\mathcal{X}(d_{i}))\to D^{b}(\mathcal{X}(d))\). Let \(\widetilde{\mathbb{M}}(\underline{d},\underline{v})\subset D^{b}\left( \mathcal{F}(\underline{d})\right)\) be the subcategory classically generated by \(e^{*}\left(\otimes_{i=1}^{k}\mathbb{M}(d_{i})_{v_{i}}\right)\). There is then an equivalence: \[\tau_{\alpha,F}^{*}\circ*_{F}\colon\widetilde{\mathbb{M}}(\underline{d}, \underline{v})\xrightarrow{\sim}\mathbb{M}(\underline{d},\underline{v}). \tag{9.7}\] ### Comparison with BPS cohomology Recall the notation from Subsection 7.1. Consider the commutative diagram: Recall the sheaf \(\mathcal{BPS}^{L}\in\operatorname{Perv}(L)\) defined in (9.1) and consider the BPS sheaf \(\mathcal{BPS}_{d}\in\operatorname{Perv}(X(d))\) for the tripled quiver with potential \((Q,W)\) associated to \(Q^{\circ}\). Define the BPS sheaf: \[\mathcal{BPS}^{F}=e^{*}(\mathcal{BPS}_{d})\in\operatorname{Perv}(F).\] For \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\), define \[\mathcal{BPS}_{\delta}^{F}\in D_{\operatorname{con}}^{b}(F),\,\mathcal{BPS}_ {d,v}^{F}:=\mathcal{BPS}_{v\tau_{d}}^{F}\] as in (6.5). Note that, by base-change of the decomposition (6.18), we obtain the analogous decomposition for \(\pi_{F}\colon\mathcal{F}\to F\): \[\pi_{F*}\varphi_{f}\mathrm{IC}_{\mathcal{F}}[-1]=\bigoplus_{A\in\mathcal{P}}e ^{*}(\mathrm{Q}_{A}). \tag{9.8}\] By base-change of (6.17), we obtain the following decomposition for the map \(\pi_{\alpha,F}:=\pi_{F}\circ\tau_{\alpha,F}\colon\mathcal{F}^{\alpha f}\to F\): \[\pi_{\alpha,F*}\varphi_{f}\mathbb{Q}_{\mathcal{F}^{\alpha f}}[\dim\mathcal{F} -1]=\bigoplus_{A\in\mathcal{P}_{\alpha}}e^{*}(\mathrm{Q}_{A}). \tag{9.9}\] The monodromy on \(H^{\bullet}(\mathcal{F},\varphi_{f})\) is trivial, so there is a cycle map: \[\mathrm{c}\colon\operatorname{gr}_{a}K_{i}^{\operatorname{top}}\left( \operatorname{MF}(\mathcal{F},f)\right)\xrightarrow{\sim}H^{2\dim\mathcal{F}- i-2a}(\mathcal{F},\varphi_{f}\mathbb{Q}_{\mathcal{F}}[-1])\oplus H^{2\dim \mathcal{F}-i-2a}(\mathcal{F},\varphi_{f}\mathbb{Q}_{\mathcal{F}}[-2]). \tag{9.10}\] We now define a cycle map from topological K-theory of quasi-BPS categories to BPS cohomology, which is the analogue of Theorem 6.2. **Proposition 9.6**.: _Let \(\delta\in M(d)_{\mathbb{R}}^{W_{d}}\) and recall the categories \(\mathbb{M}(\delta)\) from (9.4). The cycle map (9.10) has image in_ \[\mathrm{c}\colon\operatorname{gr}_{a}K_{i}^{\operatorname{top}}\left( \operatorname{MF}(\mathbb{M}(\delta),f)\right)\to H^{\dim\mathcal{F}-i-2a}(F, \mathcal{BPS}_{\delta}^{F})\oplus H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{ \delta}^{F}[-1]). \tag{9.11}\] _Thus, for \(\delta=v\tau_{d}\) and \(\mathbb{M}=\mathbb{M}(v\tau_{d})\), the cycle map (9.10) has image in_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}(\mathbb{M },f)\right)\to H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F})\oplus H^{ \dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F}[-1]). \tag{9.12}\] Proof.: The same argument used in the proof of Theorem 6.3 applies here. The \(\lambda\)-widths (see (6.28)) of the category \(\mathbb{M}(\delta)\) are equal to the \(\lambda\)-widths of the category \(\mathbb{M}(d;\delta)\) for all cocharacters \(\lambda\). The analogue of Proposition 6.15 then holds for \(\mathrm{MF}(\mathbb{M}(\delta),f)\). 
There is an explicit decomposition of \(\pi_{F*}\mathrm{IC}_{\mathcal{F}}\) obtained by base-change from (6.16), where the summands are in the image of (the base-change of the) Hall product. In Subsection 6.6, we constructed the map (6.36), proved Proposition 6.16, and noted corollaries of Proposition 6.16. There are versions of the map (6.36) and of Proposition 6.16 for \(F\), obtained by base-change, and the results in Subsection 6.6 also apply for \(\pi_{F*}\mathrm{IC}_{\mathcal{F}}\), and thus for \(\pi_{F*}\varphi_{f}\mathrm{IC}_{\mathcal{F}}[-1]\). We next prove the analogue of Theorem 6.2. **Proposition 9.7**.: _The cycle map (9.12) is an isomorphism._ Proof.: The same argument used to prove Theorem 6.2 applies here, see Subsection 6.3, that is, the statement follows from comparing summands in the semiorthogonal decomposition (9.5) with summands in the decomposition (9.9). The cycle map (9.12) is injective by (9.10) and the admissibility of \(\mathrm{MF}^{\mathrm{gr}}(\mathbb{M},f)\) in \(\mathrm{MF}^{\mathrm{gr}}(\mathcal{F},f)\). Consider a partition \(\underline{d}=(d_{i})_{i=1}^{k}\) of \(d\) and weights \(\underline{v}=(v_{i})_{i=1}^{k}\in\mathbb{Z}^{k}\). Consider the perverse sheaf \[\mathcal{BPS}_{\underline{d},\underline{v}}:=\oplus_{*}\left(\boxtimes_{i=1}^{k}\mathcal{BPS}_{d_{i},v_{i}}\right)\in\mathrm{Perv}(X(d)),\] where \(\oplus\colon\times_{i=1}^{k}X(d_{i})\to X(d)\) is the direct sum map. By Proposition 9.6 for the disjoint union of \(k\) copies of \(Q\), dimension vector \((d_{i})_{i=1}^{k}\in\left(\mathbb{N}^{I}\right)^{k}\), and \(\delta=\sum_{i=1}^{k}v_{i}\tau_{d_{i}}\), there is an injective map for any \(i\in\mathbb{Z}\): \[\mathrm{gr}.K_{i}^{\mathrm{top}}\left(\mathbb{M}(\underline{d},\underline{v})\right)\hookrightarrow H^{\cdot}\left(F,e^{*}\mathcal{BPS}_{\underline{d},\underline{v}}\right).\] The claim now follows as in the proof of Theorem 6.2. We prove the analogue of Theorem 7.6. **Proposition 9.8**.: _The cycle map obtained by composing (9.10) with the forget-the-potential map is an isomorphism:_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}^{\mathrm{gr}}(\mathcal{F},f)\right)\xrightarrow{\sim}H^{2\dim\mathcal{F}-i-2a}(\mathcal{F},\varphi_{f}[-1]). \tag{9.13}\] _Thus there is an isomorphism:_ \[\mathrm{c}\colon\mathrm{gr}_{a}K_{i}^{\mathrm{top}}\left(\mathrm{MF}^{\mathrm{gr}}(\mathbb{M},f)\right)\to H^{\dim\mathcal{F}-i-2a}(F,\mathcal{BPS}_{d,v}^{F}). \tag{9.14}\] _Further, there is an isomorphism:_ \[\mathrm{gr}_{a}K_{i}^{\mathrm{top}}(\mathbb{T})\xrightarrow{\sim}H^{-2a-i}(L,\mathcal{BPS}_{d,v}^{L}).\] Proof.: The isomorphism (9.13) follows from Proposition 5.2. The isomorphism (9.14) follows then from Proposition 9.6. The last isomorphism follows from (9.14) and the compatibility between dimensional reduction and the Koszul equivalence from Proposition 5.2. Proof of Theorem 9.2.: The first three properties hold by Corollary 9.4. The fourth property follows from Proposition 9.8. The statement for reduced stacks follows similarly. The isomorphism (9.2) also follows directly from Proposition 3.5. We also note the following analogue of Corollary 7.2. **Proposition 9.9**.: _The Chern character map_ \[\operatorname{ch}\colon K_{i}^{\operatorname{top}}(\mathbb{T})\hookrightarrow G_{i}^{\operatorname{top}}(\mathcal{L})\to\bigoplus_{j\in\mathbb{Z}}H_{i+2j}^{\operatorname{BM}}(\mathcal{L})\] _is injective._ Proof.: The proof is analogous to that of Corollary 7.2. 
The claim follows from Proposition 4.10, a version of Proposition 9.5 involving the potential, and the Koszul equivalence.
In previous work, we introduced and studied quasi-BPS categories associated with symmetric quivers with potential, preprojective algebras, and local surfaces. These categories have properties resembling those of BPS invariants/cohomologies in enumerative geometry; for example, they play an important role in categorical wall-crossing formulas. In this paper we make the relation between quasi-BPS categories and BPS cohomologies more precise via a cycle map for topological K-theory. We show that the topological K-theory of quasi-BPS categories carries a filtration whose associated graded equals the monodromy-invariant part of BPS cohomology. Along the way, we compute the topological K-theory of categories of matrix factorizations in terms of monodromy-invariant vanishing cycles (Blanc-Robalo-Toën-Vezzosi
2309.13774
Combinatorial summation of Feynman diagrams: Equation of state of the 2D SU(N) Hubbard model
Feynman's diagrammatic series is a common language for a formally exact theoretical description of systems of infinitely-many interacting quantum particles, as well as a foundation for precision computational techniques. Here we introduce a universal framework for efficient summation of connected or skeleton Feynman diagrams for generic quantum many-body systems. It is based on an explicit combinatorial construction of the sum of the integrands by dynamic programming, at a computational cost that can be made only exponential in the diagram order on a classical computer and potentially polynomial on a quantum computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo calculation of the equation of state of the $2D$ $SU(N)$ Hubbard model in an experimentally relevant regime, which has remained challenging for state-of-the-art numerical methods.
Evgeny Kozik
2023-09-24T23:07:56
http://arxiv.org/abs/2309.13774v4
# Combinatorial summation of Feynman diagrams: Equation of state of the \(2d\)\(Su(n)\) Hubbard model ###### Abstract Feynman's diagrammatic series is a common language for a formally exact theoretical description of systems of infinitely-many interacting quantum particles, as well as a foundation for precision computational techniques. Here we introduce a universal framework for efficient summation of connected or skeleton Feynman diagrams for generic quantum many-body systems. It is based on an explicit combinatorial construction of the sum of the integrands by dynamic programming, at a computational cost that can be made only exponential in the diagram order on a classical computer and potentially polynomial on a quantum computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo calculation of the equation of state of the \(2D\)\(SU(N)\) Hubbard model in an experimentally relevant regime, which has remained challenging for state-of-the-art numerical methods. ## I Introduction In the diagrammatic Monte Carlo (DiagMC) approach to correlated systems [1; 2; 3; 4], thermodynamic observables or correlation functions are expressed in terms of a sum of all connected Feynman diagrams of the many-body perturbation theory [5], which is then sampled stochastically. Each diagram represents a formula for computing one term in this sum, which in the simplest case consists of a product of one-particle non-interacting Green's functions \(G^{0}\) and a number \(n\) (the diagram order) of interaction vertices \(V\)[66] (see Fig. 1a,b), integrated over all internal variables. The key advantage of this approach is that the diagrams are defined directly in the thermodynamic limit, circumventing the need to extrapolate the result with the system size, which is typically hard in unbiased quantum Monte Carlo methods due to the negative sign problem [6; 7]. However, the control of precision in DiagMC relies on its ability to accurately compute the sum of all diagrams to a sufficiently high order \(n\), which is inhibited by a factorially increasing with \(n\) number of diagrams and hence exploding Monte Carlo variance. Indeed, in correlated regimes all \(\sim n!\) order-\(n\) diagrams are typically of comparable magnitude [8], which largely negates the chief advantage of Monte Carlo - that of importance sampling. This suggests that the summation over diagram topologies and indices on which there is only weak dependence could be done deterministically to a similar effect, or more efficiently if such summation could be performed faster than in \(\mathcal{O}(n!)\) steps. For fermionic systems, where the diagrams have alternating signs, this also helps lower the Monte Carlo variance [9; 10; 11; 12]. Crucially, if the computational cost could be reduced to exponential in \(n\), it was shown in Ref. [11] (with an extension to divergent series [13], if necessary) that the computational time would scale only _polynomially_ with the inverse of the desired error bound. An instructive example is the \(SU(N)\)-symmetric Hubbard model for \(N\) species of fermions. An approximate large-\(N\) (pseudo-)spin symmetry emerges in multi-orbital condensed matter systems due to orbital degeneracy [14; 15]. It is relevant to the description of, e.g., transition-metal oxides and orbital-selective Mott and superconducting transitions [14; 16; 17; 18; 19], graphene and twisted bilayer graphene [15; 19; 20; 21], and is expected to harbour exotic phases of matter, such as topologically non-trivial spin liquids [22]. 
However, it poses a serious challenge for precision numerical methods owing to the additional exponential scaling of the Hilbert space with \(N\), aggravating the sign problem [23]. Existing DiagMC algorithms based on determinantal summation of connected diagrams [9; 10], which are very efficient in the \(SU(2)\) case, are limited by the rigid structure of the determinant: the \(\sim N^{2}/2\) choices for each of the \(n\) interaction lines increase the computational cost of summing all diagrams of order \(n\) by a factor \(\sim(N^{2}/2)^{n}\). The recent studies by Ibarra-Garcia-Padilla _et al._[24] using the determinantal quantum Monte Carlo (DQMC) [25; 26] and numerical linked-cluster expansion (NLCE) [27; 28] methods at finite temperature, and by Feng _et al._[29] using the auxiliary-field quantum Monte Carlo (AFQMC) method [30] with improvable constraints [31] at zero temperature, revealed a rich phase diagram of the \(SU(N)\) Hubbard model at \(N=3\) and density \(\langle n\rangle=1\). At large \(N\), however, unbiased numerical methods are currently outperformed by experimental realisations of the system with ultracold alkaline-earth-like atoms in optical lattices [32; 33; 34; 35; 36; 37; 38]--analogue quantum simulators [39; 40]--in accessing the regimes of low temperatures and strong correlations [37; 38]. Here we develop a framework for efficient evaluation of Feynman's diagrammatic series of arbitrary structure by deterministic summation of all diagram integrands. The approach is based on an explicit combinatorial construction of each term in the sum, one Green's function at a time, whereby at each step the result is maximally factorised into sums of single Green's functions by dynamic programming. Specifically, the result takes the form of a directed graph (Fig. 1d), with each node being a sum of contributions from its incoming edges, and each edge conveying the value of the previous node multiplied by a Green's function. In this approach, the \(SU(N)\) symmetry is accounted for by merely an additional multiplication of certain edges by a constant factor, while all connected diagrams of order \(n\) can be summed in at most \(\mathcal{O}(n^{3}4^{n})\) steps independently of \(N\). This is reduced for the special case of \(N=1\) (spinless fermions or bosons) and \(SU(2)\) Hubbard model to \(\mathcal{O}(n^{3}3^{n})\). The factorisation of the sum, which serves to minimise the number of repeated uses of each Green's function value, is the essence of the speed-up. As a byproduct, the result is also symmetrised over all \(n!\) permutations of interaction lines, helping to further reduce the variance of the DiagMC evaluation of the series. Following Ref. [11] (and [13] in the case of a divergent series), the exponential computational cost of this approach implies polynomial scaling of the calculation time with the required accuracy. The approach admits a vector formulation, which is potentially suitable for a realisation on a quantum computer with a further exponential speed-up. We apply the combinatorial summation (CoS) technique to a calculation of the equation of state (EoS) of the \(2D\)\(SU(N)\) Hubbard model in the case of \(N=6\), which is relevant for experiments, but hard for numerical methods. 
We first address the low-temperature regime studied very recently by Pasqualetti _et al._[38], where the system was realised using the 6 nuclear spin states of \({}^{173}\)Yb atoms loaded in an optical lattice, and the experimentally obtained EoS was cross-benchmarked against unbiased DQMC calculations. The range of the CoS technique is then explored by extending the calculations to lower temperatures and greater interaction strengths, where the sign problem is known to rapidly intensify [23] and experimental data for \(N=6\) cannot be reliably captured by numerical methods [37]. At the low-temperature/strong-coupling boundary of the studied regime, traits of a developing (pseudo-)gapped state are observed. ## II Combinatorial summation For simplicity, let us confine ourselves to the fermionic \(SU(N)\) Hubbard model from the start, which is defined by the Hamiltonian \[\hat{H}=-t\sum_{\langle i,j\rangle,\sigma}\left(\hat{c}^{\dagger}_{i\sigma}\hat{c }_{j\sigma}+H.c.\right)+\frac{U}{2}\sum_{i,\sigma_{1}\neq\sigma_{2}}\hat{n}_{i \sigma_{1}}\hat{n}_{i\sigma_{2}}-\mu\sum_{i,\sigma}\hat{n}_{i\sigma}. \tag{1}\] Here the operators \(\hat{c}^{\dagger}_{i\sigma}\) and \(\hat{c}_{i\sigma}\) create and annihilate a fermion on site \(i\) with the spin \(\sigma=1,\ldots,N\), respectively, \(\hat{n}_{i\sigma}=\hat{c}^{\dagger}_{i\sigma}\hat{c}_{i\sigma}\) is the number operator, \(t\) the hopping amplitude, \(U\) the on-site interaction, \(\mu\) the chemical potential, and \(\langle i,j\rangle\) implies that the summation is over nearest-neighbour lattice sites. A thermodynamic observable, such as, e.g., the average potential energy, is expressed diagrammatically (Fig. 1a,b) as the sum of all connected closed-loop diagrams obtained by linking vertices \(\alpha\), representing a point on the lattice \(i_{\alpha}\) and in imaginary time \(\tau_{\alpha}\), by the interaction lines \(V_{\sigma_{\alpha}\sigma_{\alpha}^{\prime}\sigma\sigma\sigma_{\beta}^{\prime }}(\alpha,\beta)=\frac{U}{2}\delta_{\alpha_{\alpha},\sigma_{\alpha}^{\prime} }\delta_{\sigma_{\beta},\sigma_{\beta}^{\prime}}(1-\delta_{\sigma_{\alpha}, \sigma_{\beta}})\delta_{i_{\alpha},i_{\beta}}\delta(\tau_{\alpha}-\tau_{ \beta})\) and non-interacting propagators (Green's functions) \(G^{0}_{\sigma}(\alpha,\beta)=-\langle\mathcal{T}\hat{c}_{i_{\beta}\sigma}( \tau_{\beta})\hat{c}^{\dagger}_{i_{\alpha}\sigma}(\tau_{\alpha})\rangle_{0}\), where \(\mathcal{T}\) is the time ordering operator and the statistical average \(\langle\ldots\rangle_{0}\) is taken with the Hamiltonian at \(U=0\), and summing or integrating the result over all its \(\sigma_{\alpha},i_{\alpha},\tau_{\alpha}\) variables. It is well known--and used in finite-size determinant diagrammatic Monte Calro methods [41; 42]--that the sum of all combinations of \(n\) interactions with the propagators is generated by the determinant of a \(2n\times 2n\) matrix \(g_{\alpha\beta}=G^{0}(\alpha,\beta)\), \(\alpha,\beta=1,\ldots,2n\) (the spin indices are omitted for clarity), multiplied by the corresponding values of \(V(\alpha,\beta)\). This way the \(2n!\) terms can be produced extremely efficiently in \(\mathcal{O}(n^{3})\) operations, but having to eliminate the unwanted disconnected diagrams from the determinant afterwards requires at least an exponential number of steps [10]. Our strategy, in contrast, will be to not generate the disconnected diagrams from the start. 
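For concreteness, a minimal sketch (standard textbook expressions, not code from the paper) of the non-interacting propagator \(G^{0}\) that enters the diagrams, evaluated on a finite periodic \(L\times L\) lattice as a stand-in for the thermodynamic limit; the lattice size and parameter values are arbitrary choices made here.

```python
import numpy as np

def g0_realspace(L, t, mu, beta, tau):
    """Non-interacting Green's function G^0(r, tau) = -<T c_r(tau) c_0^dag(0)>_0
    on an L x L square lattice with periodic boundaries, for 0 < tau < beta."""
    k = 2 * np.pi * np.arange(L) / L
    kx, ky = np.meshgrid(k, k, indexing="ij")
    xi = -2 * t * (np.cos(kx) + np.cos(ky)) - mu        # dispersion measured from mu
    n_f = 1.0 / (np.exp(beta * xi) + 1.0)               # Fermi occupation
    g_k = -np.exp(-xi * tau) * (1.0 - n_f)              # G^0(k, tau) for 0 < tau < beta
    # back to real space: G^0(r, tau) = (1/L^2) sum_k e^{i k.r} G^0(k, tau)
    return np.real(np.fft.ifft2(g_k))

g0 = g0_realspace(L=32, t=1.0, mu=0.5, beta=1 / 0.3, tau=0.7)
print(g0[0, 0], g0[1, 0])   # on-site and nearest-neighbour values
```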
### Evaluation of the determinant A good starting point is the algorithm for division-free calculation of the determinant [43] based on its permutation cycle decomposition. In terms of the cycle covers \(\mathcal{C}=(\alpha_{1},\alpha_{2},\ldots\alpha_{c_{1}})\ldots(\alpha_{c_{m-1 }+1},\alpha_{c_{m-1}+2},\ldots,\alpha_{2n})\), representing an ordered sequence of matrix indices (called elements) grouped into \(m\) cycles by the parenthesis, the determinant becomes \[\det\{g_{\alpha\beta}\}=\sum_{\mathcal{C}}\operatorname{sign}\mathcal{C} \cdot\operatorname{weight}\mathcal{C}, \tag{2}\] where \(\operatorname{sign}\mathcal{C}=(-1)^{2n+m}\) and \(\operatorname{weight}\mathcal{C}=(g_{\alpha_{1}\alpha_{2}}\ldots g_{\alpha_{ c_{1}}\alpha_{1}})\ldots(g_{\alpha_{m-1}+1\alpha_{c_{m-1}+2}}\ldots g_{ \alpha_{2n}\alpha_{c_{m-1}+1}})\). For instance, the cycle cover \(\mathcal{C}=(1\,2\,5\,3)(4\,8\,7)(6)\) has \(\operatorname{sign}\mathcal{C}=(-1)^{3}\) and \(\operatorname{weight}\mathcal{C}=(g_{12}g_{25}g_{3}g_{31})(g_{48}g_{87}g_{74})( g_{66})\). In this form, one easily recognises Feynman's rules for constructing the diagrams [5], with the cycles corresponding to fermionic loops. It is useful to view building each \(\mathcal{C}\), one element at a time, by an ordered walk of \(2n\) steps, where at each step \(l\) the current element is \(e\) and the new element \(e^{\prime}\) is selected according to some rules, while the current weight \(\mathcal{C}\) is multiplied by \(g_{ee^{\prime}}\), as well as by an additional \(-1\) when the cycle is closed. An expression like Eq. (2) is then evaluated as a sum over all such walks. The central observation [43] is that, when different walks are executed in parallel, there will be many for which the step \(l\) is identical. Thus, before step \(l\) the weights of all such walks constructed up to this point can be combined, and the multiplication by \(g_{ee^{\prime}}\) applied to the sum. This suggests linking all walks in a graph, such as that in Fig. 1c, where the result of the summation before each step is stored in the nodes and the steps are the edges. An optimal structure of the graph minimises the number of times the multiplication by \(g_{ee^{\prime}}\) needs to be performed, and finding it is the task of dynamic programming. In the case of the determinant, the total number of edges can be made only polynomial in \(n\). A unique element \(e\) must appear in \(\mathcal{C}\) only once, which in general makes step \(l\) dependent on all the steps before it. However, it was demonstrated in Ref. [43] that all terms with repeated elements will cancel out due to the sign structure, provided the lowest element in each cycle within \(\mathcal{C}\), called the cycle head \(h\), is present in \(\mathcal{C}\) only once. Then, for each \(\mathcal{C}\) with a repeated element, there will be exactly one such \(\mathcal{C}^{\prime}\) that \(\operatorname{weight}\mathcal{C}^{\prime}=\operatorname{weight}\mathcal{C}\), but the number of its cycles differs by one, i.e. \(\operatorname{sign}\mathcal{C}^{\prime}=-\operatorname{sign}\mathcal{C}\). This is straightforward to ensure if, at each step \(l\), the head of the current cycle \(h\) is stored alongside the current element \(e\), and the next step is either to any other element \(e^{\prime}>h\) within the cycle, or starts a new cycle with \(h^{\prime}>h\) and \(e^{\prime}=h^{\prime}\). Therefore, each unique node must carry the three numbers \([l,h,e]\), \(l=0,\ldots 2n\); \(h,e=1,\ldots 2n+1\). 
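To make the walk construction concrete, here is a minimal Python sketch (not code from the paper) of this division-free determinant evaluation for a generic \(n\times n\) matrix; in the text the matrix is the \(2n\times 2n\) propagator matrix \(g_{\alpha\beta}\).

```python
import numpy as np

def det_clow(g):
    """Division-free determinant via ordered walks over cycle covers.

    States are labelled by (h, e): h is the head (lowest element) of the currently
    open cycle, e the current element; the level l counts edges used so far.
    Terms with repeated elements cancel in sign pairs, so they are not excluded."""
    n = g.shape[0]
    dp = {(h, h): 1.0 for h in range(n)}                   # l = 0: open a cycle at head h
    for _ in range(n - 1):                                 # one edge added per level
        nxt = {}
        for (h, e), w in dp.items():
            for e2 in range(h + 1, n):                     # extend the open cycle (repeats allowed)
                nxt[(h, e2)] = nxt.get((h, e2), 0.0) + w * g[e, e2]
            for h2 in range(h + 1, n):                     # close it (edge e -> h, factor -1), start a new cycle at h2
                nxt[(h2, h2)] = nxt.get((h2, h2), 0.0) - w * g[e, h]
        dp = nxt
    total = sum(-w * g[e, h] for (h, e), w in dp.items())  # close the last cycle
    return (-1) ** n * total                               # overall sign (-1)^(n+m)

A = np.random.default_rng(0).normal(size=(5, 5))
print(det_clow(A), np.linalg.det(A))                       # the two values agree
```

There are \(\mathcal{O}(n^{2})\) states per level, each with \(\mathcal{O}(n)\) outgoing edges, consistent with the \(\mathcal{O}(n^{4})\) operation count quoted next.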
The resulting graph, computing the determinant in \(\mathcal{O}(n^{4})\) floating-point operations, is illustrated in Fig. 1c. ### Approach 1: modification of the determinant Since there is a one-to-one correspondence between a particular path in the graph and the diagram it generates, the task of omitting the disconnected diagrams from the determinant can be formulated as that of identifying the corresponding paths and eliminating them selectively. Preserving all other paths is in principle accomplished by duplicating certain nodes along the unwanted paths and re-routing the paths to be kept through the copies, as in the example in Fig. 1d. This suggests that the information \([l,h,e]\), which uniquely identifies the nodes in the determinant, is incomplete for a diagrammatic series obeying more general rules, and the node label must be extended by some additional record \(\mathcal{R}\). If what constitutes \(\mathcal{R}\) is identified, the right graph can be constructed from the start by the same one-step-at-a-time algorithm, only now the two nodes \([l_{1},h_{1},e_{1}]\otimes\mathcal{R}_{1}\) and \([l_{2},h_{2},e_{2}]\otimes\mathcal{R}_{2}\) are considered identical, and are merged, only if \(\mathcal{R}_{1}=\mathcal{R}_{2}\), in addition to \(l_{1}=l_{2}\), \(h_{1}=h_{2}\), \(e_{1}=e_{2}\). In principle, the information in \(\mathcal{R}\) should be kept minimal to prevent spawning redundant nodes, but a sub-optimal graph can always be pruned in the end, without changing its value, by merging all nodes with equal \([l,h,e]\) that connect to the same nodes at the next level. A disconnected diagram is produced when not all of its cycles (fermionic loops) end up linked by the interaction lines. Thus, an obvious choice for \(\mathcal{R}\) is the list of vertices visited until the current step and grouped together according to their cycles, with the groups merged at each step if the corresponding cycles become linked by an interaction. Denoting each group by \(\{\ldots\}\) and listing the current unfinished group last, the highlighted path in Fig. 1c would become \([0,1,1]\otimes\{1\}\rightarrow[1,2,2]\otimes\{1\}\{2\}\rightarrow[2,3,3] \otimes\{2\}\{13\}\rightarrow[3,4,4]\otimes\{1\,3\}\{2\,4\}\rightarrow\) result, and it is now obvious that it produces a disconnected diagram because the two groups in \(\mathcal{R}=\{1\,3\}\{2\,4\}\) cannot be linked. Note that, for this choice of \(\mathcal{R}\), the cancellation between terms with repeated elements, relied on in the calculation of the determinant, is in general between a connected and disconnected term. Thus it is generally necessary to also prohibit sequences \(\mathcal{C}\) with repeated elements. The cancellation can still be usefully employed in certain cases, as explained below. For the \(SU(N)\) Hubbard interaction in the form (1), where the same-spin coupling is excluded, the sum over different combinations of spin indices implies that each diagram comes with the spin- and topology-dependent multiplicity factor \(M=\sum_{\sigma_{1},\ldots,\sigma_{n}}\prod_{\text{interactions}}(1-\delta_{ \sigma_{i},\sigma_{j}})/2\), where \(m\) is the number of loops and each interaction in the product connects a loop with spin \(\sigma_{i}\) to that with spin \(\sigma_{j}\), as in the example in Fig. 1b. A strength of our approach is that an arbitrary factor can be accounted for by merely (i) grouping the diagrams with the same \(M\) together and (ii) multiplication of the value of each node at the penultimate level \(l=2n-1\) by \(M\). 
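Reading the factor of 1/2 as attached to each interaction line, the multiplicity \(M\) can be evaluated by brute force over one spin label per fermionic loop; a small sketch (the diagram used in the last line is an arbitrary illustration, not the one of Fig. 1b).

```python
from itertools import product

def multiplicity(n_loops, loop_pairs, N):
    """M = sum over loop spins of prod over interaction lines of (1 - delta_{s_i,s_j})/2,
    where loop_pairs lists, for each interaction line, the two loops it connects."""
    M = 0.0
    for spins in product(range(N), repeat=n_loops):
        w = 1.0
        for i, j in loop_pairs:
            w *= 0.0 if spins[i] == spins[j] else 0.5      # same-spin coupling excluded
        M += w
    return M

# e.g. two loops joined by two interaction lines: M = N(N-1)/4, i.e. 7.5 for N = 6
print(multiplicity(2, [(0, 1), (0, 1)], 6))
```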
To this end, we also store in \(\mathcal{R}\) a matrix of connections between the cycles, which does not need to be optimal, and prune the final graph to minimise its size. Despite the combinatorial structure of \(\mathcal{R}\), this algorithm is already efficient at diagram orders typically accessible in calculations. Indeed, since the Monte Carlo variance of integration over the vertex positions and times scales exponentially with diagram order \(n\)[11], in correlated regimes only contributions from \(n\lesssim 10\) can typically be evaluated with \(<10\%\) statistical error required for precision reconstruction of observables [44]. Fig. 2 shows the actual number of floating point operations required to sum all connected diagrams of order \(n\), with this approach labelled there as CoS-1.
Figure 2: Number of floating-point operations (FLOP) required to evaluate the sum of the integrands of all Feynman diagrams of order \(n\): **(a)** for connected diagrams of the \(SU(2)\) Hubbard model summed by the algorithm of Sec. II.2 (CoS-1) and that of Sec. II.3 (CoS-2), for which the curve closely follows \(\approx n^{3}3^{n}/8\); the reference dotted line \((2n)^{3}2^{n}+3^{n}\) indicates the theoretical scaling of the CDet algorithm of Ref. [10]; **(b)** for connected diagrams in the \(SU(N)\) case, with the curve for CoS-2 following \(\approx n^{3}4^{n}/7\). Shown as CoS-GW is the computational cost of summing the skeleton (bold-line) series in terms of the renormalised Green’s function \(G\) and screened interaction \(W\).
For the \(SU(2)\) Hubbard model, where each diagram has multiplicity \(M=1\) and an efficient algorithm (CDet [10]) exists, CoS-1 already exhibits competitive performance. In the \(SU(N)\) case, the computational cost of CoS-1 is independent of \(N\) (for \(N>4\)) and appears exponential for orders \(n\lesssim 6\), although it is expected to eventually rise combinatorially. Nonetheless, at large \(n\), CoS-1 is superseded by an approach of exponential complexity described below.
### Approach 2: constructing connected diagrams from the start
There is much freedom in how the graph summing a particular series is designed, and the following general principles can aid its efficiency: (i) allowing unwanted or unphysical sequences \(\mathcal{C}\) might be useful if they cancel in the final sum, and (ii) walks can traverse the cycle covers \(\mathcal{C}\) and be grouped in arbitrary order, provided all the required sequences are generated in the end. Principle (i) was key for computing the determinant, but it has another use here: we can formally allow on-site interactions between same-spin fermions in the Hamiltonian (1) since the resulting diagrams cancel. Instead of the topology-dependent factor \(M\), the diagrammatic rules [5] for fully spin-symmetric interactions prescribe that each fermionic loop is multiplied by the number of spins, implying a multiplication of each node that closes a cycle merely by \(N\). Although having to construct diagrams that cancel is a hindrance at lower orders, the simpler diagrammatic rules allow for a more efficient scaling at \(n\gtrsim 5-6\). Our recipe for organising the walks that constitute the graph has so far been borrowed from the determinant, forcing us to keep track in \(\mathcal{R}\) of how different cycles are connected. This is not necessary if we reorganise the walks to generate only connected diagrams from the start. Since for generic Hamiltonians we cannot rely on the cancellation of terms with repeated elements, we at
least must keep track of the elements visited up to the current step \(l\), \(\mathcal{R}=\{e_{1},e_{2},\ldots e_{l}\}\), and ban adding \(e\) to \(\mathcal{C}\) if \(e\in\mathcal{R}\). Demoting the role of \(h\) in the node label \([l,h,e]\otimes\{e_{1},e_{2},\ldots e_{l-1},e\}\) to being merely the first element in the current cycle, we can generate only connected diagrams if each new cycle starts with the element that is paired by an interaction to one of the already visited ones \(e_{i}\in\{e_{1},e_{2},\ldots e_{l-1},e\}\), e.g. the smallest in \(\mathcal{R}\) that is not already paired, for uniqueness. It is easy to see that the number of floating point operations in this graph is only exponential, \(\mathcal{O}(n^{3})4^{n}\), and that the information about visited elements carried in \(\mathcal{R}\) is minimal for this order of traversing the sequences \(\mathcal{C}\), i.e. the graph cannot be pruned any further. The computational cost of this algorithm, labelled CoS-2, is shown for the \(SU(N)\) case in Fig. 2b. In our calculations, we employ the CoS-1 approach for \(n<5\) and CoS-2 for \(n\geq 5\). In systems where there is no non-trivial factor associated with each fermionic loop, as, e.g., in the \(SU(2)\) Hubbard model, or for \(N=1\), cancellations between cycle covers with repeated elements can still be utilised to reduce the cost further to \(\mathcal{O}(n^{3})3^{n}\). To this end, \(\mathcal{R}\) only needs to store the list of interactions that a visited element belongs to, and whether only one vertex of the interaction or both have been visited, i.e. 3 possibilities for each interaction. Since there is no record of which of the two vertices of an interaction has been visited, both options for the element that starts a new cycle need to be allowed, with the cycle cover that ends up repeating the vertex cancelling out, as in the case of the determinant. The complexity of this algorithm is plotted for the \(SU(2)\) case in Fig. 2a. Finally, sums of skeleton (bold-line) diagrams in arbitrary channels can be straightforwardly generated in our approach. For instance, the computational cost of producing an expansion in terms of the full (interacting) Green's function \(G\) and screened interaction \(W\)[45] by a simple extension of the CoS-2 algorithm is plotted in Fig. 2b as CoS-GW. The challenge of restricting the series to irreducible diagrams in both channels is met here by supplementing the nodes in the CoS-2 graph with a record \(\mathcal{R}\) that keeps track of connectivity when a propagator or interaction line is cut, similarly to the CoS-1 approach of Sec. II.2. Curiously, there is no notable cost increase relative to the CoS-1 for connected diagrams. The versatility of the CoS platform could enable more efficient algorithms for skeleton series in the future. ### Vector variant and quantum speed-up The CoS algorithm can be cast in a vector form, in which the graph remains of a polynomial in \(n\) size with the nodes uniquely identified by \([l,h,e]\) (as in Fig. 1c), but operates on a _vector_ of values \(|\psi\rangle=\sum_{\mathcal{R}}v_{\mathcal{R}}|\mathcal{R}\rangle\), with the floating-point numbers \(v_{\mathcal{R}}\) used to construct its value and the vectors \(|\mathcal{R}\rangle\) of an orthonormal set \(\{|\mathcal{R}\rangle\}\) being responsible for filtering valid diagram configurations. For the algorithm of Sec. 
II.3 (CoS-2), \(|\mathcal{R}\rangle\) is a direct product of \(2n\) orthonormal states \(|0\rangle\) or \(|1\rangle\), indicating whether an element \(e\) has been visited (\(|1\rangle_{e}\)) or not (\(|0\rangle_{e}\)), so that \(\mathcal{R}=\{e_{1},e_{2},\ldots e_{l}\}\) corresponds to \(|\mathcal{R}\rangle=|1\rangle_{e_{1}}|1\rangle_{e_{2}}\ldots|1\rangle_{e_{l}}|0 \rangle_{e_{l+1}}\ldots|0\rangle_{e_{2n}}\). The subspace of \(\{|\mathcal{R}\rangle\}\) to be passed on by each edge is selected using the projection operators \(\hat{P}^{0}_{e}=|0\rangle_{e}\langle 0|_{e}\), \(\hat{P}^{1}_{e}=|1\rangle_{e}\langle 1|_{e}\), \(\hat{P}^{0}_{e}=|1\rangle_{e}\langle 0|_{e}\) and \(\hat{P}^{1}_{e}=|0\rangle_{e}\langle 1|_{e}\). Specifically, an edge adding a new element within a cycle, \([l,h,e_{1}]\rightarrow[l+1,h,e_{2}]\), must project out all contributions in which the element \(e_{2}\) has already been visited before multiplying the result by \(g_{e_{1}e_{2}}\) and adding it to the next node, \[[l,h,e_{1}]\rightarrow[l+1,h,e_{2}]:\\ |\psi_{2}\rangle:=|\psi_{2}\rangle+g_{e_{1}e_{2}}\hat{P}^{0}_{e _{2}}|\psi_{1}\rangle, \tag{3}\] where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are the vectors stored in the nodes \([l,h,e_{1}]\) and \([l+1,h,e_{2}]\), respectively. Edges that start a new cycle with an element \(h_{2}\) act on the subspace in which \(h_{2}\) is paired by an interaction to the lowest unpaired visited vertex in \(|\mathcal{R}\rangle\), \[[l,h_{1},e_{1}]\rightarrow[l+1,h_{2},h_{2}]:\\ |\psi_{2}\rangle:=|\psi_{2}\rangle-g_{e_{1}h_{1}}\hat{P}^{0}_{ h_{2}}\hat{P}^{1}_{h_{2}}\prod_{e<h_{2}}\big{[}\hat{P}^{1}_{\bar{e}}\hat{P}^{1}_{e }+\hat{P}^{0}_{e}\big{]}|\psi_{1}\rangle, \tag{4}\] where \(|\psi_{1}\rangle\) and \(|\psi_{2}\rangle\) are the vectors stored in the nodes \([l,h_{1},e_{1}]\) and \([l+1,h_{2},h_{2}]\), respectively, and \(\bar{e}\) is the vertex paired to \(e\) by an interaction. Following this recipe, at the result node we obtain a pure state \(|\psi_{\text{result}}\rangle=v|1\rangle_{1}|1\rangle_{2}\ldots|1\rangle_{2n}\) with \(v\) being the value of the graph. On a classical computer, the elementary base vectors have to be represented by two components, \(|0\rangle=(1,1)^{T}/\sqrt{2}\), \(|1\rangle=(1,-1)^{T}/\sqrt{2}\), implying that \(|\psi\rangle\) is a \(2^{2n}\)-component vector, and the edges (3), (4) take \(\mathcal{O}(4^{n})\) floating-point operations to evaluate. Given that the number of edges scales as \(\mathcal{O}(n^{4})\), the computational cost of this approach, \(\mathcal{O}(n^{4}4^{n})\), is a factor \(\propto n\) higher than that of the CoS-2 algorithm of Sec. II.3. Nonetheless, an efficient processing of vector operations, e.g. by GPUs, could make the vector implementation faster in practice. The ability to efficiently operate with vector superpositions makes the quantum computer a promising platform for this approach. To this end, the graph defines a quantum circuit processing the state \(|\psi\rangle=\sum_{\mathcal{R}}|v_{\mathcal{R}}\rangle|\mathcal{R}\rangle\), where \(|v_{\mathcal{R}}\rangle\) encodes the value and \(|\mathcal{R}\rangle\) is represented by \(2n\) qubits. Projections can be generally performed by unitary quantum gates, while the multiplication by the matrix elements of \(g_{\alpha\beta}\) could be implemented, e.g., by quantum floating-point arithmetic [46; 47]. 
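Returning to the classical representation of the record vector, a small numpy sketch of the two-component encoding and of single-element operators; the helper and operator names (e.g. MARK for \(|1\rangle_{e}\langle 0|_{e}\)) are chosen here for illustration.

```python
import numpy as np
from functools import reduce

ket0 = np.array([1.0, 1.0]) / np.sqrt(2)       # element not yet visited
ket1 = np.array([1.0, -1.0]) / np.sqrt(2)      # element visited

P0   = np.outer(ket0, ket0)                    # projector onto "not visited"
P1   = np.outer(ket1, ket1)                    # projector onto "visited"
MARK = np.outer(ket1, ket0)                    # marks an element as visited

def apply_on_factor(op, psi, site, n_sites):
    """Apply a 2x2 operator to tensor factor `site` of the 2**n_sites-component vector psi."""
    psi = psi.reshape((2,) * n_sites)
    psi = np.moveaxis(np.tensordot(op, psi, axes=([1], [site])), 0, site)
    return psi.reshape(-1)

n_el = 4                                       # 2n elements for a small example
psi = reduce(np.kron, [ket0] * n_el)           # nothing visited yet
psi = apply_on_factor(MARK, psi, 0, n_el)      # an edge visits element 0 ...
blocked = apply_on_factor(P0, psi, 0, n_el)    # ... so P^0_0 now projects it out
print(np.linalg.norm(blocked))                 # ~0: revisiting element 0 is filtered
```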
Provided a practical quantum implementation incurs at most polynomial computational overheads, the \(\mathcal{O}(n^{4})\) graph could be evaluated in a polynomial number of operations on a quantum processor. The result could then be used in the Monte Carlo integration over vertex coordinates on a classical processor, similarly to the quantum-classical approach [48]. An interesting possibility to explore is making the Metropolis sampling quantum as well [49; 50], e.g., through a mapping of the graph value to the quantum eigenvalue/eigenvector problem [51], which could enable a further speed-up. ## III Results We compute the average particle number per lattice site \(\langle n\rangle\) as a function of the chemical potential \(\mu\), expressed in our approach as an expansion in the powers of the Hubbard coupling \(U\), \(\langle n\rangle(T,\mu,U)=\sum_{m=0}^{\infty}a_{m}(T,\mu)U^{m}\), in the thermodynamic limit. The series coefficients \(a_{m}\) are obtained by the Monte Carlo integration of all connected diagrams of order \(m\) over the positions of the \(2m\) vertices in space-imaginary-time. Thus, \(a_{m}\) are known numerically exactly with statistical error bars, while the only source of systematic error is the truncation of the series at order \(n\). Although the series turns out to be divergent in all regimes of interest, being able to evaluate \(a_{m}\) up to \(n=8\) with \(<5\%\) statistical error (and fractions of percent at lower orders) enables an accurate reconstruction of the result with controlled precision. The recent study by Pasqualeti _et al._[38] has revealed a perfect agreement between the DQMC calculations and experimental measurements of the EoS of the \(2D\)\(SU(N)\) Hubbard model down to \(T/t=0.3\) and a coupling value up to \(U/t=2.3\) for \(N=6\). Fig. 3a shows the partial sum for the density at these parameters and \(\mu/t=0.575\) as a function of the truncation order \(n\). The series is seen to wildly diverge, but its analytic structure is rather simple, with the dominating singularity at \(U_{c}/t\approx-0.9(1)\), which allows us to reconstruct the result following the approach developed in Ref. [44]. Specifically, we employ the Dlog-Pade [52] technique, taking into account the statistical error bars of \(a_{m}\) and making sure that the systematic error of the evaluation--detected by the variation of the answer with free parameters of Dlog-Pade--is negligible. The result for the series in Fig. 3 is \(\langle n\rangle=1.335(5)\). As a benchmark, we proceed to obtain the \(\langle n\rangle(\mu)\) curve at \(T/t=0.3\), \(U/t=2.3\), plotted in Fig. 4, and find it to be in perfect agreement [67] with that computed and measured in Ref. [38]. The singularity at a real \(U\) is indicative of a phase transition exhibited by the attractive \(SU(N)\) Hubbard model, which is likely to a superfluid state. When the series for the relevant susceptibility is considered, the divergence at \(U_{c}\) is an accurate tool for characterising the critical point [53]. Leaving the calculation of susceptibilities for a more focused study, we plot in Fig. 3b a crude estimate of \(U_{c}\) from the divergence of density at \(T/t=0.3\). [68] Ibarra-Garcia-Padilla _et al._[23] demonstrate that the sign problem in DQMC rapidly intensifies with lowering \(T\) and increasing \(U\) and \(N\) at the considered densities, as long as the system remains compressible. 
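For reference, a generic sketch of Dlog-Padé resummation from truncated series coefficients; the actual analysis additionally propagates the statistical errors of \(a_{m}\) and varies the free parameters of the approximant, which this sketch omits, and the trapezoidal integration is a choice made here.

```python
import numpy as np

def dlog_pade(a, L, M):
    """Dlog-Pade resummation of f(U) = sum_m a[m] U^m (requires len(a) >= L+M+2).

    Returns a callable approximating f(U) and the poles of the [L/M] Pade approximant
    to g = d/dU log f, whose location estimates singularities such as U_c."""
    a = np.asarray(a, dtype=float)
    K = L + M
    b = np.zeros(K + 1)                       # series of g = f'/f
    for k in range(K + 1):
        b[k] = ((k + 1) * a[k + 1] - np.dot(b[:k], a[k:0:-1])) / a[0]
    # Pade [L/M] of g: denominator coefficients q_1..q_M with q_0 = 1
    C = np.array([[b[k - j] if k - j >= 0 else 0.0 for j in range(1, M + 1)]
                  for k in range(L + 1, L + M + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(C, -b[L + 1:L + M + 1])))
    p = np.array([sum(q[j] * b[k - j] for j in range(min(k, M) + 1)) for k in range(L + 1)])
    g_pade = lambda u: np.polyval(p[::-1], u) / np.polyval(q[::-1], u)
    poles = np.roots(q[::-1])
    def f_resummed(U, npts=2001):
        # f(U) ~ a_0 * exp( int_0^U g_pade(u) du ), trapezoidal integration
        u = np.linspace(0.0, U, npts)
        gu = g_pade(u)
        return a[0] * np.exp(np.sum(0.5 * (gu[1:] + gu[:-1]) * np.diff(u)))
    return f_resummed, poles

# usage (hypothetical coefficients): f_res, poles = dlog_pade(a_coeffs, L=3, M=4)
```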
Figure 3: **(a)** Partial sum of the diagrammatic series for density \(\langle n\rangle\) as a function of the truncation order \(n\) for \(N=6\) and \(T/t=0.3\), \(\mu/t=0.575\), \(U/t=2.3\). The horizontal line is the result of a reconstruction of the value from the series, \(\langle n\rangle=1.335(5)\). **(b)** The location of the singularity \(U_{c}\) responsible for the series divergence at \(N=6\) and \(T/t=0.3\) as a function of the chemical potential \(\mu\) (corresponding to \(\langle n\rangle(\mu)\sim 1\)–\(2.5\) in this range and at \(U=U_{c}\)).
Figure 4: Equation of state for the \(2D\)\(SU(N)\) Hubbard model with \(N=6\) for \(T/t=0.3,0.15\) and \(U/t=2.3,4,8\).
To explore the more challenging regime, the EoS was obtained by DiagMC at a lower temperature \(T/t=0.15\) (see Figure 4), for which the \(\langle n\rangle(\mu)\) curve is below that for \(T/t=0.3\), indicating that the system is in the metallic state at \(U/t=2.3\) and in this range of \(\mu\)[54]. We further evaluate the series at larger values of \(U\) up to \(U=8\), where a faint shoulder around \(\langle n\rangle=1\) is seen to emerge. This is consistent with the development of a (pseudo-)gapped state. At these couplings, the systematic error of resummation beyond the convergence radius becomes comparable to the propagated statistical error, and the combined error bar (a sum of the two errors) shown in Fig. 4 grows substantially. Nonetheless, the analytic structure of the series appears free from singularities near a positive real \(U\), such as those in the \(SU(2)\) Hubbard model at similar parameters [55]. There, the growth of the antiferromagnetic (AFM) correlation length beyond \(\sim 10\) lattice sites was shown to be responsible for a near-critical behaviour of the diagrammatic expansions at these temperatures and \(\langle n\rangle=1\) already at \(U/t\sim 3\). Also in contrast to the \(SU(3)\) case, where an insulating AFM ground state at \(\langle n\rangle=1\) emerges already at \(U/t\approx 5.5\)[29] and strong AFM correlations (with a transformation upon heating) are observed up to \(T/t\sim 0.5\) at \(U/t=8\)[24], for \(N=6\) AFM correlations appear weak down to \(T/t=0.15\) at this coupling. Thus, there is no fundamental difficulty in reducing the error bars and accessing larger \(U\) values at the expense of a polynomially-longer calculation.
## IV Discussion
The introduced approach represents a versatile platform for evaluating Feynman's diagrammatic series: It is naturally applicable to fermionic as well as bosonic systems, to expansions in bare coupling and renormalised or skeleton series [5], to expansions derived from the homotopic action [13], in and out of equilibrium with the extension by the Keldysh formalism [56; 57], and may find use in other advanced approaches based on the diagrammatic theory [58; 59; 60; 61]. Being intrinsically division-free, the technique is compatible with diagrammatic methods based on algorithmic integration over Matsubara frequency [62] or imaginary time [63], in which dynamic correlation functions are computed directly without the need for numerical analytic continuation, and an efficient way of summing the diagrams would be crucial for accessing strongly correlated regimes. The vector formulation of the algorithm is a promising foundation for realising DiagMC on a quantum computer by mapping the polynomial-size graph to a quantum circuit, with the Quantum DiagMC offering an exponential speed-up over the classical counterpart.
On a classical computer, the exponential scaling of the number of operations needed to evaluate all terms of a given order places [11] the CoS approach in the class of numerical methods with polynomial computational complexity. The rigid graph structure lends itself to efficient hardware acceleration and parallelisation, while the partial summation and subtraction at intermediate levels of the graph reduces the bit complexity, making the algorithm robust against rounding errors. The example application to the EoS of the \(2D\)\(SU(6)\) Hubbard model provides controlled benchmarks for ongoing theoretical and experimental studies, aimed at accessing lower temperatures and novel quantum many-body states. As a byproduct of a diagrammatic calculation, the analytic structure of the series offers additional insights in the physics of the system. The results suggest a phase transition in the attractive \(SU(N)\) Hubbard model at a coupling strength as low as \(U_{c}/t\sim-1\) up to temperatures \(T/t\lesssim 0.5\), and absence of strong AFM correlations in the repulsive case at the considered temperatures and interaction strengths, at which the \(SU(2)\)[55; 64] and \(SU(3)\)[24; 29] Hubbard models are already in the (quasi-)AFM state. In the \(SU(2)\) case, the formulation in the thermodynamic limit enabled DiagMC to attain controlled accuracy in the regime where correlations are intrinsically long-range and are difficult to capture reliably by finite-size methods even in the absence of the sign problem [55; 64]. Such regimes of the \(SU(N)\) model is where the developed technique can prove particularly useful. The possibility of a direct calculation of entropy in the DiagMC approach [54] could be instrumental for thermometry in experiments with ultracold atoms that are currently testing the limits of state-of-the-art theoretical methods. ###### Acknowledgements. The author is grateful to Kaden Hazzard for illuminating and stimulating discussions, to Eduardo Ibarra-Garcia-Padilla, Sohail Dasgupta, Kaden Hazzard, and Richard Scalettar for sharing their DQMC data, and to Sohail Dasgupta, Simon Folling, Kaden Hazzard, Eduardo Ibarra-Garcia-Padilla, Giulio Pasqualetti, and Richard Scalettar for a fruitful exchange of results and ideas. This work was supported by EPSRC through Grant No. EP/X01245X/1.
Feynman's diagrammatic series is a common language for a formally exact theoretical description of systems of infinitely many interacting quantum particles, as well as a foundation for precision computational techniques. Here we introduce a universal framework for the efficient summation of connected or skeleton Feynman diagrams. It is based on an explicit combinatorial construction of the sum of the integrands by dynamic programming, at a computational cost that is only exponential in the diagram order on a classical computer and potentially polynomial on a quantum computer. We illustrate the technique by an unbiased diagrammatic Monte Carlo calculation of the equation of state of the 2D SU(N) Hubbard model in an experimentally relevant regime.
2309.04208
Prediction of even and odd sunspot cycles
Here we study the prediction of even and odd numbered sunspot cycles separately, thereby taking into account the Hale cyclicity of solar magnetism. We first show that the temporal evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. We find that the parameters describing even sunspot cycles can be predicted quite accurately using the sunspot number 41 months prior to sunspot minimum as a precursor. We find that the parameters of the odd cycles can be best predicted with maximum geomagnetic aa index close to fall equinox within a 3-year window preceding the sunspot minimum. We use the found precursors to predict all previous sunspot cycles and evaluate the performance with a cross-validation methodology, which indicates that each past cycle is very accurately predicted. For the coming sunspot cycle 25 we predict an amplitude of 171 +/- 23 and the end of the cycle in September 2029 +/- 1.9 years. We are also able to make a rough prediction for cycle 26 based on the predicted cycle 25. While the uncertainty for the cycle amplitude is large we estimate that the cycle 26 will most likely be stronger than cycle 25. These results suggest an increasing trend in solar activity for the next decades.
Timo Asikainen, Jani Mantere
2023-09-08T08:39:07
http://arxiv.org/abs/2309.04208v1
# Prediction of even and odd sunspot cycles ###### Abstract Here we study the prediction of even and odd numbered sunspot cycles separately, thereby taking into account the Hale cyclicity of solar magnetism. We first show that the temporal evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. We find that the parameters describing even sunspot cycles can be predicted quite accurately using the sunspot number 41 months prior to sunspot minimum as a precursor. We find that the parameters of the odd cycles can be best predicted with maximum geomagnetic _aa_ index close to fall equinox within a 3-year window preceding the sunspot minimum. We use the found precursors to predict all previous sunspot cycles and evaluate the performance with a cross-validation methodology, which indicates that each past cycle is very accurately predicted. For the coming sunspot cycle 25 we predict an amplitude of \(171\pm 23\) and the end of the cycle in September 2029 \(\pm\) 1.9 years. We are also able to make a rough prediction for cycle 26 based on the predicted cycle 25. While the uncertainty for the cycle amplitude is large we estimate that the cycle 26 will most likely be stronger than cycle 25. These results suggest an increasing trend in solar activity for the next decades. ## 1 Introduction Prediction of the sunspot number has been an everlasting interest in the space science community since the discovery of the sunspot cycle by Schwabe (1844). Sunspot number is an indirect indicator of many different solar phenomena, e.g., total and spectral solar radiation (e.g. Krivova et al., 2011; Frohlich, 2012), coronal mass ejections (e.g. Richardson and Cane, 2012), solar flares and magnetic active regions (e.g. Toriumi et al., 2017). Its cyclic variation can even be used as a pacemaker to time different aspects of solar activity, solar wind and resulting geomagnetic variations (Chapman et al., 2021; Leamon et al., 2022). Therefore, there is considerable practical interest in predicting the evolution of future sunspot cycle(s). This is especially true in today's technological society where space hazards pose a significant threat, e.g., to satellites, communications and electric grids on ground (e.g. Lanzerotti, 2001). Another interest for predicting sunspots arises from the relatively recently recognized influences of variable solar radiation and solar wind activity on Earth's climate system (Gray et al., 2010; Ward et al., 2021; Salminen et al., 2020). Over the period of about last 100 years a vast array of different methods ranging from statistical methods to intensive physical simulations have been developed for predicting sunspots. As an unbiased introduction to all of them would be a futile effort here, the interested reader is referred to several excellent reviews on the subject by Hathaway (2009), Pesnell (2012) and Petrovay (2020). According to the classic solar dynamo theory poloidal solar magnetic field in the solar minimum gets stretched to the toroidal magnetic field of the next cycle, which then produces sunspots and magnetic active regions that ultimately make up the poloidal field in the next solar minimum (Parker, 1955; Babcock, 1961; Leighton, 1969; Charbonneau, 2020). Physically motivated solar cycle predictions are based on numerical modeling of the solar dynamo process and the transport of magnetic flux on the solar surface (Charbonneau, 2020; Nandy, 2021; Karak, 2023). 
However, some of the most successful, yet much simpler prediction methods are based on precursors that serve as indicators for the strength of the coming solar cycle. The so called polar field precursor methods have found a good correlation between the magnetic field observed at the solar polar region up to a few years before the sunspot minimum and the amplitude of the next sunspot cycle (Schatten et al., 1978; Petrovay, 2020; Kumar et al., 2021, 2022). As the polar field reflects the strength of the poloidal phase of the solar magnetic field the precursor methods are deeply rooted in the core idea of the dynamo theory. Because the polar field has been systematically measured only since 1970s some longer running proxy measures for the polar field have also been used. The most successful polar field proxies are based on geomagnetic activity measures close to the solar minimum (e.g. Ohl, 1966; Du et al., 2012). It has been shown that certain geomagnetic activity indices, e.g., the _aa_ index correlate quite well with the open solar magnetic flux carried by the solar wind (e.g. Lockwood et al., 1999, 2014). Close to the solar minimum the geomagnetic activity is dominantly driven by high speed solar wind streams (e.g. Richardson and Cane, 2012), which emanate from coronal holes that eventually form the solar polar field in the declining phase of the cycle (Krieger et al., 1973; Bame et al., 1976). During these times the geomagnetic activity is most directly connected to the polar field and, thereby acts as a good proxy for the amplitude of the next sunspot cycle. In accordance with the solar dynamo theory the sunspot cycle prediction methods often consider only the predictability of the cycle based on the previous cycle. Most of these methods do not typically take into account the 22-year Hale cycle of solar magnetism, which is well known in the solar cycle (e.g. Gnevyshev and Ohl, 1948; Takalo and Mursula, 2018; Leamon et al., 2022) and geomagnetic phenomena (e.g. Chernosky, 1966; Takalo, 2021; Chapman et al., 2021). However, some recent studies have considered the even/odd cycle parity in the context of sunspot cycle prediction (e.g. Du, 2020; Kakad and Kakad, 2021; Penza et al., 2021; Du, 2022b; Nagovitsyn and Ivanov, 2023). Here we study the prediction of sunspot cycles accounting for the 22-year Hale cycle by considering the differences in the even and odd numbered sunspot cycles. In Section 2 we first present our data and then in Section 3 proceed to show that the time evolution and shape of all sunspot cycles are extremely well described by a simple parameterized mathematical expression. In Section 4 we discuss the mutual dependencies of the parameters describing the sunspot cycles and in Section 5 we show that the parameters can be predicted using precursors found partly from past sunspot number values and partly from geomagnetic activity, which is often used as a precursor proxy for solar polar magnetic field in the sunspot minimum. Most importantly, though, we find that even and odd sunspot cycles obey different statistics therefore implying Hale cyclicity in their predictability. Separately these statistical relations are stronger and more significant than those based on combin ing even and odd cycles together. Using these statistics we construct in Section 6 a new method to separately predict even and odd sunspot cycles and apply it to the coming solar cycle 25. 
We also find an interesting connection between consecutive odd-even sunspot cycle pairs, which allows us to make rough early predictions for cycle 26 as well. In Section 7 we discuss the results and give our conclusions. ## 2 Data In this work we use monthly values of the version 2 Sunspot Number (SSN) obtained from the SILSO World Data Center (1749-2023). At the time of writing this paper the SSN v2 series covers time from 1749 to October 2022 therefore covering full sunspot cycles 1-24 and about two years from the start of sunspot cycle 25. The monthly values of SSN were first smoothed with a 13-month window, where the first and last month are given a weight of 0.5 and all other months a weight of 1. Using this smoothed SSN curve we identified the times of sunspot minima and maxima. In addition to the sunspot data we use in this work the geomagnetic \(aa\) index. Recently Lockwood et al. (2018a,b) presented a homogenized version of the \(aa\) index, which inter-calibrates the observations of the different stations used to compose the \(aa\) index. They also correct the index for the secular drift in the location of the auroral oval (due to secular changes in Earth's magnetic field) in relation to the observing stations. The homogenized \(aa\) index was obtained from the supplementary data of Lockwood et al. (2018b), which offers the 3-hourly values for years 1868 to 2017. We first extended this series forward in time from Jan 2018 to Dec 2021 by calculating the monthly means of the homogenized \(aa\) index and by calibrating the raw \(aa\) index (obtained from ISGI: [http://isgi.unistra.fr/](http://isgi.unistra.fr/)) against the corresponding homogenized values. The calibration was found by fitting a regression line to the logarithmic monthly averaged homogenized \(aa\) (\(aa_{H}\)) and raw (\(aa\)) values using the data between 1980 and 2017 when the \(aa\) index is based on the latest pair of observatories (Hartland in England and Canberra in Australia). The best fit line was \[\log(aa_{H})=1.049(\pm 0.004)\times\log(aa)-0.257(\pm 0.011). \tag{1}\] The uncertainties of the fit parameters have been indicated in parentheses. Note that this scaling uses logarithmic values, since in logarithmic scale the residuals of the fit are closely homoscedastic (constant variance) while in linear scale they display large heteroscedasticity thereby compromising the basic assumptions of the least-squares fit. The 3-hourly raw \(aa\) values since Jan 2018 were then scaled with Eq. 1 and the resulting data was appended to the homogenized \(aa\) index time series. We also extended the \(aa\) index backward in time using the daily magnetic declination based \(Ak(D)\) indices recorded at the Helsinki geomagnetic observatory since 1844 (Nevanlinna, 2004) and obtained from [https://space.fmi.fi/MAGN/K-index/](https://space.fmi.fi/MAGN/K-index/). These values have previously been used successfully to extend the \(aa\) index time series backward from 1868 to 1844 (e.g. Lockwood et al., 2014). Unlike Lockwood et al. (2014), who used annual averages, we calibrated here the daily \(Ak(D)\) values against the simultaneous homogenized \(aa\) index values by Lockwood et al. (2018b) for the overlapping time period from 1.1.1868 to 31.12.1879, where the \(Ak(D)\) data is rather continuous. Starting from 1880 the \(Ak(D)\) data series has large data gaps. 
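The calibration of Eq. (1), and the analogous \(Ak(D)\) scaling given below, is an ordinary least-squares fit in log-log space; a minimal Python sketch with synthetic stand-in data (the real inputs would be the monthly raw and homogenized \(aa\) values over the overlap period).

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-ins for monthly means over the 1980-2017 overlap: the raw aa index and the
# homogenized aa index of Lockwood et al. (2018b)
aa_raw = rng.lognormal(mean=3.0, sigma=0.4, size=456)
aa_hom = np.exp(1.049 * np.log(aa_raw) - 0.257 + rng.normal(0.0, 0.05, aa_raw.size))

# least-squares fit in log-log space, where the residuals are close to homoscedastic
slope, intercept = np.polyfit(np.log(aa_raw), np.log(aa_hom), 1)

# apply the Eq. (1)-style scaling to raw values before appending them to the composite
aa_scaled = np.exp(intercept + slope * np.log(aa_raw))
```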
We found that the daily \(Ak(D)\) values can be scaled to the corresponding daily homogenized \(aa\) index values by the following equation \[\log(aa_{H})=1.199(\pm 0.012)\times\log(Ak(D)+5\text{ nT})-0.97(\pm 0.04). \tag{2}\] Also here the logarithmic scale ensures homoscedasticity of the fit residuals. The correlation between the scaled \(Ak(D)\) values and the homogenized \(aa\) index values is 0.84, indicating a rather reliable scaling. Figure 1 shows the annual averages of the homogenized \(aa\) index (blue), the scaled raw \(aa\) index (red, shown since year 1980) and the scaled Helsinki \(Ak(D)\) values (shown for years 1844-1879). The times when the different curves overlap indicate a very good correspondence between the different datasets. The extended \(aa\) index time series is formed by the scaled \(Ak(D)\) data from 1844-1867, by the homogenized \(aa\) index from 1868-2017 and by the scaled raw \(aa\) index since 2018. For the purposes of this study we will use monthly averages of the extended \(aa\) index composite.
Figure 1: Annual averages of homogenized \(aa\) index (blue), scaled raw \(aa\) index (red) (shown since year 1980) and scaled Helsinki \(Ak(D)\) values (shown for years 1844-1879).
## 3 Parameterization of the sunspot cycle
Sunspot cycles are often fitted with a parameterized curve (e.g. Stewart and Panofsky, 1938; Hathaway et al., 1994; Volobuev, 2009). Stewart and Panofsky (1938) showed that the sunspot cycles could be described roughly by curves of the form \(c(t-t_{0})^{a}e^{-b(t-t_{0})}\), where \(a,b,c\) and \(t_{0}\) are free parameters. Hathaway et al. (1994) used a slightly modified version \[f(t)=\frac{a(t-t_{0})^{3}}{\exp((t-t_{0})^{2}/b^{2})-c} \tag{3}\] and fitted this model curve to sunspot cycles 1-21. They also showed that many of the parameters correlated rather well with each other, thereby offering a possibility to reduce the effective number of parameters in the curve down to one free parameter. Many studies have since used similar parameterizations. However, to be useful in predicting the sunspot cycles the parameters of the future cycle should be predicted by some means. Several studies have found relatively good correlations between different precursors and the amplitude of the sunspot cycle or the maximum SSN during the cycle. The parameterizations described above (e.g., Eq. 3) may not be optimal in light of these correlations, because for those the amplitude (maximum) of the sunspot cycle depends on a combination of several parameters. Therefore, we formulated a new parameterization for the sunspot curve, where the parameters are perhaps better interpreted in terms of well known solar cycle properties (amplitude, rise time, asymmetry etc.). We use here an asymmetric Gaussian curve of the form \[f(t)=A\exp\left(-\frac{(t-B)^{2}}{(C\times g(t,B))^{2}}\right), \tag{4}\] where \(A\) is the sunspot cycle maximum, \(B\) is the time of the sunspot maximum measured from the sunspot minimum beginning the cycle (i.e., \(B\) is the cycle rise time), \(C\) is the time scale of the rising phase (cf. standard deviation of a Gaussian) and the function \(g(t,B)\) is defined as \[g(t,B)=\left\{\begin{array}{ll}1,&\mbox{if $t\leq B$,}\\ D,&\mbox{if $t>B$.}\end{array}\right. \tag{5}\] Therefore the parameter \(D\) appearing in \(g(t,B)\) describes the asymmetry of the time scales in the declining and rising phases. The more positive \(D\) is, the longer the declining phase is compared to the rising phase. We then fitted the Eq.
4 to the 13-month smoothed SSN of each sunspot cycle separately. Each cycle was defined from the time of the sunspot minimum to the next minimum. The 4-parameter fit was done with the non-linear Levenberg-Marquardt optimization implemented in Matlab software. Although the consecutive sunspot cycles are known to be overlapping (so that the cycle already starts before the minimum SSN time) the fitting of our parametric model is largely dependent on the whole cycle and results are not strongly sensitive to the data around sunspot minima. This was tested either by leaving out up to 1 year of data from the beginning of the cycles or by extending the previous fitted cycle and subtracting it from the next cycle. In either case the fitted parameter values remained practically the same. Figure 2 shows the time series of the 13-month smoothed SSN in black, the 4-parameter model fits for cycles 1-24 in red and the Hathaway et al. (1994) fit of Eq. 3 in blue. One can see that the fitted curves describe all the individual cycles reasonably well, although some of the detailed structure in the SSN cycles cannot be described by a smooth asymmetric Gaussian. Such structures are for example the very sharp-peaked cycles like cycles 1-4 and 8-11 and the often seen double peaks (see, e.g., Karak et al. (2018)), which are quite prominent, e.g., in cycles 22-24. However, the rising and declining phases of each cycle are well captured by the fit and the cycle amplitudes are quite close to the real cycle amplitudes. The average \(R^{2}\)-value of the 4-parameter fit over all cycles is 0.973, indicating that over 97% of the variability in the SSN cycles is captured by the model curves. The average root-mean-squared error for all the cycles is 8.7 and there is no statistically significant difference in this between even and odd numbered sunspot cycles. We also note that for most cycles the asymmetric Gaussian function used here is very close to the function (Eq. 3) used by Hathaway et al. (1994). However, in some cycles (3, 4, 8, 10) there is a clear difference with the asymmetric Gaussian providing a somewhat better fit. ## 4 Relationships between cycle parameters Let us next consider the relationships between the fitted values of the four parameters over all cycles. Figure 3 shows as scatter plot the relationship between parameters \(C\) and \(D^{-1/2}\). In the figure the odd numbered cycles have been indicated with blue dots and even numbered cycles with red squares. Figure 2: Time series of 13-month smoothed sunspot number (black), parameterized fits for each sunspot cycle (red curves) and the fit by Hathaway et al. (1994) for comparison (blue curves). The sunspot cycle numbers are indicated on the plot as well. The top panel shows cycles 1-8, middle panel cycles 9-16 and bottom panel cycles 17-24. One can see that all the cycles depict quite a strong correlation between \(C\) and \(D^{-1/2}\) and that there is no significant difference between the odd and even cycles. The correlation coefficient between \(C\) and \(D^{-1/2}\) over all the cycles is 0.94 and it is highly significant (p-value is \(10^{-11}\)). This relationship indicates that cycles with a steep rising phase (i.e., small \(C\)) have a relatively more gradual (i.e., large \(D\) and small \(D^{-1/2}\)) declining phase and vice versa. 
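The fit of Eqs. (4)-(5) can be reproduced with a standard Levenberg-Marquardt least-squares routine; a minimal Python sketch (the paper uses Matlab), with synthetic stand-in data in place of the smoothed SILSO series.

```python
import numpy as np
from scipy.optimize import curve_fit

def cycle_model(t, A, B, C, D):
    """Asymmetric Gaussian of Eqs. (4)-(5); t in years from the cycle's starting minimum."""
    width = np.where(t <= B, C, C * D)
    return A * np.exp(-((t - B) ** 2) / width ** 2)

# synthetic stand-in for one cycle of 13-month smoothed SSN
t = np.arange(0.0, 10.5, 1.0 / 12.0)
rng = np.random.default_rng(1)
ssn = cycle_model(t, 180.0, 4.0, 2.2, 1.5) + rng.normal(0.0, 5.0, t.size)

p0 = [ssn.max(), t[ssn.argmax()], 2.0, 1.5]            # initial guess
popt, pcov = curve_fit(cycle_model, t, ssn, p0=p0)     # Levenberg-Marquardt least squares
A_fit, B_fit, C_fit, D_fit = popt
```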
The correlation in Figure 3 is therefore a manifestation of the well known property of sunspot cycles that small amplitude cycles tend to be more symmetric about the sunspot maximum time than large amplitude cycles (Waldmeier, 1968). For our purposes the tight relationship between \(C\) and \(D\) allows us to eliminate the parameter \(D\) from the sunspot model curve and thereby reduce the number of free parameters to 3. The parameter \(D\) is therefore replaced in Eqs. 4 and 5 by expression \[D=(0.25(\pm 0.05)+0.226(\pm 0.02)\times C)^{-2}\,, \tag{6}\] which corresponds to the linear fit (yellow line) in Figure 3. After replacing \(D\) with Eq. 6 we repeated the fitting procedure for each sunspot cycle, but now using the remaining 3-parameter model. The correlation between \(C\) and \(D^{-1/2}\) is so high that the 3-parameter model is roughly equally good as the 4-parameter model in describing each sunspot cycle. The average \(R^{2}\)-value for the 3-parameter model is 0.959 and therefore not much smaller than 0.973 for the 4-parameter model. Note also that the elimination of \(D\) from the model does not significantly alter the values of the remaining parameters \(A\), \(B\) and \(C\). The correlations of the corresponding parameters of the 4-parameter and 3-parameter fits exceed 0.97. Figure 4 shows the relationship between the parameter \(B\) (time of SSN maximum counted from SSN minimum in years) and cycle amplitude \(A\). One can see that there is a general anti-correlation between \(A\) and \(B\). This anti-correlation between the cycle rise time and amplitude has been long known as the Waldmeier effect (Waldmeier, 1935). Here one can see that this anti-correlation is somewhat stronger (cc = -0.88, p-value = \(10^{-4}\)) in even cycles than in odd cycles (cc = -0.69, p-value = 0.014). The correlation between \(A\) and \(B\) found here is clearly higher than, e.g., the correlation between SSN maximum and cycle rise time found by Karak and Choudhuri (2011) (cc=-0.5). It therefore appears that the parameters of the fitted SSN model curve indicate the Waldmeier effect more robustly than the exact SSN data. This is likely because of the sensitivity of the Waldmeier effect to the timing and height of the sunspot maximum, which can be difficult to determine if the cycle has multiple peaks (Karak and Choudhuri, 2011). The linear fits to even and odd cycles depicted by the red and blue lines in Figure 4 are given by equations \[B_{\mbox{even}} = 5.9(\pm 0.4)-0.013(\pm 0.003)\times A_{\mbox{even}} \tag{7}\] \[B_{\mbox{odd}} = 7.2(\pm 1.0)-0.016(\pm 0.006)\times A_{\mbox{odd}}. \tag{8}\] The correlation coefficients and the slopes/intercepts of the best fit regression lines are different in the even and odd cycles only with a weak statistical significance (the p-values for the differences exceed 0.12). However, the mean squared errors of the linear fits to the even and odd cycles are highly significantly different (0.14 for even cycles and 1.01 for odd cycles and the p-value for the difference is 0.003). This result indicates that the Waldmeier effect is more strongly valid for even cycles than for odd cycles. Figure 5 shows the relationship between the final parameter \(C\) (time scale of rising phase) and cycle amplitude \(A\). Here one can also see a general anti-correlation, which is yet another manifestation of the Waldmeier effect.
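Assuming the same model functions as in the sketch above, the reduced 3-parameter curve obtained by eliminating \(D\) through Eq. 6 can be written as follows; only the central values of the fitted coefficients are used, which is an illustrative simplification.

```python
import numpy as np

def D_from_C(C):
    """Eq. 6 (central values): asymmetry parameter implied by the rising-phase time scale C."""
    return (0.25 + 0.226 * C) ** (-2)

def cycle_model_3p(t, A, B, C):
    """3-parameter cycle curve: Eq. 4 with D eliminated through Eq. 6."""
    width = np.where(t <= B, C, C * D_from_C(C))   # C * g(t, B) with g = 1 or D
    return A * np.exp(-((t - B) ** 2) / width ** 2)
```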
Actually, there is also a quite good correlation (cc = 0.91, p-value = \(8\times 10^{-10}\)) between the \(B\) and \(C\) parameters, which explains the two manifestations of the Waldmeier effect. The linear fits to even and odd cycles depicted by the red and blue lines in Figure 4 are given by equations \[C_{\mbox{\small even}} = 3.5(\pm 0.4)-8(\pm 2)\times 10^{-3}\times A_{\mbox{\small even}} \tag{9}\] \[C_{\mbox{\small odd}} = 4.7(\pm 0.7)-1.3(\pm 0.4)\times 10^{-2}\times A_{\mbox{\small odd}}. \tag{10}\] Figure 3: Relationship between the \(C\) and \(D^{-1/2}\) parameters of the 4-parameter fits. Blue dots (red squares) indicate even (odd) numbered cycles. The cycle numbers have been further indicated beside all points. The yellow line depicts the linear fit to all the cycles having a correlation coefficient of 0.94 (p-value is \(10^{-11}\)) The conclusions about the differences between even and odd cycles are the same as for \(B\) vs. \(A\) relation in Figure 4. I.e., the linear relationships or the correlations are only weakly statistically significantly different, but the difference in the mean squared errors is highly significant (p-value is 0.038). Figure 4: Relationship between the \(A\) and \(B\) parameters of the 3-parameter fits. Blue dots (red squares) indicate even (odd) numbered cycles. The cycle numbers have been further indicated beside all points. The blue and red lines depict the linear fits to odd and even cycles respectively. Overall the results in Figures 4-5 indicate that the cycle amplitude \(A\) could further be used to reduce the number of parameters in the sunspot cycle fit. Moreover, there are indications that the accuracies of these fits are significantly different in even and odd cycles. Figure 5: Relationship between the \(A\) and \(C\) parameters of the 3-parameter fits. Blue dots (red squares) indicate even (odd) numbered cycles. The cycle numbers have been further indicated beside all points. The blue and red lines depict the linear fits to odd and even cycles respectively. ## 5 Precursors for cycle parameters Based on the above relations we could further reduce the SSN model curve to a one parameter model. However, each simplification of the model makes it less flexible and decreases the model accuracy. Therefore it is useful to first consider whether the cycle parameters \(A\), \(B\) and \(C\) can be directly predicted by some suitable precursor. Cameron and Schussler (2007) showed that the solar activity level three years before the sunspot minimum starting the cycle is a relatively good predictor for the amplitude of the cycle. This result for sunspot numbers was shown by Petrovay (2020) (his Figure 6), who found a correlation of 0.8 between maximum sunspot number of the cycle and the sunspot number taken 3 years before the sunspot minimum that starts the cycle. A correlation of 0.8 implies that only about 62% variability in cycle amplitudes could be explained by the past sunspot number, thereby offering a rather mediocre accuracy in predicting the coming cycle amplitudes as also Petrovay (2020) mentions. However, still motivated by this result we calculated in Figure 6 the correlation coefficient between the cycle amplitude \(A\) and the lagged sunspot number as a function of the time lag in months before the SSN minimum. 
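The lag scan described above amounts to correlating the fitted amplitudes with SSN values read a fixed number of months before each cycle minimum. A minimal sketch follows; the monthly smoothed-SSN array, the month indices of the minima and the amplitude array are assumed inputs with illustrative names.

```python
import numpy as np

def lagged_amplitude_correlation(ssn_smooth, min_month_idx, amplitudes, lags):
    """Correlation between cycle amplitude A and SSN^2 taken `lag` months before each
    cycle's starting minimum, for every lag in `lags` (all inputs assumed to be arrays)."""
    corrs = []
    for lag in lags:
        precursor = np.array([ssn_smooth[m - lag] for m in min_month_idx])
        corrs.append(np.corrcoef(precursor ** 2, amplitudes)[0, 1])
    return np.array(corrs)

# Hypothetical usage, scanning even and odd cycles separately:
# lags = np.arange(0, 121)
# corr_even = lagged_amplitude_correlation(ssn_smooth, min_idx[even_mask], A_fit[even_mask], lags)
# corr_odd  = lagged_amplitude_correlation(ssn_smooth, min_idx[odd_mask],  A_fit[odd_mask],  lags)
# lags[np.argmax(corr_even)]   # the paper finds the even-cycle optimum near 41 months
```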
Unlike in past studies we did the calculation here separately for even and odd cycles and using the 2nd power of SSN (SSN\({}^{2}\)), which we found to produce slightly larger correlations than SSN (the difference, however, is rather small and not statistically significant). One can see that for the even cycles the correlation is systematically higher than for the odd cycles. The difference between even and odd cycles is quite large, but because of the rather low number of data points (only 12 even and odd cycles) the 95% confidence limits remain rather large even when the correlation coefficient is high. However, there is a location around the lag of 41 months (about 3.4 years) before sunspot minimum where the correlations do differ from each other statistically significantly. This lag of optimal correlation is quite close to the 3 years found by Cameron and Schussler (2007). Note, however, that the good correlation is not specific to exactly the optimum of 41 months but is seen over a broad range of lags from 34 to 44 months. We chose the lag of 41 months for a closer inspection in Figure 7, which displays all the three parameters \(A\), \(B\) and \(C^{1/2}\) as a function of the sunspot number taken 41 months before the sunspot minimum (SSN(41)). One can see that the SSN(41)\({}^{2}\) correlates quite well not only with cycle amplitude \(A\), but with all the three parameters in the even cycles. In the odd cycles there is evident correlation too, but it is clearly much lower than for the even cycles due to the larger scatter. In particular the SSN(41)\({}^{2}\) vs. cycle amplitude \(A\) resembles the plot in Figure 6 of Petrovay (2020), but now reveals a large difference between even and odd cycles. Recently Du (2020) also found that the sunspot number 39 months before the sunspot minimum is a precursor for the maximum SSN of the next cycle and this relation is stronger for even cycles. A similar finding was done by Nagovitsyn and Ivanov (2023). We shall later discuss the reasons for the better correlation in even cycles but for now we concentrate on the fact that SSN(41) can be used as a quite accurate predictor of the three cycle parameters for the even cycles. For cycle amplitude the correlation coefficient is 0.984 (p-value = \(8\times 10^{-9}\)) indicating that about 96.9% of the variation in cycle amplitude may be predicted with SSN(41). For cycle rise time \(B\) the correlation is slightly lower -0.86 (p-value = 0.0003) and for \(C^{1/2}\) (time scale of rising phase) the correlation is -0.89 (p-value = \(10^{-4}\)). The linear fits to the even cycles indicated by the yellow lines in Figure 7 are given by equations \[A_{\mathrm{even}}\ =\ 88(\pm 6)+0.0103(\pm 0.0006)\times\mathrm{SSN}(41)^{2}, \tag{11}\] \[B_{\text{even}} = 4.8(\pm 0.3)-1.3(\pm 0.3)\times 10^{-4}\times\text{SSN}(41)^{2}, \tag{12}\] \[C_{\text{even}} = \left(1.70(\pm 0.05)-3.1(\pm 0.5)\times 10^{-5}\times\text{SSN}(41)^{2} \right)^{2}. \tag{13}\] We also studied the correlation between cycle amplitude and geomagnetic activity, which is known to be a good precursor for the sunspot cycle amplitude. We tested various different quantities calculated from the extended homogenized geomagnetic \(aa\) index. We found that the best predictor for the cycle amplitude for odd sunspot cycles is given by the maximum average September-October \(aa^{3}\) value within the 3-year window extending backward from the sunspot minimum. 
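For illustration, the two precursor recipes introduced above can be sketched as follows: the even-cycle parameter estimates from SSN(41) use the central values of Eqs. 11-13, and the odd-cycle geomagnetic precursor is extracted from an assumed monthly \(aa\) series indexed by date (a pandas layout chosen here for convenience, not the authors' implementation).

```python
import pandas as pd

def even_cycle_params(ssn41):
    """Even-cycle A, B, C from SSN(41), using the central values of Eqs. 11-13."""
    s2 = ssn41 ** 2
    return 88 + 0.0103 * s2, 4.8 - 1.3e-4 * s2, (1.70 - 3.1e-5 * s2) ** 2

def sep_oct_aa3_precursor(aa_monthly, minimum_date):
    """Odd-cycle precursor: largest yearly September-October mean of aa^3 within the
    3-year window ending at the sunspot minimum that starts the cycle.
    `aa_monthly` is assumed to be a pandas Series with a monthly DatetimeIndex."""
    window = aa_monthly.loc[minimum_date - pd.DateOffset(years=3): minimum_date]
    sep_oct = window[window.index.month.isin([9, 10])] ** 3
    return sep_oct.groupby(sep_oct.index.year).mean().max()
```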
Figure 8 shows the relationship between the cycle amplitude and this geomagnetic activity measure separately for the odd cycles (blue) and even cycles (red). The yellow line indicates the best linear fit of \[A=130(\pm 7)+0.0036(\pm 0.0004)\times\max_{[0,3\mathrm{y}]}\left(aa_{\text{Sep-Oct}}^{3}\right). \tag{14}\] The linear correlation coefficient between the odd cycle amplitude and maximum \(aa_{\text{Sep-Oct}}^{3}\) is 0.981 (p \(<9\times 10^{-5}\)). One can also see from Figure 8 that the corresponding relation for the even cycles is much worse, but still statistically significant (correlation 0.76, p = 0.027).
Figure 8: Relationship between cycle amplitude \(A\) and maximum September-October \(aa^{3}\) from a 3-year window before SSN minimum. The blue points indicate odd cycles and red squares the even cycles. The blue line indicates the linear fit to the odd cycles. The green circle indicates the estimated amplitude for cycle 25.
It should be noted, though, that by trying different quantities calculated from the \(aa\) index one can obtain higher correlations for the even cycles as well, but it seems that none of such correlations exceeds the extremely good correlation between even cycle amplitude and SSN(41) found above.
Figure 6: Correlation coefficient between cycle amplitude \(A\) and lagged 13-month smoothed sunspot number (SSN\({}^{2}\)) as a function of the time lag in months before the SSN minimum. Blue (red) curve indicates the odd (even) cycles and the correspondingly colored regions indicate the 95% confidence limits for the correlation coefficients. The green dot indicates the maximum correlation at 41 months before the SSN minimum for the even cycles.
Figure 7: Dependence of \(A\), \(B\) and \(C\) parameters on 13-month smoothed sunspot number 41 months before the sunspot minimum (SSN(41)\({}^{2}\)). Blue dots (red squares) indicate odd (even) numbered cycles. The cycle numbers have been further indicated beside all points. The red lines depict the linear fits to even numbered cycles.
We also evaluated the correlation between the cycle amplitude and the geomagnetic precursor defined above as a function of time lag counted from the sunspot minimum preceding the cycle. I.e., instead of taking the maximum \(aa^{3}_{\text{Sep-Oct}}\) in a 3-year window ending at the cycle minimum, the window was shifted backward in time by varying lags. The resulting correlations are shown in Figure 9 separately for odd (blue) and even (red) cycles. One can see that the highest correlations for odd cycles are indeed found up to 16 months before the cycle minimum. There is a drop in correlation between 16-55 months before minimum but the correlation then rises again and remains rather high (about 0.8) between 55-100 months before the SSN minimum. Note, however, that this does not imply a longer lead time for prediction, since the timing of the SSN minimum is still needed to determine the precursor. Figure 9 also shows that the correlation for even cycles is clearly much lower than for the odd cycles at practically all lags. However, because of the somewhat limited number of data points the statistical 95% confidence limits for the correlations are rather large.
Figure 9: Correlation coefficient between cycle amplitude \(A\) and lagged maximum monthly September-October \(aa^{3}\) within a 3 year window. Blue (red) curve indicates the odd (even) cycles and the correspondingly colored regions indicate the 95% confidence limits for the correlation coefficients.
Even
though there are indications for systematically higher correlation for the odd cycles, there is no firm evidence that the correlations are statistically significantly different for the even and odd cycles. The specific recipe of calculating the geomagnetic activity based precursor for odd cycles was found by trial and error, manually testing a few tens of different (rather random) possibilities. This raises the question of whether the found precursor is statistically significantly better than some other, perhaps more common, choice like a mere the average \(aa\) index in the year of sunspot minimum. We tested the possibility of randomly obtaining a geomagnetic activity based precursor as good as the one found above. This was done by generating \(10^{4}\) random geomagnetic precursor time series by varying the calendar months from which \(aa\) index is taken (2 random calendar months), the length of the time window (randomly selected between 1 and 5 years) and the exponent assigned for the \(aa\) index (randomly selected between 1 to 3). In addition we randomly varied whether we take the maximum of the yearly values within the time window (as we did for the chosen precursor) or the average. For each such randomly generated precursor we calculated the correlation between the precursor and the following cycle amplitude as a function of lag up to 11 years and found the maximum correlation. This procedure simulates the act of randomly choosing a recipe for determining the precursor and finding the maximal correlation over all these lags. Finally, we randomly grouped the remaining correlations into groups of 10 values and determined the maximum correlation in each group. This simulated the act of testing 10 random precursors and choosing the one that gives the maximal correlation at some lag. Now, comparing the correlation coefficient (0.981) for the precursor used above in Figures 8 and 9 to the maximal correlations of these randomly generated precursors in shows that there is only a probability of less than 4.5% to randomly obtain a correlation higher than 0.981. This indicates that the recipe for the geomagnetic precursor we have chosen for the odd cycles is indeed statistically significant by 95% significance level. We also compared the geomagnetic precursor found above to a more commonly used geomagnetic precursor, which is the annual average of the \(aa\) index at the solar minimum, for which the correlation with the following cycle amplitude is 0.834. The correlation for our precursor was 0.981 and the difference of it from 0.834 is statistically highly significant (p-value for the difference is about 0.0007). The length of the sunspot cycle is also an interesting and important quantity. However, it is difficult to estimate from the predicted sunspot cycle curve because typically the SSN does not drop to zero at the end of the cycle. We studied the association of the cycle length and the four cycle parameters, but found no strong relationships, which would allow one to estimate the length of the cycle based on the parameters of the same cycle. However, we found that the length of the cycle seems to significantly correlate with the ratio of \(D\) and \(C\) parameters (i.e., \(D/C\)) of the _previous cycle_. This relationship is shown in Figure 10. The correlation coefficient between \(D/C\) of the previous cycle and length of the current cycle is 0.66 (p = 5.7 \(\times\)\(10^{-4}\)) and is statistically very significant. There are three cycles (6, 13 and 23) which are clear outliers. 
Excluding these cycles yields an even higher and more significant correlation of 0.88 (p = 2.6 \(\times\)\(10^{-7}\)). We note that these correlations are both significantly higher than, e.g., the correlation between the cycle length and the amplitude of the same cycle (about -0.5) (Wolf, 1861; Petrovay, 2020). The yellow line in Figure 10 shows the linear fit excluding the cycles 6, 13 and 23. The equation for this fit is \[L_{i}=9.2(\pm 0.4)+1.5(\pm 0.3)\frac{D_{i-1}}{C_{i-1}}, \tag{15}\] where \(L_{i}\) is the length of cycle \(i\) in years and \(D_{i-1}\) and \(C_{i-1}\) refer to the parameters of previous cycle \(i-1\). ## 6 Prediction of sunspot cycles ### Cross-validated predictions for past cycles and prediction for cycle 25 Based on the discussion above we can predict the sunspot cycle once the time of the sunspot minimum starting the new cycle is known. We use the 3-parameter description (\(A\), \(B\) and \(C\)) for the sunspot cycle of Eqs. 4-5. Parameter \(D\) has been eliminated using Eq. 6. For the even sunspot cycles we directly estimate the three \(A\), \(B\) and \(C\) parameters with \(\mathrm{SSN}(41)^{2}\) as shown in Figure 7. Figure 10: Relationship between the cycle length and the \(D/C\) ratio of the previous cycle. The numbers of each data point refer to the current cycle. The blue points (red squares) correspond to odd (even) cycles and the green point to the estimated length of the cycle 25. The yellow line indicates the linear fit excluding the three cycles 6, 13 and 23. For the odd cycles we first estimate the cycle amplitude \(A\) from the geomagnetic precursor as shown in Figure 8 and then estimate \(B\) and \(C\) using the linear relationships for the odd cycles given depicted in Figures 4 and 5 respectively. Using this approach we predicted past even sunspot cycles starting from cycle 2 and each odd sunspot cycle starting from cycle 11. We predicted each cycle separately with the so-called leave-one-out cross validation method. This means that when predicting the \(i\):th cycle all the fits between different parameters and precursor values discussed above were determined by using data from all other cycles except the \(i\):th cycle. Therefore, the fitted relationships between the parameters and precursors (Eqs. 8, 10, 11, 12, 13, 14) change from one predicted cycle to the next. This variability in the model parameters together with the residual variability incorporates the total prediction uncertainty of the model. It is important to note that the numerical values in Eqs. 11, 12 and 13 for \(A\), \(B\) and \(C\) of even cycles and in Eqs. 14, 8 and 10 for \(A\), \(B\) and \(C\) of odd cycles as well as their standard errors correspond to the fitted values when all available sunspot cycles are used in the fit. These values are therefore appropriate for prediction of sunspot cycles from cycle 25 onward. An important step in applying the cross-validation method is to estimate the prediction error of the model. Therefore, when proceeding through all the past cycles 1-24 (excluding odd cycles 1, 3, 5, 7 and 9, for which no geomagnetic precursor could be determined) we obtained the residuals of the 13-month smoothed \(\mathrm{SSN}^{3/4}\) values and the corresponding predicted values each time neglecting the cycle to be predicted as \[r=\mathrm{SSN}^{3/4}-\mathrm{SSN}^{3/4}_{\mbox{pred}}. 
\tag{16}\] The exponent of \(3/4\) in the above equation was used because the residuals calculated this way were more homoscedastic, i.e., the residual variance was more uniform over different values of \(\mathrm{SSN}\) compared to regular residuals in linear scale (exponent of 1 in Eq. 16). We then used the collection of residuals from Eq. 16 to generate a spread of possible sunspot number predictions for the \(i\):th predicted cycle. This was done by bootstrapping \(10^{5}\) values for the residuals (i.e., randomly resampling with replacement from the collection of residuals) for each predicted monthly sunspot number value in the cycle. The residuals were added to the \(\mathrm{SSN}^{3/4}_{\mbox{pred}}\) values and the result was then converted back to linear scale by the transformation \[\mathrm{SSN}_{k}=\left(\mathrm{SSN}^{3/4}_{\mbox{pred}}+r_{k}\right)^{4/3}, \tag{17}\] where \(k=1,2,...,10^{5}\), \(r_{k}\) indicates the \(k\):th bootstrapped residual and \(\mathrm{SSN}_{k}\) indicates the \(k\):th predicted SSN value for a particular month. These \(10^{5}\) values form an ensemble of \(\mathrm{SSN}\) predictions for each monthly \(\mathrm{SSN}\) value in the predicted cycle. Figure 11 shows the predicted past cycles (green curves) and their uncertainty ranges (blue shading) together with the 13-month smoothed sunspot number (black curve) and the optimal 4-parameter model curves (red curves) as in Figure 2. The figure also shows the predicted curve for cycle 25 (magenta curve) and as a comparison the cycle 25 prediction offered by the Space Weather Prediction Center ([https://www.swpc.noaa.gov/products/solar-cycle-progression](https://www.swpc.noaa.gov/products/solar-cycle-progression)). One can see that the predicted past cycles are rather close to both the optimal 4-parameter model curves and the 13-month smoothed sunspot number. Particularly noteworthy is the fact that the amplitude of all of the predicted past cycles quite accurately matches the real amplitude of the sunspot cycles. This is true for the very small cycles 6, 12, 14 and 24 and also for the largest cycles, e.g., cycle 19. Furthermore, by definition about 95% of the monthly values of the 13-month smoothed sunspot number are within the 2-standard deviation uncertainty range of the predicted curve. This extremely good performance of the past predictions gives strong reasons to believe that the prediction of the sunspot cycle 25 is reliable as well. Figure 12 shows a closeup of cycle 25 prediction in the same format as in Figure 11, but with the addition of the monthly unsmoothed values of SSN also included as the thin black curve. Our predicted curve peaks in May 2024 and the peak value of the cycle is \(171\pm 23\) (1 standard deviation uncertainty). Using Eq. 15 and the \(D\) and \(C\) parameters of cycle 24 the predicted length of cycle 25 is about \(9.7\pm 1.9\) years (2 standard deviation confidence interval). This indicates that the end of cycle 25 is attained likely in September 2029 with the 2-standard deviation uncertainty range extending from October 2027 to July 2031. At the time of writing this paper the cycle 25 has already begun and the recorded SSN is already significantly higher than the prediction offered by the SWPC. On the other hand, the recorded SSN follows quite closely our predicted curve, which indicates that the cycle 25 will be considerably stronger than cycle 24 and roughly the same size as cycle 23. There have also been a range of other predictions for cycle 25. 
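The cross-validation residual bootstrap of Eqs. 16-17 described earlier in this section can be sketched in a few lines; the predicted monthly curve and the pooled residuals are assumed to be plain NumPy arrays, and the clip at zero before the back-transform is an added numerical guard rather than part of the published procedure.

```python
import numpy as np

def bootstrap_cycle_ensemble(ssn_pred, residual_pool, n_boot=100_000, seed=1):
    """Ensemble of monthly SSN predictions for one cycle.

    ssn_pred      : predicted monthly SSN values of the cycle (linear scale)
    residual_pool : cross-validation residuals r = SSN^(3/4) - SSN_pred^(3/4) (Eq. 16)
    Returns an (n_boot, n_months) array of resampled predictions via Eq. 17.
    """
    rng = np.random.default_rng(seed)
    pred34 = np.asarray(ssn_pred, dtype=float) ** 0.75
    r = rng.choice(residual_pool, size=(n_boot, pred34.size), replace=True)
    # Clipping keeps the 4/3 back-transform real for strongly negative residual draws
    # (an added guard, not part of the published recipe).
    return np.clip(pred34 + r, 0.0, None) ** (4.0 / 3.0)

# ensemble = bootstrap_cycle_ensemble(ssn25_pred_curve, residual_pool)
# median  = np.median(ensemble, axis=0)
# lo, hi  = np.percentile(ensemble, [2.5, 97.5], axis=0)   # ~2-sigma band per month
```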
For example, Upton and Hathaway (2018) used advective flux transport modeling to predict that cycle 25 amplitude will be about 95% of cycle 24 amplitude which would make cycle 25 the weakest cycle in 100 years. Bhowmik and Nandy (2018), on the other hand, used solar dynamo modeling to predict that cycle 25 would be slightly stronger (about 14%) than cycle 24. Pesnell and Schatten (2018) used the SODA index (based on a combination of solar polar magnetic field and solar F10.7 index) to predict an amplitude of 135\(\pm\)25 for cycle 25, i.e., slightly larger than cycle 24. Kumar et al. (2021) used various polar field precursors extracted, e.g., from solar magnetograms, to predict a cycle amplitude of 126\(\pm\)3 for cycle 25. Kumar et al. (2022) used the correlation between the rise rate of polar field and the amplitude of the next cycle to predict an amplitude of 137\(\pm\)23 for cycle 25. Sarp et al. (2018) used non-linear time series modeling approach to predict a clearly stronger cycle 25 with an amplitude of 154\(\pm\)12 peaking in early 2023. Li et al. (2018) used statistical modeling to reach a similar prediction with a predicted amplitude of 168.5\(\pm\)16.3 and peak of the cycle in October 2024. Both Sarp et al. (2018) and Li et al. (2018) predictions are fairly close, but slightly lower than our prediction. A quite different prediction was given by McIntosh et al. (2020), who used the timing of termination of toroidal bands of solar activity to predict a rather strong cycle 25 with amplitude of 233\(\pm\)21. However, this prediction was recently revised to an amplitude of 184\(\pm\)17 (McIntosh et al., 2022), which is in agreement with our prediction when considering the ranges of uncertainty. Du (2022) used the correlation between the cycle rising rate and cycle amplitude to predict the cycle 25 based on the 2 years of data from the beginning of the cycle. They predicted the cycle amplitude to be 135.5\(\pm\)33.2, which is somewhat lower than our prediction, although not in complete disagreement given the uncertainty range. Penza et al. (2021) found a correlation between the parameters describing the sunspot cycle shape of even and subsequent odd cycles. Based on this they predicted the cycle 25 to be similar or slightly larger than cycle 24. While the studies discussed above are only a subset of all predictions made for cycle 25 it seems that our prediction is clearly above the average in predicted cycle amplitude and also clearly above the SWPC prediction issued by the Solar Cycle Prediction Panel. ### Attempt at predicting the cycle 26 Above we found that for even numbered cycles the 13-month smoothed sunspot number evaluated 41 months before the start of cycle provides an extremely good estimate for the amplitude and other parameters of the cycle. This leads to an interesting question: how accurately could one use the predicted cycle 25 SSN curve to provide a prediction of cycle 26? Evidently the uncertainty of such a prediction would be rather large but we shall here attempt to make one also for cycle 26. The first step is to evaluate how well the predicted SSN 41 months before the sunspot minima actually correspond to the true values, which were used as a precursor for even cycles. Figure 13 shows the relationship between the real and predicted 13-month smoothed SSN evaluated 41 months before the minima that end each sunspot cycle. Both odd and even cycles seem to adhere to the same overall linear relationship, which has a high correlation of 0.832 (p = 10\({}^{-5}\)). 
The linear relationship is given by the equation \[\mathrm{SSN}(41)_{\mbox{real}}=20(\pm 10)+0.72(\pm 0.12)\times\mathrm{SSN}(41)_{ \mbox{pred}}. \tag{18}\] Ideally there would be one-to-one relationship between the real and predicted values, but the modeled SSN curves seem to systematically slightly underestimate (overestimate) small (large) SSN(41) values. We can use this fit and its 95% prediction error limits (see Figure 13) to predict the SSN 41 months before the sunspot minimum that starts cycle 26. The timing of this minimum is determined by the timing of the minimum that starts cycle 25 and the cycle 25 length evaluated above from Eq. 15. Once the SSN(41) value to be used as a precursor for cycle 26 is known we can use Eq. 11 to estimate the cycle amplitude. Because of the uncertainties associated to the length of cycle 25 and the scaling of predicted SSN(41) according to Eq. 18 we evaluated the spread of possible cycle 26 amplitudes using a Monte Carlo simulation having 10\({}^{4}\) rounds. In each round we randomly generated a value for the length of the cycle 25 within the range of its uncertainty. This was then used to calculate the timing of 41 months before the end of cycle 25 and the SSN value at that time using the predicted SSN curve for cycle 25 (Figure 12). This value was then used to calculate a prediction for the cycle amplitude using Eq. 11. The histogram of the Monte Carlo simulation results is shown in Figure 14. As expected the results indicate a quite a large uncertainty range covering practically all past sunspot cycles. However, despite the large range of uncertainty some interesting and non-trivial aspects are seen. The median of the results implies that cycle 26 would be even slightly stronger than cycle 25. In fact, based on these results the probability that cycle 26 will be weaker than cycle 25 is only about 19%. The results also imply an even clearer difference to cycle 24, which was the weakest cycle of the last 100 years. The probability that cycle 26 would be weaker than cycle 24 is only 0.8%. cycle parameters were significantly stronger in even cycles compared to the odd ones. For example the well-known Waldmeier effect that cycle amplitude and its rise time are inversely correlated was more strictly valid for even cycles. A similar finding for the Waldmeier effect was also reported, e.g., by Du (2022a) and Dikpati et al. (2008). This result implies that for some reason the even numbered cycles possess a smaller dimensionality than odd numbered cycles. Such a systematic difference between even and odd cycles may imply a connection to the more fundamental 22-year magnetic Hale cycle of the Sun. Du (2022a) also found significant differences in the Waldmeier effect between the two hemispheres with the southern hemisphere displaying the normal Waldmeier effect, which is stronger in even cycles, while the northern hemisphere displayed an inverse-Waldmeier effect, which is stronger in odd cycles. This implies that there may also be a connection between the Waldmeier effect and the hemispheric asymmetry in the sunspot number. Our approach to prediction of sunspot cycles is based on statistical precursors for the cycle parameters. Following Cameron and Schussler (2007) we found that the 13-month smoothed sunspot number evaluated 41 months before the sunspot minimum that begins a cycle, SSN(41), is on average a fair predictor for the next cycle when considering all sunspot cycles. 
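A rough sketch of the Monte Carlo propagation used above for cycle 26 is given below. The cycle-25 curve, the Gaussian treatment of the cycle-length uncertainty, the Eq. 18 rescaling and the Eq. 11 amplitude relation are combined using central values and an assumed scatter, so the numbers it produces are only illustrative.

```python
import numpy as np

def cycle26_amplitude_samples(ssn25_pred, n=10_000, seed=7):
    """Monte Carlo samples of the cycle-26 amplitude.

    ssn25_pred : predicted monthly SSN curve of cycle 25, starting at the cycle-25 minimum.
    The cycle-25 length is drawn as 9.7 +/- 1.9 yr (2-sigma); Eq. 18 rescales the predicted
    SSN(41) with an assumed Gaussian scatter; Eq. 11 (central values) maps SSN(41) to A.
    """
    rng = np.random.default_rng(seed)
    sigma_len = 1.9 * 12 / 2                                   # 2-sigma half-width -> 1 sigma, in months
    lengths = rng.normal(9.7 * 12, sigma_len, n)
    amps = np.empty(n)
    for i, L in enumerate(lengths):
        idx = int(np.clip(round(L) - 41, 0, len(ssn25_pred) - 1))  # 41 months before the next minimum
        ssn41_pred = ssn25_pred[idx]
        ssn41_real = rng.normal(20 + 0.72 * ssn41_pred, 10)        # Eq. 18; scatter assumed for illustration
        amps[i] = 88 + 0.0103 * ssn41_real ** 2                    # Eq. 11
    return amps

# samples = cycle26_amplitude_samples(ssn25_pred_curve)
# np.median(samples), np.mean(samples < amplitude_cycle25)   # e.g. P(cycle 26 weaker than cycle 25)
```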
However, we found that this relation is much stronger for even cycles than for odd cycles, for which the SSN(41) could not be very accurately used as a predictor. The advantage of SSN(41) precursor is that we can predict for even cycles all the cycle parameters (which correlate well with the amplitude) quite accurately. Cameron and Schussler (2007) explained the tendency of past SSN to correlate with the next cycle amplitude as a result of overlapping sunspot cycles. While the past cycle is still declining the new cycle begins. Because of the Waldmeier effect the stronger the new cycle will be the faster it will rise to the maximum and the earlier the intersection time of the old and the new cycle is attained. The earlier the intersection time, the higher the SSN 41 months earlier in the declining phase of the previous cycle. The fact that this connection is here found to be more strictly valid for even cycles arises because of the fact that the Waldmeier effect is also tighter in even cycles. The question of why the Waldmeier effect is different in even and odd cycles is still open and should be investigated in future studies. For odd cycles we found that the maximum Sep-Oct geomagnetic \(aa\) index within 3 years preceding the sunspot minimum that begins the cycle is an extremely good precursor for the cycle amplitude. It has been long recognized that geomagnetic activity close to solar minimum reflects the strength of the open solar flux, which is at these times connected to the poloidal magnetic flux extending from the Sun's polar coronal holes. It is curious, however, that the best precursor for odd cycles was found to be connected to geomagnetic activity close to the fall equinox. Some other past studies have used geomagnetic activity averaged in other some ways over different periods of time and found good correlations to the next cycle amplitude. While our Sep-Oct \(aa\) precursor is probably not statistically significantly better than some of these other measures there might be a physical reason for the preference of Sep-Oct season. It is known that geomagnetic activity has strong seasonal variation, which is largely due to the Russell-McPherron (RMP) effect (Russell and McPherron, 1973; Lockwood et al., 2020). The RMP effect describes the fact that interplanetary magnetic field (IMF), which is oriented along the solar equatorial plane, projects onto the Z-axis of the GSM coordinate system close to the fall and spring equinoxes and thereby may enhance energy input from the solar wind into the magnetosphere/ionosphere system. The RMP effect is also dependent on the polarity of the IMF so that during fall (spring) equinox the IMF pointing away from (towards) the Sun enhances solar wind energy input and therefore leads into larger geomagnetic activity as well. Often both polarities of the IMF are seen within a solar rotation but especially close to solar minima the heliospheric current sheet is often flatter, which allows Earth to be more exposed to the magnetic polarity connected to Sun's northern (southern) pole in fall (spring) due to the Earth's changing heliographic latitude over the course of the year (Rosenberg and Coleman Jr., 1969). According to the 22-year Hale cycle the northern solar pole has a positive magnetic polarity close to sunspot minima preceding odd cycles, which leads to a dominance of away sector close to fall equinoxes (Hiltula and Mursula, 2007; Vokhmyanin and Ponyavin, 2012). 
Furthermore, it has been shown that the heliospheric current sheet is systematically tilted southward in the declining phase (Hiltula and Mursula, 2006), which further enhances the dominance of the away sector in fall and decreases the dominance of the toward sector in spring prior to odd cycles. Therefore, it is expected that also the geomagnetic activity portrayed by the _aa_ index is most sensitively proportional to the strength of the IMF (i.e., to open solar magnetic flux connected to solar polar field) at these times. A detailed confirmation of this interpretation is warranted, but out of the scope of this study. In addition to cycle amplitude and other parameters we found a curious statistical relation between the length of the sunspot cycle and the ratio of \(D\) (cycle asymmetry) and \(C\) (time scale of rising phase) of the preceding cycle. The fact that such properties of the preceding cycle somehow are connected to the length of the next cycle again highlights the fundamental 22-year Hale cyclicity of solar magnetism. While there is no physical explanation for this relationship at the moment it can be statistically used to estimate the cycle length perhaps a bit more accurately than previous metrics (Petrovay, 2020). Using the found precursors we used cross-validation to test their prediction accuracy by predicting the past solar cycles. For all past cycles the predictions were very close to the real sunspot cycles thereby giving strong confidence that the prediction of future cycles would be equally successful. We proceeded to predict the odd cycle 25 and found that its amplitude will be \(171\pm 23\), thus about 1.6 times stronger than cycle 24. There are already clear indications that our prediction closely follows the progression of the cycle 25 which has already started at the time of writing this paper. It is also noteworthy that the prediction issued by the Solar Cycle 25 Prediction Panel at the SWPC suggests cycle 25 to be similar to cycle 24, which is already now clearly below the current sunspot levels. Using the predicted cycle 25 and the fact that SSN(41) could be used as a predictor for the even cycles we provided a rough prediction also for cycle 26. As expected the uncertainty range of the prediction was rather large, but based on the results it seems rather likely that the cycle 26 will be stronger than both cycles 24 and 25. Therefore, we find no evidence for an imminent drop of solar activity to a grand solar minimum as suggested by several past studies (e.g. Abreu et al., 2008; Owens et al., 2011; Lockwood et al., 2011). Overall these results display the capability to predict even and odd cycles using different precursors with rather high accuracy. The results also clearly indicate a connection between odd-even pairs of sunspot cycles and highlight the 22-year Hale cyclicity. Accordingly, the Hale cyclicity should be considered more carefully also by more physically motivated dynamo and flux transport prediction models of solar activity. ###### Acknowledgements. We acknowledge the financial support by the Academy of Finland to the PROSPECT (project no. 321440). The sunspot number data was obtained from World Data Center SILSO, Royal Observatory of Belgium, Brussels ([https://www.sidc.be/SILSO/home](https://www.sidc.be/SILSO/home)).
We carry out this study by dividing the sunspot cycles into even and odd cycles, thereby taking into account the 22-year magnetic Hale cycle of the Sun. We first show that the full evolution and shape of a sunspot cycle is described remarkably well by a simple parameterized mathematical expression. The parameters of even sunspot cycles can be predicted very accurately using the sunspot number 41 months before the sunspot minimum that starts the cycle. The parameters of odd cycles are best predicted by the maximum of a geomagnetic activity index within the three years preceding the sunspot minimum. Based on these precursors we predicted the past sunspot cycles and evaluated the performance with a cross-validation approach, finding that the past cycles are predicted very accurately. Predicting the coming sunspot cycle 25 gives an amplitude of 171 ± 23 and an expected end of the cycle around September 2029.
2309.03567
The Devil is in the Tails: How Long-Tailed Code Distributions Impact Large Language Models
Learning-based techniques, especially advanced Large Language Models (LLMs) for code, have gained considerable popularity in various software engineering (SE) tasks. However, most existing works focus on designing better learning-based models and pay less attention to the properties of datasets. Learning-based models, including popular LLMs for code, heavily rely on data, and the data's properties (e.g., data distribution) could significantly affect their behavior. We conducted an exploratory study on the distribution of SE data and found that such data usually follows a skewed distribution (i.e., long-tailed distribution) where a small number of classes have an extensive collection of samples, while a large number of classes have very few samples. We investigate three distinct SE tasks and analyze the impacts of long-tailed distribution on the performance of LLMs for code. Our experimental results reveal that the long-tailed distribution has a substantial impact on the effectiveness of LLMs for code. Specifically, LLMs for code perform between 30.0\% and 254.0\% worse on data samples associated with infrequent labels compared to data samples of frequent labels. Our study provides a better understanding of the effects of long-tailed distributions on popular LLMs for code and insights for the future development of SE automation.
Xin Zhou, Kisub Kim, Bowen Xu, Jiakun Liu, DongGyun Han, David Lo
2023-09-07T08:53:16
http://arxiv.org/abs/2309.03567v1
# The Devil is in the Tails: How Long-Tailed Code Distributions Impact Large Language Models ###### Abstract Learning-based techniques, especially advanced Large Language Models (LLMs) for code, have gained considerable popularity in various software engineering (SE) tasks. However, most existing works focus on designing better learning-based models and pay less attention to the properties of datasets. Learning-based models, including popular LLMs for code, heavily rely on data, and the data's properties (e.g., data distribution) could significantly affect their behavior. We conducted an exploratory study on the distribution of SE data and found that such data usually follows a skewed distribution (i.e., long-tailed distribution) where a small number of classes have an extensive collection of samples, while a large number of classes have very few samples. We investigate three distinct SE tasks and analyze the impacts of long-tailed distribution on the performance of LLMs for code. Our experimental results reveal that the long-tailed distribution has a substantial impact on the effectiveness of LLMs for code. Specifically, LLMs for code perform between 30.0% and 254.0% worse on data samples associated with infrequent labels compared to data samples of frequent labels. Our study provides a better understanding of the effects of long-tailed distributions on popular LLMs for code and insights for the future development of SE automation. ## I Introduction Data distribution refers to the way in which a set of data is spread out or distributed across different values. It is a critical factor for machine learning-based approaches, as it is the presence of repetitive patterns within the data distribution that enables the automated tasks to be performed using machine learning tools [1, 2]. Understanding the data distribution in software engineering (SE) could guide researchers to better design automatic tools to help developers. Hindle et al. [3] reported a well-known finding on data distribution namely the naturalness of code: code exhibits a high degree of repetition, which makes code predictable by language models. This fundamental fact supports researchers in employing language models in various SE tasks [4, 5, 6, 7] to automatically generate/predict code. Beyond this important finding, we further observe that the data distribution of SE datasets could be long-tailed: some specific code tokens, APIs, libraries, or tools could massively occur in many software systems while a vast number of others only have few occurrences. Examining this long-tailed distribution is crucial to the software engineering domain not only because of its prevalence in real-world datasets [8, 9, 10] but also its substantial impact on the performance of machine learning models [11]. An example of the long-tailed distribution in the software engineering domain can be observed in the distribution of Common Weakness Enumeration (CWE) [12]. Figure 1 demonstrates the distribution of security patches across various CWE types gathered from 1,560 open-source software repositories [12].1 We observe that the top 6 out of 113 CWE types contain almost 50% of the samples while the remaining 107 types of CWE have fewer occurrences each. Certainly, a CWE type with low occurrence does not mean it is less important. For instance, 14 out of the top 25 most dangerous CWE types, in the year of 2022 [13], were categorized as a part of the 107 infrequent CWE types. 
An unidentified vulnerability potentially brings critical issues to individuals or organizations when it is exploited by attackers [14, 15]. Footnote 1: Figure 1 only presents the 20 most frequent CWE types and the remaining 93 types with lower frequency are omitted for better visualization.
Fig. 1: CWE type distribution of security patches [12].
In this study, we focus on investigating the existence of long-tailed distributions in labels of SE datasets, as learning-based approaches with labeled data have become the mainstream for most SE tasks [16, 17]. Labels in SE datasets are vital as they represent the intents and goals of the tasks [18]. Formally, the **long-tailed distribution exists in the labels of SE datasets** if _a small proportion of labels has a vast number of data samples, while a large number of other labels have very few data samples._ To facilitate our analysis, we further categorize all labels into two categories: **head labels**, which refer to the most frequently occurring labels accounting for 50% of the data samples, and **tail labels**, which consist of infrequent labels and account for the remaining 50% of the data samples. For instance, when considering CWE types as labels, the top 6 CWE types depicted in Figure 1 correspond to the head labels, while the other 107 CWE types (including 93 omitted types) correspond to the tail labels according to the portion of their data samples. Without a thorough exploration of the long-tailed distribution, we may not fully realize the poor performance of learning-based approaches on the tail. A motivating example in Table I demonstrates the vulnerability type prediction results of the state-of-the-art approach, TreeVul [12]. It shows the accuracy scores on patches with the infrequent type "CWE-863" (i.e., a tail label), which accounts for only 1.3% of all patches in the entire dataset. Notably, TreeVul achieves a significantly low accuracy (11.1%) on this specific type compared to the overall accuracy on the entire dataset (73.1%). In contrast, for the frequent type "CWE-79" (i.e., a head label), TreeVul achieves a much higher accuracy (88.5%). To comprehensively investigate the impact of long-tailed code distribution on learning-based SE models, we focus on two specific categories of SE models: 1) popular Large Language Models (LLMs) for code, including CodeBERT [19] and CodeT5 [20], and 2) the state-of-the-art approaches of SE tasks. Our focus on LLMs for code is based on their widespread popularity and extensive adoption within the SE community [21, 22, 23]. A wide range of research endeavors have aimed at enhancing the performance of LLMs for code [24], delving into their robustness [25], and unveiling the inherent operational mechanisms of LLMs for code [26]. Moreover, other studies have explored techniques for compact model compression [27] and scrutinized the effects of data quality issues (e.g., label correctness) on the effectiveness of LLMs for code [28]. However, it is noteworthy that the SE community has not yet substantially investigated the impact of the long-tailed code distribution. This underpins our choice to consider LLMs for code as the studied models. Additionally, we studied the state-of-the-art (SOTA) approaches to distinguish the impacts of long-tailed distribution on the most advanced approaches for SE tasks. Interestingly, the SOTA approaches of the studied tasks are all built upon LLMs.
In this study, we investigate the long-tailed distribution in labeled SE datasets, focusing on three valuable downstream tasks. We design an automatic long-tailed distribution analysis tool namely LTAnalyzer. LTAnalyzer first identifies the suitable unique labels for both classification and generation tasks and quantifies the long-tailedness based on the Gini coefficient [29]. We use LTAnalyzer to identify the existence of long-tailed distributions in the datasets of three valuable downstream tasks. To understand the impact of long-tailed distribution, we compare the model performance on the head and tail data for the state-of-the-art approaches and popular LLMs for code such as CodeBERT and CodeT5. We observe that those models perform 30.0-254.0% worse on the tail data compared to the head. We explore the potential solutions (i.e., Focal Loss [30] and LRT [31]) to improve the performance on the tail, which are originally proposed in computer vision. Specifically, these solutions assign higher weights to the tail during training so that learning-based models may learn more effective features of the tail. The solutions could improve the model performance on the tail images by 9.0% and 29.0%, respectively. However, they are relatively ineffective in SE datasets, resulting in only marginal improvements in accuracy scores ranging from 0.3% to 1.4%. Alternatively, we further explore with a simple approach to identify the tail data during inference (the labels are unknown). Specifically, we fine-tune the CodeBERT [19] model to perform binary classification (i.e., predicting whether test data belongs to the tail or the head). The generated predictions of automatic SE tools on the tail data are more likely to be wrong and not reliable. We aim to identify the tail data and warn users about the potential unreliability of the predictions generated by SE tools because they are made on the tail data. Results show that the simple approach could achieve an accuracy of 66.7% to 84.4% in identifying the tail in studied tasks. We structure our study by answering the following research questions: 1. **RQ1. To what extent do long-tailed distributions exist in the studied SE datasets?** 2. **RQ2. How do the long-tailed distributions affect the effectiveness of learning-based SE techniques?** 3. **RQ3. How effective are the potential solutions to address long-tailed distributions for SE tasks?** 4. **RQ4. How accurately can we identify the tail?** Based on our findings, we provide insightful implications for future research: (1) Researchers should explicitly report the performance of their approaches in tails. (2) New approaches are needed to address the long-tailed distribution. Researchers should consider taking the rich label relationships in SE datasets into account. (3) Identifying the tail data is a promising direction to mitigate the negative impacts of long-tailed distribution. In summary, our contributions are as follows: * To the best of our knowledge, we are the first to study the impact of long-tailed code distribution on LLMs for code. * We found that the long-tailed distribution has a substantial impact on the performance of LLMs for code. 
\begin{table} \begin{tabular}{l|c|c} \hline **CWE Type** & **Data Ratio** & **Accuracy** \\ \hline CWE-863: Incorrect Authorization & 1.3\% & 11.1\% \\ \hline CWE-79: Improper Neutralization & \multirow{2}{*}{15.8\%} & \multirow{2}{*}{88.5\%} \\ of Input During Web Page Generation & & \\ \hline \end{tabular} \end{table} TABLE I: Results of TreeVul on patches of CWE-863 (one infrequent type) and CWE-79 (one frequent type) * We developed an automated tool that can detect and quantify the long-tailed distribution's existence and degree. ## II Preliminaries and Related Work In this section, we clarify the definitions of long-tailed distributions in various communities. Then, we describe the related work and clarify the differences. ### _Long-tailed Distribution_ It is worth noting that the "long-tailed distribution" has different focuses in the business/statistics community and the computer science community. In the business/statistics communities [32, 9], long-tailed distribution is defined based on mathematical forms, such as exponential distributions, and is mainly used for theoretical analysis, such as examining whether a business sale distribution follows a specific exponential distribution or not. In contrast, the computer science community uses long-tailed distribution as a general concept to indicate skewed distributions [11, 10]. In other words, a "long-tailed distribution" is synonymous with a "skewed distribution" when many classes of labels are available. The computer science community focuses mainly on analyzing the impact of long-tailed distributions on machine learning models. The majority of studies exploring the impact of long-tailed distributions on machine learning models are conducted in the field of computer vision. Within this domain, researchers mainly focus on image classification tasks [11, 31] and have discovered that machine learning models face difficulties in accurately predicting labels for images in the tail [11]. In this study, we adopt the computer science community's definition and usage of the term "long-tailed" to define and analyze the long-tailed distribution in software engineering data. Different from the studies in computer vision, our study investigates classification and generation tasks in SE. Furthermore, in the realm of SE, some existing studies [33, 34] have also noticed the long-tailed distribution phenomenon in code. For instance, Lopes and Ossher [33] unveiled that the sizes of Java projects conform to a long-tailed distribution: a considerable number of Java projects belong to the small or medium scale category, while only a limited portion of these projects are of large magnitude. Similarly, Borges et al. [34] identified that the popularity pattern of software artifacts exhibits a long-tailed distribution, signifying that a small fraction of software artifacts can attain widespread popularity. However, those existing studies mainly described their observation of long-tailed code distribution in a specific task/dataset. In contrast, our study first studies the existence of long-tailed code distributions in three distinct SE tasks that have not been explored in prior studies. Subsequently, we investigate how such long-tailed code distributions influence the performance of popular LLMs for code. ### _Class Imbalance in SE_ The issue of class imbalance in software engineering (SE) is closely relevant to the long-tailed distribution. The class imbalance problem is a well-known issue in SE, particularly in defect prediction. 
In defect prediction, there are naturally many more non-defective instances than defective ones [35, 36, 37]. Class imbalance negatively impacts the accuracy of defect prediction models [38, 39, 40]. To address this, data oversampling techniques such as random oversampling [41] and SMOTE [42, 43, 44] are widely employed to generate more instances for the minority classes. Random oversampling increases the number of samples in the minority class by randomly repeating samples from that class, while SMOTE generates new samples for the minority class through linear interpolation between minority neighbors. Besides, SMOTE is mainly used to generate new samples for hand-crafted feature datasets [42, 43, 44]. Class imbalance typically refers to an uneven distribution of data among a few classes, often only two. In contrast, a long-tailed distribution represents an extreme case of class imbalance where there are numerous classes or labels, creating a more challenging problem. For instance, computer vision datasets [10, 45] have 365-8,148 classes, while SE datasets [7, 6, 12] can have 117-99,317 classes, as confirmed later in this paper. Therefore, the long-tailed distribution requires more sophisticated designs to address a vast number of tail classes/labels with rare data samples. In this study, we intentionally avoid investigating the techniques of random oversampling and SMOTE for handling class imbalance in the context of long-tailed distribution. This decision is grounded in practical considerations. While random oversampling entails duplicating minority class samples to achieve a balanced dataset among all classes, its applicability diminishes significantly under long-tailed distributions. To illustrate, envision a scenario with 500 classes, where one head class possesses 1,000 samples, and the remaining 499 classes each consist of only 5 samples. Implementing random oversampling would yield a dataset containing 500,000 samples, magnifying its size by approximately _142_ times in comparison to the original dataset. This substantial inflation of the dataset would notably hamper the training process of SE models and necessitate substantially greater computational resources. Given that one of our studied datasets encompasses 99,317 classes with at least 5,000 occurrences for head classes, the random sampling technique becomes unfeasible. Additionally, we notice that the SMOTE technique is tailored for hand-crafted features. However, the input code data in our studied tasks consists of non-structural source code snippets, thereby precluding the direct application of SMOTE to generate new code samples for tail classes. ## III Methodology ### _Research Questions_ Figure 2 provides an overview of our research questions and their interrelationships. Formally, our study focuses on addressing the following research questions: 1. **RQ1. To what extent do long-tailed distributions exist in the studied SE datasets?** Long-tailed distributions have been observed in various types of data, such as images and business sales. However, the presence and extent of long-tailed distributions in SE datasets have not been explored yet. Therefore, to investigate this research question, we utilize our proposed long-tailed distribution analysis tool LTAnalyzer to confirm the existence of long-tailed distributions in the datasets of three downstream SE tasks: API sequence recommendation, code revision recommendation, and vulnerability type prediction. 2. **RQ2. 
How do the long-tailed distributions affect the effectiveness of learning-based SE techniques?** The impact of long-tailed distributions on learning-based SE techniques has not been thoroughly explored yet. In this research question, we investigate the impact of long-tailed distributions on the effectiveness of learning-based SE techniques, with a focus on two popular LLMs for code (CodeBERT and CodeT5) and state-of-the-art approaches for three studied tasks. 3. **RQ3. How effective are the potential solutions to address long-tailed distributions for SE tasks?** Several techniques have been proposed in computer vision to specifically address long-tailed distributions in images [11, 30, 31]. Although these techniques were not initially developed for SE datasets, they have the potential to be effective solutions to address the long-tailed distributions in SE datasets. Thus, in this research question, we investigate the effectiveness of two techniques (Focal Loss [30] and LRT [31]) from computer vision on long-tailed SE datasets. Focal Loss is a widely used solution to mitigate the effect of long-tailed distribution in general. LRT is the state-of-the-art mitigation solution in computer vision. 4. **RQ4. How accurately can we identify the tail?** Learning-based SE approaches may struggle to learn effective features for tail data, which can lead to erroneous predictions. During inference, we do not know whether the test data belongs to the head or tail while SE tools are more likely to make wrong predictions on the tail than the head. To address this, identifying tail data during inference can be useful for warning users about the reliability of predicted labels. Therefore, this research question aims to investigate how accurately we can classify whether a test data instance belongs to the head or tail with a fine-tuned CodeBERT model. ### _Study Tasks_ According to a recent literature review [17], the three most commonly studied task formulations in software engineering between 2014 and 2019 were generation-based, classification-based, and retrieval-based tasks. In this study, we prioritize the top two actively studied task formulations: tasks based on generation and classification. To select suitable generation and classification tasks to study the extent and impact of long-tailed distribution, we follow two considerations: (1) A candidate task should have a dataset mined from a considerable number of software projects. This enhances the representativeness of the dataset. (2) A candidate task contains diverse labels with a relatively large number of unique labels. Following the considerations, we choose three downstream tasks as the study subjects: two generation-based tasks, which are API sequence recommendation [7] and code revision recommendation [6], along with a multi-class classification-based task, vulnerability type prediction [12]. **API Sequence Recommendation [7]** is a generation (_sequence-to-sequence_) task to generate API sequence recommendations for the latter part of the code based on the input of natural language (NL) queries and the prior source code contexts. We utilized a large-scale dataset recently released by Irsan et al. [7]. This dataset is built on a Java dataset containing 50,000 compilable Java projects provided by Martins et al. [46]. Irsan et al. [7] extracted all the methods from these projects using the Eclipse JDT parser and then extracted the method declaration, method body, and Javadoc annotation from each method. 
They further extracted the API sequence from the method body using the same parser. For the inputs and outputs, Irsan et al. defined the NL query as the first sentence of the Javadoc annotation, the code context as the method declaration and the first three lines of the method's source code, and the ground truth API sequence as the API sequence extracted from the source code after the first three lines. In addition, there are a large number of (i.e., 99,317) unique APIs available in the dataset shared by Irsan et al. [7]. **Code Revision Recommendation** is another _sequence-to-sequence_ task where the DL model takes bi-modal inputs (\(code\) and \(comments\)), and generates the \(revised\,code\). The comments are provided by the code reviewers, which explain the necessary improvements toward the originally submitted \(code\). The \(revised\,code\) is the revised version of the submitted code to address the comments. We utilized a large-scale dataset released by Tufano et al. [6]. The dataset was constructed from two sources: 1) Github, comprising open-source projects (4,901 projects) using the web application shared by Dabic et al. [47], and 2) Gerrit, consisting of code review data about 6,388 projects from six Gerrit installations. Tufano et al. extracted triplets \(\langle code,comments,revised\,code\rangle\) from both the GitHub and the Gerrit projects. Furthermore, the ground truth code revisions in this dataset are diverse as they address comments from different reviewers across a large number of projects and the reviewers could have various intentions when requesting a code revision [48].

Fig. 2: Overview of our research methodology.

**Vulnerability Type Prediction** is a multi-class classification task that aims to automatically predict the fine-grained CWE category for an input security patch (i.e., a commit), which can reduce the burden of manual analysis and categorization by human experts. Notably, the CWE categories are organized in a tree-like hierarchy, and the level of abstraction in the predicted CWE category is determined by a given depth parameter \(d\), which corresponds to the predicted CWE category at the \(d\)-th level of the CWE tree. Pan et al. [12] defined three different levels of target depths: 1, 2, and 3, where d=3 represents the most fine-grained and accurate level in their study. In our work, we focus on d=3, as it represents the most accurate CWE category for an input security patch. We used a large security patch dataset shared by Pan et al. [12], collected from the National Vulnerability Database (NVD), consisting of 6,541 commits from 1,560 GitHub open source software repositories. In addition, this dataset contains 113 unique CWE types, which covers more CWE types than its prior works [49, 50, 51]. API sequence recommendation has continuously been studied in the SE community for the past five years [52, 53, 54, 7]. Code revision recommendation, as part of code review automation, has garnered increasing attention in the past two years [55, 56, 6]. Vulnerability type prediction is an emerging task that classifies security patches into fine-grained software vulnerability types, facilitating a better understanding of security patches and supporting early remediation [12]. These three tasks cover different stages in the software development and maintenance life cycle (i.e., implementation, code review, and vulnerability analysis), and their labels are also distinct from each other: APIs, code changes, and vulnerability types.
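Because the prediction target in vulnerability type prediction depends on the depth parameter \(d\) of the CWE hierarchy, it may help to see how a raw CWE label would be resolved to its ancestor at a given depth. The following is only an illustrative sketch: the `parent_of` fragment and the chosen edges are placeholders, not the actual CWE tree or the code used by TreeVul.

```python
# Illustrative sketch: resolve a CWE label to its ancestor at depth d of the CWE tree.
# The parent_of mapping is a tiny placeholder fragment, not the real hierarchy.
parent_of = {
    "CWE-79": "CWE-74",   # example edges for illustration only
    "CWE-74": "CWE-707",
    "CWE-707": None,      # treated as a depth-1 (root-level) entry here
}

def chain_to_root(cwe):
    """Return the path [root, ..., cwe] from the root-level category down to the given CWE."""
    path, node = [], cwe
    while node is not None:
        path.append(node)
        node = parent_of.get(node)
    return list(reversed(path))

def ancestor_at_depth(cwe, d):
    """CWE category at the d-th level (d=1 is the root level); labels shallower than d are kept as-is."""
    path = chain_to_root(cwe)
    return path[min(d, len(path)) - 1]

print(ancestor_at_depth("CWE-79", d=3))  # -> "CWE-79" in this toy fragment
print(ancestor_at_depth("CWE-79", d=2))  # -> "CWE-74"
```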
Please note that we did not choose defect prediction and vulnerability prediction as the studied tasks, although they are popular. We aim to study the long-tailed distribution in the labels of SE datasets, which requires diverse labels in the datasets. Most studies [57, 58, 40, 38] on defect/vulnerability prediction aim for binary classification. ### _Studied Models_ We investigated two categories of models for each task: 1) two widely used LLMs for code, CodeBERT [19] and CodeT5 [20] fine-tuned for each task, and 2) the state-of-the-art approach for each task. CodeBERT is a bimodal pre-trained encoder model that is pre-trained on masked language modeling [59] and replaced token detection [60] tasks. CodeT5 is a pre-trained encoder-decoder model that utilizes multiple pre-training tasks [20], i.e., masked span prediction, masked identifier prediction, and identifier tagging. We adopted CodeBERT and CodeT5 in a straightforward way. If there are multiple input sources (e.g., code and texts), we concatenated the input sources and fed the concatenated inputs to the models to predict/generate outputs. For API sequence recommendation, we incorporate the state-of-the-art MulaRec proposed by Irsan et al. [7]. MulaRec uses CodeBERT [19] as the backbone model, alongside the introduction of an innovative multi-modal fusion module. This multi-modal fusion module effectively aggregates features from natural language queries and programming language contexts. For code revision recommendation, we include T5-Review proposed by Turfano et al.[6]. It is pre-trained on 1.4 million code snippets from the CodeSearchNet dataset[61] and the text from Stack Overflow [62]. For vulnerability type prediction, we employ TreeVul invented by Pan et al. [12]. TreeVul is built upon CodeBERT [19] and it explicitly leverages the relations of the CWE category with its ancestors in the hierarchy. ### _Long-tailed Distribution Analysis Tool: LTAnalyzer_ In order to facilitate the analysis of the long-tailed distribution in a given dataset, we develop a tool named LTAnalyzer which takes a labeled dataset of a task as input and outputs the extent of long-tailedness. This tool enables the quantification of long-tailedness based on the data distribution. #### Iii-D1 Analyzing Data Distribution The definition of data distribution is given by \(P(X)\), where \(X\) is the unique label set and \(P(X)\) is the data frequencies of each unique label within the entire dataset. It is straightforward to identify the unique labels in classification tasks if the labels are mutually exclusive, meaning that a data sample can only belong to a single label. **Identifying Unique Labels for Generation Tasks.** For generation models, there is no clear concept of labels as they employ a ground truth sequence to evaluate the performance while each meaningful token or a set of tokens is used to optimize the model during training. Specifically, learning-based models [63] usually generate the output sequence token-by-token instead of a whole sequence at once, indicating that individual tokens may be more appropriate labels for generation tasks. However, a single token neither establishes meaningful semantics nor represents a data instance. To investigate the distribution of the datasets, we define different labels for each generation task by considering the individual characteristics. We select the tokens that build complete or independent semantics such as a single API. 
In detail, as an API may be the finest-grained artifact for training the API sequence generation, we define an API as a label for the API recommendation task. On the other hand, the revised part of the code in the code revision recommendation plays the key in optimizing the model training. We structure the revised code in terms of token-level edits and utilize them as labels (e.g., An edit operation as \(\langle\)public, ""\(\rangle\) and \(\langle\)"", protected\(\rangle\)) rather than a whole code chunk. The following cases demonstrate more formalized label identification from generation-based approaches: **Case 1.** In API sequence recommendation, the objective is to suggest _a sequence of APIs_, where a single API is a fundamental unit. Since a single API represents relatively complete functionality, we consider _a single API_ as a unique label in this task. **Case 2.** In code revision recommendation, the goal is to suggest the _revised code_ that improves the quality of the submitted code following reviewers' comments. Although the ground truth is a sequence of code tokens, the changed code tokens (e.g., newly added or deleted) are specifically more important than those tokens remaining unchanged. To emphasize the changed tokens, we follow prior works [64, 65] to adopt _difflib_[66] to extract the token-level edits from the code changes (input code \(\rightarrow\) revised code). A token-level edit is defined as a pair of code tokens (token before, token after). For instance, if the edit action is add, then the token before will be an empty string, and the token after will be the newly added token in the revised code. In this case, we consider each unique token-level edit as one unique label. #### Iii-B2 Quantifying Long-Tailedness Accurate measurement of the long-tailedness of data is an important aspect of understanding the dataset [11]. In this work, we adopt the Gini Coefficient [29] as the metric to quantify the long-tailedness. The Gini Coefficient is recommended as an ideal metric by a systematic survey of long-tailed distribution studies in computer vision [11] because it is not affected by the number of data samples in the datasets. We use \(x_{i}\) to represent the number of data samples that belong to a label \(i\). A dataset contains \(n\) labels in total, with an average number of samples of \(\bar{x}=\frac{\sum_{i=1}^{n}x_{i}}{n}\). Then the Gini coefficient is defined as half of the relative mean absolute difference [29], formally represented as \[Gini=\frac{\sum_{i=1}^{n}\sum_{j=1}^{n}|x_{i}-x_{j}|}{2n^{2}\bar{x}}\] When the numerator (i.e., \(\sum_{i=1}^{n}\sum_{j=1}^{n}|x_{i}-x_{j}|\)) is large, it means that there are big differences among the number of samples belonging to different labels, indicating a higher long-tailedness. A Gini coefficient of 0% denotes perfect equality, where each label has the same number of data samples, while a coefficient of 100% signifies complete inequality, where one label possesses all the data samples. Therefore, the higher the Gini coefficient, the stronger the degree of long-tailed distribution is. ## IV Experiments In this section, we present experimental results to answer the research questions one by one. ### _RQ1: Extent of LT Distribution_ We study the SE datasets by (1) visualizing the distribution, and (2) quantifying the extent of long-tailedness. **Visualizing the data distribution**. Figure 3 visualizes the data distribution in all of the studied tasks. 
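As a concrete reference for the label extraction and Gini computation defined above, the two core LTAnalyzer steps can be sketched in a few lines. The snippet below is only a minimal illustration (not the released tool): it extracts token-level edit labels with Python's `difflib`, following the code-revision case, and computes the Gini coefficient over label counts.

```python
import difflib
from collections import Counter

def token_level_edits(before_tokens, after_tokens):
    """Token-level edits as (token_before, token_after) pairs; '' marks a pure insertion or deletion."""
    edits, matcher = [], difflib.SequenceMatcher(a=before_tokens, b=after_tokens)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ("delete", "replace"):
            edits += [(tok, "") for tok in before_tokens[i1:i2]]
        if op in ("insert", "replace"):
            edits += [("", tok) for tok in after_tokens[j1:j2]]
    return edits

def gini(label_counts):
    """Gini coefficient of a label-count list: 0 means perfectly balanced, 1 means one label owns all data."""
    xs = sorted(label_counts)
    n, total = len(xs), sum(xs)
    # Closed form of half the relative mean absolute difference, for counts sorted in ascending order.
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Toy usage: the revision "public -> protected" yields the two edits mentioned earlier,
# and a single dominant head label already pushes the Gini coefficient towards 1.
print(token_level_edits(["public", "void", "run"], ["protected", "void", "run"]))
counts = Counter({"head_label": 1000, "tail_a": 5, "tail_b": 5})
print(round(gini(list(counts.values())), 3))  # about 0.66 for this toy profile
```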
The labels for each task are sorted in descending order according to their frequency, i.e., \(\frac{\#\text{occurrence of a label}}{\#\text{occurrence of all labels}}\). Each label is then assigned a unique label ID ranging from 0 to the total number of unique labels minus one. As shown in Figure 3, the datasets for all three tasks exhibit long-tailed distributions. Only a small fraction of labels have high frequencies, while a broad range of other data points has low frequencies of occurrence.

Fig. 3: Visualization of code distributions in three studied tasks.

**Numbers of Head and Tail Labels**. We count the number of head and tail labels in the studied datasets. In the API sequence recommendation dataset, the top 213 frequent APIs (out of 99,317 unique APIs) account for half of the data samples. In the code revision recommendation dataset, the top 272 token-level edits (out of 42,825 edits) account for half of the samples. Meanwhile, in the vulnerability type prediction dataset, the top 6 frequent CWE types (out of 113) make up half of the samples. In addition, although the tail labels have fewer data samples each, there are many important labels in the tails. For instance, 14 out of the top 25 most dangerous CWE types, in the year of 2022 [13], belong to the tails. This observation motivates us to analyze the performance of SE models on the tail in the next research question. **Quantifying the extent of long-tailedness**. Using the Gini coefficient [29], we have quantified the degree of the long-tailedness present in each dataset, as demonstrated in Table II. A higher Gini coefficient signifies a higher long-tailedness, implying that a greater number of data samples belong to fewer head classes/labels. To gain a deeper understanding of the extent of the long-tailedness in SE datasets, we compared it with popular long-tailed computer vision (CV) data, such as ImageNet-LT [10], Places-LT [10], iNaturalist 2018 [45], and LVIS v0.5 [67] by using the same metric, Gini coefficients, in Table II. It is noteworthy that SE datasets have Gini coefficients ranging from 0.777 to 0.932, while those for CV datasets ranged from 0.524 to 0.825. The results demonstrate that on average our studied SE datasets have higher long-tailedness than the well-known long-tailed datasets. This could be due to the fact that the studied SE datasets [7, 6, 12] have a larger number of unique labels (ranging from 113 to 99,317 labels) compared to CV datasets (from 65 to 8,148) [10, 45, 67]. **Answer to RQ1**: The datasets of the three studied tasks display different degrees of long-tailed distributions. Within our investigations, some SE datasets may exhibit higher long-tailedness compared to CV datasets. ### _RQ2: Impacts on DL Models_ In this research question, we investigate the performance of SE models on the tail and head data respectively. **Experimental Setup**. In order to assess the impact of the long-tailed distribution, we began by splitting the test set equally into two parts: the head and tail **data** ((input, ground truth) pairs), based on the frequency of their labels. The split was performed as follows:
* _Head Data:_ consisted of 50% of the data samples whose labels were more frequent.
* _Tail Data:_ comprised of the remaining 50% of the data samples whose labels were less frequent.
To clarify, if there is a label containing data samples that overlap between the head and tail data, we assign the label to the group that results in the smallest difference between the two groups.
For example, assume that there is a label \(x\) that contains 10 samples that overlap between the head and tail groups. Prior to adding label \(x\) to either group, the head group contains 85 samples and the tail group contains 90 samples. In this case, we assign label \(x\) to the head group, since the difference between the two groups after the assignment will be 5, which is smaller than the difference (i.e., 15) that would result from assigning label \(x\) to the tail group. For generation-based tasks, ground truth sequences are usually used to evaluate the model performance. A ground truth sequence may contain multiple labels such as multiple APIs or token-level edits. In such cases, we computed the sum of the reciprocal frequencies of labels in ground truth sequences as the sequence-level score, i.e., \(\sum\limits_{x_{i}\in x_{set}}\frac{1}{freq(x_{i})}\), where \(freq(.)\) is the frequency of a label (e.g., an API and a token-level edit) in the entire dataset and \(x_{set}=[x_{0},x_{1},...,x_{n}]\) refers to the labels contained in the ground truth for a (input, ground truth) pair. We use the reciprocal frequencies of labels to highlight the contributions of infrequent fine-grained labels (tails) in the sequence-level score. Then, the head and tail subsets were then identified by analyzing the sequence-level scores: half of the data with higher sequence-level scores (containing more infrequent labels) is the tail data and the rest is the head data. To analyze the performance of the studied DL models, we use two widely-used metrics accuracy and Exact Match (EM) [68, 6, 56]. Specifically, for the classification task, we adopt accuracy which measures the proportion of correctly classified samples out of the total number of samples. For generation-based tasks, we adopt EM which measures the percentage of cases where the generated answer exactly matches the ground truth. Due to the limited space, we put the results in terms of other widely used metrics (e.g., BLEU score and F1 score) in the appendix of the replication package. **Experimental Results**. The experimental results for each task are depicted in Figure 4, where the blue, red, and yellow bars represent the model performance on the head, tail, and the whole test set, respectively. In the case of the API sequence recommendation task, we observe that the state-of-the-art model (MulaRec), CodeBERT, and CodeT5, perform 30.0%, 53.3%, and 32.7% better on the head data than on the tail data, respectively. Similarly, in the code revision recommendation task, T5-Review, CodeBERT, and CodeT5 exhibit 243.9%, 254.0%, and 203.7% better performance on the head data than on the tail data, respectively. For the vulnerability type prediction task, TreeVul, CodeBERT, and CodeT5 achieve 43.6%, 39.4%, and 60.9% more accurate results on the head data compared to the tail data. In our previous experiments, we divided the test set into only two large groups based on frequency or sequence-level scores. In addition, we conducted a more thorough analysis by investigating the model performance in more small groups. To achieve this, we sorted the test set in descending order with respect to label frequencies for the classification task. For generation tasks, we sorted the test set in ascending order with respect to sequence-level scores, i.e., from sequences containing few infrequent labels to those with many infrequent labels. 
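The scoring and splitting just described can be summarized in a few lines. The sketch below is illustrative only (all names are placeholders); note that in the study the label frequencies are computed over the entire dataset, whereas the toy example derives them from the shown sequences just to stay self-contained.

```python
from collections import Counter

def sequence_scores(ground_truth_sequences, freq):
    """Sequence-level score: sum of reciprocal frequencies of the labels in each ground truth.
    Higher scores indicate sequences dominated by infrequent (tail) labels."""
    return [sum(1.0 / freq[lbl] for lbl in seq) for seq in ground_truth_sequences]

def head_tail_split(test_pairs, scores):
    """Sort ascending by score and return (head_half, tail_half) of (input, ground truth) pairs."""
    order = sorted(range(len(test_pairs)), key=lambda i: scores[i])
    mid = len(order) // 2
    return [test_pairs[i] for i in order[:mid]], [test_pairs[i] for i in order[mid:]]

# Toy usage with API-like labels; frequencies are taken over these sequences only for brevity.
seqs = [["Reader.read", "System.arraycopy"], ["String.copyValueOf"], ["Reader.read"]]
freq = Counter(lbl for seq in seqs for lbl in seq)
pairs = list(zip(["q1", "q2", "q3"], seqs))
head, tail = head_tail_split(pairs, sequence_scores(seqs, freq))
```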
Then, we divided the sorted test set into 10 groups, with each group consisting of an equal number (10% of the data) of data samples. Afterward, we separately compute the evaluation metrics for the predictions made by a SE model on each of the ten groups of test data. The line plot in Figure 5 illustrates the results of the experiment, with the red, green, and pink lines representing the state-of-the-art, CodeT5, and CodeBERT models, respectively. In the API sequence recommendation task, we found that for groups with the highest frequency (first 10% of data), most models achieved EM scores higher than 65%. However, for groups with the lowest frequency (last 10% of data), the EM scores ranged from 12-25%. This indicates the model performance drops when the class/label frequencies decrease. Moving on to the code revision recommendation task, we observed about 3-7% EM scores in the group with the lowest frequent labels (last 10% of data). This is poor compared to the 23-42% EM scores obtained for the first 10% of data. Regarding the task of vulnerability type prediction, we observe that the impact of long-tailedness was less than that in the preceding two tasks, as evidenced by the absence of a consistent decline in model performance as the class/label frequencies decreased. Instead, we uncover a distinct staircase pattern in the graph, where the results were better (80-98% in accuracy) for the more frequent data labels (first 40% of the data), while the performance significantly dropped (31-73% in accuracy) for less frequent data labels (last 60% of the data).

\begin{table} \begin{tabular}{c|c|c} \hline **Domains** & **Datasets** & **Gini Coef.** \\ \hline \multirow{3}{*}{SE} & API Sequence Rec. [7] & 0.907 \\ & Code Revision Rec. [6] & 0.932 \\ & Vulnerability Type Pred. [12] & 0.777 \\ \hline \multirow{4}{*}{CV} & ImageNet-LT [10] & 0.524 \\ & Places-LT [10] & 0.671 \\ & iNaturalist 2018 [45] & 0.620 \\ & LVIS v0.5 [67] & 0.825 \\ \hline \end{tabular} \end{table} TABLE II: Long-tailedness of SE datasets and CV datasets

**Qualitative Analysis**. To illustrate the impact of long-tailed distributions, we present one case for each task in Figure 6. Figure 6(a) illustrates the input (text annotation and code context), ground truth API sequences, and API sequences generated by MulaRec in the API sequence recommendation. MulaRec successfully generated the first two APIs (_Reader.read_ and _System.arraycopy_) that are relatively frequent with 426 and 6,205 data samples in the dataset, respectively. However, it failed to generate the infrequent API _String.copyValueOf_ which has only 47 data samples and belongs to the tail. Instead, MulaRec generated a wrong API (_String.<init>_) which contains 5,046 samples. Figure 6(b) presents the input, the ground truth revised code, and the generated code by T5-Review. The ground truth aims to modify the access level of a method from public to protected with two token-level edits: deleting public and inserting protected in the corresponding position. T5-Review succeeded in deleting public, which has 565 data samples. However, it failed to insert protected due to limited data instances, i.e., only 79 belong to the corresponding label. Notably, the token-level edit "insert protected" is one of the tail labels. In the case of vulnerability type prediction, Figure 6(c) exhibits the predictions of TreeVul on patches of type CWE-661, which make up only 1.1% of all patches in the dataset and belongs to one tail label in RQ1.
We observe that TreeVul achieves an accuracy of 42.8% on this type, which is significantly lower than the accuracy on the entire dataset (73.1%). Moreover, TreeVul wrongly classifies 14.3% of patches of CWE-661 into CWE-384, CWE-119, CWE-200, and CWE-441, respectively. Those erroneous predictions (i.e., CWE-384, CWE-119, CWE-200, and CWE-441) are not relevant to the ground truth CWE-611, indicating that the model struggles to learn the features of this type because of the long-tailed distribution. **Answer to RQ2**: The experimental results demonstrated that the long-tailed distribution significantly affects the effectiveness of DL models. While these models perform well on head data with frequent data instances, they face difficulties in tail data with infrequent data instances. Across the three tasks, we observed that the performance difference of the models on the head and tail data ranged from 30.0% to 254.0%, highlighting the challenge that DL models face in providing accurate recommendations for tail data. ### _RQ3: Effectiveness of LT Solutions_ To address the issue of long-tailed distribution, various studies have proposed solutions, e.g., Focal Loss [30] and LTR [31] techniques, to improve model performance on tail data. While these solutions were not initially developed for SE datasets, it is still valuable to examine their efficacy on long-tailed SE datasets. **A Specific Challenge in Learning Tail Data.** Researchers [11] have proposed a theoretical explanation for the difficulty in learning tail data. The head classes have a substantially higher number of training samples, which results in the impact of the tail classes being overshadowed by the head classes during model parameter updates. As a result, the model is trained in a direction that understands heads well, but not tails.

Fig. 4: Model performances on different subsets of the test datasets in three studied tasks.

Fig. 5: Model performances on 10 small groups of the test sets of three studied tasks.

**Solutions for Long-tailed Distribution.** To address the negative impacts of long-tailed distributions, researchers designed various solutions that involve assigning higher weights to tail samples and lower weights to head samples. These approaches aim to prevent DL models from being dominated by the head data. In this research question, we investigate the effectiveness of two different solutions for addressing the long-tailed distribution: the widely used Focal Loss [30] and the state-of-the-art solution, namely LTR [31] (a small sketch of both re-weighting ideas follows the list below).
* _Focal Loss (FL)_ [30] is a classical solution for dealing with long-tailed distributions by introducing an adjustment factor to the standard cross-entropy loss to focus training on difficult samples (i.e., the samples where DL models have low confidence in their predictions). FL increases the loss of difficult samples that a model has low-confidence predictions on. As difficult samples are mostly composed of the tail data for the long-tailed data, FL improves the learning of tail data [11]. In addition, FL is designed for classification tasks.
* _Long-Tailed Recognition via Weight Balancing (LTR)_ [31] achieved state-of-the-art results in computer vision tasks. It adopts a two-stage training paradigm: (1) learning features using the standard cross-entropy loss, and (2) further tuning classifiers through class-balanced loss [69] with weight decay [70] and the MaxNorm constraint [71], which assigns higher weights to the tail.
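As referenced above, the re-weighting ideas behind the two solutions can be sketched compactly. The PyTorch-style snippet below is illustrative rather than the exact configurations used in [30, 31]: a multi-class focal loss and the class-balanced per-class weights that LTR's second stage builds on.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: cross-entropy scaled by (1 - p_t)^gamma, so confidently
    classified (mostly head) samples contribute less and hard/tail samples contribute more."""
    log_probs = F.log_softmax(logits, dim=-1)
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t of the true class
    pt = log_pt.exp()
    loss = -((1.0 - pt) ** gamma) * log_pt
    if alpha is not None:              # optional per-class weights, e.g. the class-balanced ones below
        loss = loss * alpha[targets]
    return loss.mean()

def class_balanced_weights(samples_per_class, beta=0.999):
    """Class-balanced re-weighting [69]: weight of class c proportional to (1 - beta) / (1 - beta^n_c)."""
    counts = torch.as_tensor(samples_per_class, dtype=torch.float)
    weights = (1.0 - beta) / (1.0 - torch.pow(beta, counts))
    return weights * len(counts) / weights.sum()   # normalize so the weights sum to the class count

# Toy usage: four classes with a long-tailed count profile.
logits, targets = torch.randn(8, 4), torch.randint(0, 4, (8,))
alpha = class_balanced_weights([5000, 200, 20, 5])
print(focal_loss(logits, targets, gamma=2.0, alpha=alpha))
```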
Although we investigate one classification task and two generation tasks, the mitigation techniques for long-tailed distribution are for classification tasks. To the best of our knowledge, there is no long-tailed distribution mitigation solution for generation tasks, yet. Within the context of our study, a significant differentiation between generation tasks and classification tasks is that in generation tasks, the samples have multiple fine-grained labels (APIs/edits), while in classification tasks, the samples have only one label. On the one hand, we observed that the FL is flexible and does not require a sample with a single label [30]. Therefore, we also investigated its effectiveness in generation tasks. On the other hand, LTR uses the class-balanced loss, which assumes that there is only a single label for each sample [69]. As a result, we only evaluated its effectiveness in the classification task in this study. **Experimental Results.** The aforementioned mitigation solutions are strategies that re-weight the standard cross-entropy loss based on the label frequencies. Therefore, we apply them to the studied DL-based SE approaches. Table III displays the performance of the SE models after integrating the potential solutions. We highlight the best performance in each split (head/tail/all) for each task by using boldface numbers. The results indicate that FL consistently diminishes the effectiveness of SE models in generation tasks, i.e., API sequence recommendation and code revision recommendation, regardless of the split. Specifically, the integration of FL results in a 1.5-9.2% and 0.9-4.8% absolute percentage reduction in EM for API sequence recommendation and code revision recommendation, respectively. Regarding the vulnerability type prediction, we found that FL and LTR resulted in enhanced accuracy on the tail for most cases. Specifically, FL yielded a 2.3% and 2.7% absolute percentage increase in tail accuracy for CodeBERT and CodeT5, respectively, while it caused a 1.2% decrease in tail accuracy for TreeVul. In contrast, LTR consistently improved the performance on the tail for all models by an absolute percentage of 0.6-3.8%. Interestingly, we observed that FL sacrifices the performance on the head to achieve a potential improvement on the tail while LTR can maintain the head performance for CodeBERT and TreeVul. In summary, the studied long-tailed solutions could improve the tail performance of the vulnerability type prediction by 0.6-3.8%. Nevertheless, these solutions are not guaranteed to produce consistent enhancements and only have a marginal effect on the overall results of the complete test set, ranging from 0.3% to 1.4%. **Properties of SE datasets**. Based on the experimental results, we found that the mitigation solutions are not effective enough for software engineering datasets. It possibly stems from the fact that these solutions treat each label as completely distinct, ignoring the relationships among them. In computer vision datasets, labels such as "plane" and "cart" in image classification are mostly independent and it is not common to analyze the label relationships [10]. In contrast, software engineering datasets contain rich information regarding relationships among different labels.
For instance, a specific Common Weakness Enumeration (CWE) type can have several sibling CWE types that share the same parent type in the CWE type tree and describe similar vulnerabilities [12]. An API usually is used as a part of combinations with other APIs [7]. Token-level edits may occur simultaneously as shown in Figure 6(b). Moreover, the relationship information among different labels in SE datasets can be obtained from historical data (e.g., API usages or code changes) or expert knowledge-based systems maintained by the community (e.g., the CWE System). **Answer to RQ3**: The widely-used solutions for addressing long-tailed distributions have proven to be relatively ineffective in SE datasets, resulting in only marginal improvements ranging from 0.3% to 1.4% across the complete datasets. This challenge highlights the difficulty of effectively learning the tail part of SE datasets. Thus, we encourage future research to explore more potential solutions to this issue. ### _RQ4: Tail Data Detection_ Identifying tail data is potentially useful in warning users about the reliability of generated predictions of learning-based SE tools. Therefore, this research question seeks to explore the effectiveness of identifying tail data solely based on input data. **Experimental Setup**. We conduct a further investigation by fine-tuning the CodeBERT model to perform binary classification, i.e., predicting the probability of input data belonging to either the tail or the head. Specifically, we consider the tail as the positive class and the head as the negative class. We train CodeBERT to minimize cross-entropy loss. **Results**. As shown in Table IV, the fine-tuned CodeBERT demonstrated an ability to accurately identify the tail data in the datasets of API sequence recommendation and vulnerability type prediction, achieving accuracy scores of 84.4% and 81.7%, respectively. However, its performance on the code revision recommendation dataset was substantially lower with an accuracy of 66.7%. **Potential Application**. Recent research has highlighted that incorrect prediction outcomes (i.e., false positives) can result in developers' discontent and reduced trust in automated SE approaches [72, 73, 74]. We are aware that SE models show particularly lower effectiveness on tail data as addressed in RQ2 and RQ3. One potential utility of the tail data detection tool is to either explicitly advise users to disregard predictions on the tail data or compel automated SE tools to output a response like "_I am uncertain about the answer due to lack of relevant knowledge_." Such an application might enhance user satisfaction by aiding in the identification and filtering out of prediction results regarding tail data that are more prone to be inaccurate. **Answer to RQ4**: Experimental results demonstrated accurate identification (84.4% and 81.7% in accuracy) of the tail in API sequence recommendation and vulnerability type prediction. However, our approach achieves a lower accuracy of 66.7% in code revision recommendations. This suggests that the difficulty of identifying the tail varies depending on a task and the associated dataset. ## V Discussion ### _How Do Traditional Machine Learning Models Perform?_ In addition to the previously studied DL models, there exists another category of widely used methods known as traditional Machine Learning (ML) models. 
In our evaluation, we take popular traditional machine learning models such as Logistic Regression (LR) [75], Decision Tree (DT) [76], Support Vector Machines (SVM) [77], and Random Forest (RF) [78] into account.

\begin{table} \begin{tabular}{l|c|c|c} \hline **Datasets** & **API Seq. Rec.** & **Revision Rec.** & **Vulner. Type Pred.** \\ \hline **Accuracy** & 84.4 & 66.7 & 81.7 \\ \hline **F1** & 84.0 & 66.4 & 81.5 \\ \hline \end{tabular} \end{table} TABLE IV: Results of identifying the tail

However, it is important to note that traditional ML models are solely designed for classification tasks. Therefore, we solely perform experiments on the vulnerability type prediction task. Specifically, we adopt the widely used Bag-of-Words (also called Bag-of-Tokens) embedding [79] to turn the input data into vectors and feed the vectors into the traditional ML models. Table V illustrates the performances of traditional ML models (numbers in grey) on the head, tail, and whole data in accuracy. We also included the performance of the state-of-the-art approach, TreeVul, in this table. The results show that, similar to DL, traditional ML models perform better on the head data compared to the tail data. ### _Implications_ **Researchers need to be cautious about using average results.** In the field of SE, it is common practice to report average results, such as average accuracy, on all samples in the test set. However, this practice inadvertently conceals the model's shortcomings on the tail, as it performs exceptionally well on the head data but poorly on the tail data. For example, in vulnerability type prediction, where there are 113 categories, TreeVul achieves an accuracy of 60.6% on the majority of the types (107 tail types), but due to the high accuracy of the 6 head types (87.0%), the average result is 73.1%. However, TreeVul cannot reach an accuracy of 73.1% on most of the CWE types (107 tail types) in this task. To gain a more comprehensive understanding of both the strengths and limitations of an approach, we recommend that researchers report both the average results and the results on heads and tails separately if long-tailed distributions exist in their datasets. **A customized method to effectively learn the tail in software engineering data is needed. Our future work will take the rich label relationships into account.** Our study sheds light on the ineffectiveness of adopting computer vision solutions for long-tailed distribution problems in SE datasets. One possible reason is that computer vision solutions do not account for the rich label relationships in SE data. These relationships can be valuable resources for learning the features of tail labels. A potential approach is to group similar tail labels into clusters, referred to as abstracted classes, which could provide more data samples and improve the learning of effective features for the abstracted class that consists of tail labels. ### _Threats to Validity_ Our findings are subject to the studied models and datasets. Therefore, they may not be generalizable to all software engineering tasks. To mitigate this limitation, we selected three distinct tasks to cover two representative categories of software engineering tasks, i.e., classification and generation tasks. The findings of our study for RQ2 are based on the even division of test data into head and tail data, which may not be the optimal approach for uncovering insights into long-tailed code distributions.
To address this limitation, we also split the test data into 10 small groups. This splitting helped to mitigate the potential bias and confirmed the consistency of our findings. Our study primarily focused on DL-based approaches. The generalizability of our findings to traditional machine learning models, such as Logistic Regression, is uncertain. To address this limitation, we conducted experiments using four traditional machine learning models. Finally, we would like to note that we made our replication package 2 publicly available for future studies to repeat our experiments and extend our work. Footnote 2: [https://github.com/soarsmu/LT4Code](https://github.com/soarsmu/LT4Code) ## VI Conclusion and Future Work We presented an empirical study on the long-tailed distribution of SE data. Specifically, we investigated this phenomenon on three distinct SE downstream tasks, namely API sequence recommendation, code revision recommendation, and vulnerability type prediction. Our study revealed that the long-tailed distribution has a significant impact on the performance of deep learning-based SE approaches. Specifically, deep learning models performed 30.0% to 254.0% worse on tail data compared to head data. In this study, we explored the effectiveness of solutions designed to mitigate the negative impact of the long-tailed data distribution in images. However, we found that these solutions are not effective enough in addressing long-tailed distribution in SE data, and we call for future work to address this issue. Moving forward, we plan to extend our investigation to more SE tasks and design a novel approach to addressing long-tailed distribution in code by considering the inner relationships among labels. Moreover, we are also interested in utilizing recent popular generative AI models like ChatGPT to generate samples for the tail classes, thereby mitigating the issue of long-tailed distributions in some SE datasets. **Acknowledgement.** This research / project is supported by the National Research Foundation, Singapore, under its Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
Learning-based techniques, especially large language models (LLMs) for code, have gained great popularity across various software engineering (SE) tasks. However, many existing studies focus on designing better learning-based models and pay little attention to the characteristics of the data. Learning-based models, and popular LLMs for code in particular, depend heavily on data, and the characteristics of the data (e.g., the data distribution) can influence their behavior. We investigated the distribution of SE data and found that such data typically follow a skewed distribution, that is, a long-tailed distribution: a small number of classes accumulate a large share of the samples, while a large number of classes have very few samples. We studied three distinct SE tasks and analyzed the impact of long-tailed distributions on the performance of LLMs for code. According to the experimental results, long-tailed distributions
2303.00035
Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy
This work considers the problem of Distributed Mean Estimation (DME) over networks with intermittent connectivity, where the goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server. To mitigate the impact of intermittent links, nodes can collaborate with their neighbors to compute local consensus which they forward to the central server. In such a setup, the communications between any pair of nodes must satisfy local differential privacy constraints. We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes and, subsequently, propose a novel differentially private collaborative algorithm for DME to achieve the optimal tradeoff. Finally, we present numerical simulations to substantiate our theoretical findings.
Rajarshi Saha, Mohamed Seif, Michal Yemini, Andrea J. Goldsmith, H. Vincent Poor
2023-02-28T19:17:03
http://arxiv.org/abs/2303.00035v1
# Collaborative Mean Estimation over Intermittently Connected Networks with Peer-To-Peer Privacy ###### Abstract This work considers the problem of _Distributed Mean Estimation_ (DME) over networks with intermittent connectivity, where the goal is to learn a global statistic over the data samples localized across distributed nodes with the help of a central server. To mitigate the impact of intermittent links, nodes can collaborate with their neighbors to compute local consensus which they forward to the central server. In such a setup, the communications between any pair of nodes must satisfy _local differential privacy_ constraints. We study the tradeoff between collaborative relaying and privacy leakage due to the additional data sharing among nodes and, subsequently, propose a novel differentially private collaborative algorithm for DME to achieve the optimal tradeoff. Finally, we present numerical simulations to substantiate our theoretical findings. This work was supported by the AFOSR award #002484665, a Huawei Intelligent Spectrum grant, and NSF grants CCF-1908308 & CNS-2128448. ## I Introduction Distributed Mean Estimation (DME) is a fundamental statistical problem that arises in several applications, such as model aggregation in federated learning [1], distributed K-means clustering [2], distributed power iteration [3], etc. DME presents several practical challenges, which prior research [4, 5, 6, 7, 8] has considered, including the problem of straggler nodes, where nodes cannot send their data to the parameter server (PS). Typically, there are two types of stragglers: \((\mathrm{i})\)_computation stragglers_, in which nodes cannot finish their local computation within a deadline, and \((\mathrm{ii})\)_communication stragglers_, in which nodes cannot transmit their updates due to communication blockage [9, 10, 11, 12, 13, 14]. The problem of communication stragglers can be solved by relaying the updates/data to the PS via neighboring nodes. This approach was proposed and analyzed in [15, 16, 17], where it was shown that the proposed collaborative relaying scheme can be optimized to reduce the expected distance to optimality, both for DME [15] and federated learning [16, 17]. While the works [15, 16, 17] show that collaborative relaying reduces the expected distance to optimality, exchanging the individual data across nodes incurs privacy leakage caused by the additional estimates that are shared among the nodes. Nonetheless, this potential breach of privacy has not been addressed in the aforementioned works. To mitigate the privacy leakage in DME, we require a rigorous privacy notion. Within the context of distributed learning, _local differential privacy_ (LDP) [18] has been adopted as a gold standard notion of privacy, in which a user can perturb and disclose a sanitized version of its data to an _untrusted_ server. LDP ensures that the statistics of the user's output observed by adversaries are indistinguishable regardless of the realization of any input data. In this paper, we focus on the _node-level_ LDP where the neighboring nodes, as well as any eavesdropper that can observe the local node-node transmissions during collaborations, cannot infer the realization of the user's data. There has been extensive research into the design of distributed learning algorithms that are both communication efficient and private (see [19] for a comprehensive survey and references therein). 
It is worth noting that LDP requires a significant amount of perturbation noise to ensure reasonable privacy guarantees. Nonetheless, the amount of perturbation noise can be significantly reduced by considering the intermittent connectivity of nodes in the learning process [20]. The intermittent connectivity in DME amplifies the privacy guarantees; it provides a boosted level of anonymity due to partial communication with the server. Various random node participation schemes have been proposed to further improve the utility-privacy tradeoff in distributed learning, such as Poisson sampling [21], importance sampling [22, 23], and sampling with/without replacement [20]. In addition, Balle et al. investigated in [24], the privacy amplification in federated learning via random check-ins and showed that the privacy leakage scales as \(O(1/\sqrt{n})\), where \(n\) is the number of nodes. In other words, random node participation reduces the amount of noise required to achieve the same levels of privacy that are achieved without sampling. So far, works in the privacy literature, such as [18, 19, 20, 21, 22, 23, 24], have not considered intermittent connectivity along with collaborative relaying, where nodes share their local updates to mitigate the randomness in network connectivity [15, 16, 17]. Thus, this paper aims to close this theoretical gap. To this end, we first show that there exists a tradeoff between collaborative relaying and privacy leakage due to data sharing among nodes for DME under intermittent connectivity assumption. We introduce our system model and proposed algorithm in SSII, followed by its utility (MSE) and privacy analyses in SSIII and SSIV respectively. We quantify this tradeoff by formulating it as an optimization problem and solve it approximately due to its non-convexity. Finally, we demonstrate the efficacy of our private collaborative algorithm through numerical simulations. ## II System Model for Private Collaboration Consider a distributed system with \(n\) nodes, each having a vector \(\mathbf{x}_{i}\in\mathbb{R}^{d}\), \(\|\mathbf{x}_{i}\|_{2}\leq\mathds{R}\) for some known \(\mathds{R}>0\). The nodes communicate with a parameter server (PS), as well as with each other over intermittent links with the goal of estimating their mean, \(\mathbf{\overline{x}}\triangleq\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\) at the PS (Fig. 1). For any estimate \(\widehat{\mathbf{x}}\) of the mean, the evaluation metric for any estimate is the mean-square error (MSE), given by \(\mathcal{E}\triangleq\mathbb{E}\|\widehat{\mathbf{x}}-\mathbf{\overline{x}}\|_ {2}^{2}\). ### _Communication Model_ As shown in Fig. 1, node \(i\) can communicate with the PS with a probability \(p_{i}\), with the link modeled using a Bernoulli random variable \(\tau_{i}\sim\mathrm{Ber}(p_{i})\). Similarly, node \(i\) can communicate with another node \(j\) with probability \(p_{ij}\), i.e., \(\tau_{ij}\sim\mathrm{Ber}(p_{ij})\). The links between different node pairs are assumed to be statistically independent, i.e., \(\tau_{i}\perp\tau_{j}\) for \(i\neq j\), \(\tau_{ij}\perp\tau_{ml}\) for \((i,j)\neq(m,l)\), \((j,i)\neq(m,l)\), and \(\tau_{ij}\perp\tau_{l}\) for \(i,j,l\in[n]\). The correlation due to channel reciprocity between a pair of nodes \(i,j\in[n]\) is denoted by \(\mathrm{E}(\tau_{ij})\equiv\mathbb{E}[\tau_{ij}\tau_{jl}]\). 
We assume that \(\mathrm{E}_{\{i,j\}}\geq p_{ij}p_{ji}\), i.e., \(\mathbb{P}(\tau_{ij}=1|\tau_{ji}=1)\geq\mathbb{P}(\tau_{ij}=1)\) Furthermore, \(p_{ii}=1\;\forall\;i\in[n]\), and if node \(i\) can never transmit to \(j\), we set \(p_{ij}=0\). We denote \(\mathbf{p}\equiv(p_{1},\ldots,p_{n})\) and \(\mathbf{P}\equiv(p_{ij})_{i,j\in[n]}\in[0,1]^{n\times n}\). ### _Privacy Model_ The nodes are assumed to be honest but curious. They are _honest_ because they faithfully compute the aggregate of the signals received, however, they are _curious_ as they might be interested in learning sensitive information about nodes. Each node uses a local additive noise mechanism to ensure the privacy of its transmissions to neighboring nodes. We consider local privacy constraints, wherein node \(i\) trusts another node \(j\) to a certain extent and hence, randomizes its own data accordingly when sharing with node \(j\) using a synthetic Gaussian noise (see [18]) to respect the privacy constraint while maintaining utility. We present a refresher on differential privacy and Gaussian mechanism in App. A. ### _Private Collaborative Relaying for Mean Estimation_ We now introduce our algorithm, **PriCER**: **Private C**ollaborative **E**stimation via **R**elaying. PriCER is a two-stage semi-decentralized algorithm for estimating the mean. In the first stage, each node \(j\in[n]\) sends a scaled and noise-added version of its data to a neighboring node \(i\in[n]\) over the intermittent link \(\tau_{ji}\). The transmitted signal is given by, \[\widetilde{\mathbf{x}}_{ji}=\tau_{ji}(\alpha_{ji}\mathbf{x}_{j}+\mathbf{n}_{ji}) \tag{1}\] Here, \(\alpha_{ji}\geq 0\) is the weight used by node \(j\) while sending to node \(i\), and \(\mathbf{n}_{ji}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{d})\) is the multivariate Gaussian privacy noise added by node \(j\). Here, \(\sigma^{2}\) is the variance of each coordinate, and \(\mathbf{I}_{d}\in\mathbb{R}^{d\times d}\) is the identity matrix. We denote the weight matrix by \(\mathbf{A}\equiv(\alpha_{ij})_{i,j\in[n]}\). Consequently, node \(i\) computes the local aggregate of all received signals as, \[\widetilde{\mathbf{x}}_{i}=\sum_{j\in[n]}\tau_{ji}(\alpha_{ji}\mathbf{x}_{j}+ \mathbf{n}_{ji}). \tag{2}\] We quantify our privacy guarantees using the well-established notion of differential privacy [18]. By observing \(\widetilde{\mathbf{x}}_{ji}\), node \(i\) should not be able to distinguish between the events when node \(j\) contains the data \(\mathbf{x}_{j}\) versus when it contains some other data \(\mathbf{x}_{j}^{\prime}\). In other words, we are interested in protecting the local data of node \(j\) from a (potentially untrustworthy) neighboring node \(i\). We assume that the privacy noise added by different nodes are uncorrelated, i.e., \(\mathbb{E}[\mathbf{n}_{il}^{\top}\mathbf{n}_{jm}]=0\) for all \(i,j,l,m\in[n]\) as long as \(i,j,l,m\) are not all equal. In the second stage, each node \(i\) transmits \(\widetilde{\mathbf{x}}_{i}\) to the PS over the intermittent link \(\tau_{i}\), and the PS computes the global estimate. The pseudocode for PriCER is given in Algs. 
1 and 2 ``` 0: Non-negative weight matrix \(\mathbf{A}\) 0:\(\widetilde{\mathbf{x}}_{i}\) for all \(i\in[n]\) 1:for each \(i\in[n]\)do 2: Locally generate \(\mathbf{x}_{i}\) 3: Transmit \(\widetilde{\mathbf{x}}_{ij}=\alpha_{ij}\mathbf{x}_{i}+\mathbf{n}_{ij}\) to nodes \(j\in[n]:j\neq i\) 4: Receive \(\widetilde{\mathbf{x}}_{ji}=\tau_{ji}(\alpha_{ji}\mathbf{x}_{j}+\mathbf{n}_{ji})\) from \(j\in[n]:j\neq i\) 5: Set \(\widetilde{\mathbf{x}}_{ii}=\alpha_{ii}\mathbf{x}_{i}+\mathbf{n}_{ii}\) 6: Locally aggregate available signals: \(\widetilde{\mathbf{x}}_{i}=\sum_{j\in n}\widetilde{\mathbf{x}}_{ji}\) 7: Transmit \(\widetilde{\mathbf{x}}_{i}\) to the PS 8:endfor ``` **Algorithm 1**PriCER-Stage 1 for local aggregation ## III Mean Squared Error Analysis The goal of PriCER is to obtain an unbiased estimate of \(\overline{\mathbf{x}}\) at the PS. Since each node sends its data to all other neighboring nodes, the PS receives multiple copies of the same data. Lemma III.1, below gives a sufficient condition to ensure unbiasedness. This is the same condition as [15, Lemma 3.1], and holds true even for PriCER. **Lemma III.1**: _Let the weights \(\{\alpha_{ij}\}_{i,j\in[n]}\) satisfy_ \[\sum_{j\in[n]}p_{j}p_{ij}\alpha_{ij}=1, \tag{3}\] _for every \(i\in[n]\). Then, \(\mathbb{E}\big{[}\widehat{\mathbf{x}}\;|\;\{\mathbf{x}_{i}\}_{i\in[n]}\big{]}= \overline{\mathbf{x}}=\frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_{i}\)._ We prove this lemma in App. B. Fig. 1: An intermittently connected distributed learning network. Blue and black dotted lines denote intermittent node-PS and node-node connections. Communication between any two nodes must satisfy local differential privacy constraints. Under this unbiasedness condition, we derive a worst-case upper bound for the MSE in Thm. 3.2. **Theorem 3.2**: _Given \(\mathbf{p},\mathbf{P}\) and \(\mathbf{A}\) such that (3) holds, and \(\mathbf{n}_{ij}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{d})\)\(\forall\,i,j\in[n]\), the MSE with PriCER satisfies,_ \[\mathbb{E}\|\widehat{\mathbf{x}}-\mathbf{x}\|_{2}^{2}\leq\mathrm{R}^{2} \sigma_{\mathrm{tv}}^{2}(\mathbf{p},\mathbf{P},\mathbf{A})+\sigma_{\mathrm{pr }}^{2}(\mathbf{p},\mathbf{P},\sigma), \tag{4}\] _where \(\sigma_{\mathrm{tv}}^{2}(\mathbf{p},\mathbf{P},\mathbf{A})\) is an upper bound on the variance induced by the stochasticity due to intermittent topology given by,_ \[\sigma_{\mathrm{tv}}^{2}(\mathbf{p},\mathbf{P},\mathbf{A}) \triangleq\frac{1}{n^{2}}\left[\sum_{i,j,l\in[n]}p_{j}(1-p_{j})p_{ij}p_{ij} \alpha_{ij}\alpha_{lj}\right.\] \[\left.+\!\!\!\!\sum_{i,j\in[n]}\!\!\!\!p_{ij}p_{j}(1-p_{ij})\alpha _{ij}^{2}+\!\!\!\!\sum_{i,l\in[n]}\!\!\!\!p_{i}p_{l}(\mathrm{E}_{\{i,l\}}\!-\!p _{il}p_{li})\alpha_{li}\alpha_{il}\right],\] _and \(\sigma_{\mathrm{pr}}^{2}(\mathbf{p},\mathbf{P},\sigma)\) is the variance due to the privacy given by,_ \[\sigma_{\mathrm{pr}}^{2}(\mathbf{p},\mathbf{P},\sigma)\triangleq\frac{1}{n^{ 2}}\sum_{i,j\in[n]}p_{j}p_{ij}\sigma^{2}d. \tag{5}\] _Thm. 3.2 is derived in App. C. From (4), we see that \(\sigma_{\mathrm{pr}}^{2}(\mathbf{p},\mathbf{P},\sigma)\) is the price of privacy. For a non-private setting, i.e., \(\sigma=0\), the privacy induced variance \(\sigma_{\mathrm{pr}}^{2}(\mathbf{p},\mathbf{P},\sigma)=0\), and Thm. 3.2 simplifies to [15, Thm. 3.2]. 
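To fix ideas, the two stages can be simulated directly from (1)-(2). The NumPy sketch below is illustrative only; in particular, the server-side rule is an assumption here (the PS simply averages the local aggregates it receives, which is consistent with Lemma III.1), since Algorithm 2 is not reproduced in this excerpt.

```python
import numpy as np

def pricer_round(X, p, P, A, sigma, rng):
    """One PriCER round.  X: (n, d) local vectors, p: node-PS link probabilities,
    P: node-node link probabilities (P[i, j] = p_ij), A: weights (A[i, j] = alpha_ij)."""
    n, d = X.shape
    tau_ps = rng.random(n) < p                 # tau_i ~ Ber(p_i)
    tau = rng.random((n, n)) < P               # tau_ij ~ Ber(p_ij), with p_ii = 1
    np.fill_diagonal(tau, True)
    local = np.zeros((n, d))
    for i in range(n):                         # stage 1, eq. (2): node i aggregates what it receives
        for j in range(n):
            if tau[j, i]:
                local[i] += A[j, i] * X[j] + sigma * rng.standard_normal(d)
    # Stage 2 (assumed aggregation rule): the PS averages the received local aggregates.
    return local[tau_ps].sum(axis=0) / n

rng = np.random.default_rng(0)
n, d, R, sigma = 10, 5, 1.0, 0.1
X = rng.standard_normal((n, d))
X *= np.minimum(1.0, R / np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x_i||_2 <= R
p = np.full(n, 0.8)
P = np.full((n, n), 0.5); np.fill_diagonal(P, 1.0)
A = np.ones((n, n)) / (P @ p)[:, None]         # one simple choice satisfying condition (3)
x_bar = X.mean(axis=0)
mse = np.mean([np.sum((pricer_round(X, p, P, A, sigma, rng) - x_bar) ** 2) for _ in range(2000)])
print(mse)
```

Averaging the squared error over many rounds, as in the last line, gives a Monte-Carlo estimate of the MSE that is upper-bounded in Thm. 3.2.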
In the following section, we introduce our privacy guarantee and the corresponding constraints leading to a choice of weight matrix \(\mathbf{A}\) for the optimal Utility (MSE) - Privacy tradeoff of PriCER._ ## IV Privacy Analysis PriCER yields privacy guarantees because of two reasons: (i) the local noise added at each node, and (ii) the intermittent nature of the connections. We consider the local differential privacy when any eavesdropper (possibly including the receiving node) can observe the transmission from node \(i\) to node \(j\) in stage-1 of PriCER. Let us denote the local dataset of node \(i\) as \(\mathcal{D}_{i}\). In DME, \(\mathcal{D}_{i}\) is a singleton set and by observing the transmission from node \(i\) to node \(j\), the eavesdropper should not be able to differentiate between the events \(\mathbf{x}_{i}\in\mathcal{D}_{i}\) and \(\mathbf{x}_{i}^{\prime}\in\mathcal{D}_{i}\), where \(\mathbf{x}_{i}^{\prime}\neq\mathbf{x}_{i}\). The following Thm. 4.1 (derived in App. D) formally states this guarantee. **Theorem 4.1**: _Given \(\mathbf{n}_{ij}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{d})\), \(\mathbf{x}_{i},\mathbf{x}_{i}^{\prime}\in\mathbb{R}^{d}\) with \(\|\mathbf{x}_{i}\|_{2},\|\mathbf{x}_{i}^{\prime}\|_{2}\leq\mathrm{R}\), and any \(\delta_{ij}\in(0,1]\), for pairs \((\epsilon_{ij},p_{ij}\delta_{ij})\) satisfying_ \[\epsilon_{ij}=\left\{\begin{array}{ccc}\left[2\ln\!\left(\frac{1.25}{ \delta_{ij}}\right)\right]^{\frac{1}{2}}\frac{2\alpha_{ij}\mathrm{R}}{\sigma}& \text{ if }\;p_{ij}>0,&\text{ and,}\\ 0&\text{ if }\;p_{ij}=0,&\\ \end{array}\right. \tag{6}\] _the transmitted signal from node \(i\) to node \(j\), \(\widetilde{\mathbf{x}}_{ij}=\tau_{ij}(\alpha_{ij}\mathbf{x}_{i}+\mathbf{n}_{ ij})\) is \((\epsilon_{ij},p_{ij}\delta_{ij})\)-differentially private, i.e., it satisfies_ \[\Pr(\widetilde{\mathbf{x}}_{ij}\!\in\!\mathcal{S}|\mathbf{x}_{i} \in\mathcal{D}_{i})\leq e^{\epsilon_{ij}}\mathrm{Pr}(\widetilde{\mathbf{x}}_{ ij}\!\in\!\mathcal{S}|\mathbf{x}_{i}^{\prime}\in\mathcal{D}_{i})+p_{ij} \delta_{ij}, \tag{7}\] _for any measurable set \(\mathcal{S}\)._ _Setting \(\delta:=p_{ij}\delta_{ij}\), we can immediately see that intermittent connectivity inherently boosts privacy, since for the same \(\delta\) for any pair \(i,j\in[n]\), the privacy level \(\epsilon_{ij}\) is proportional to \(\ln(1.25p_{ij}/\delta)^{\frac{1}{2}}\), implying that a smaller \(p_{ij}\) leads to a stronger privacy guarantee. Additionally, from (6), the privacy guarantee \(\epsilon_{ij}\) is directly related to the weight \(\alpha_{ij}\). That is, if node \(i\) trusts node \(j\) more, \(\epsilon_{ij}\) can be relatively large, and consequently, node \(i\) can assign a higher weight to the data it sends to node \(j\). On the other hand, if node \(i\) does not trust node \(j\) as much, a smaller value will be assigned to \(\alpha_{ij}\). In other words, for the same noise variance \(\sigma\), node \(i\) will scale the signal \(\alpha_{ij}\) so as to reduce the effective signal-to-noise ratio in settings where a higher privacy is required. Finally, when \(p_{ij}=0\), PriCER ensures \(\alpha_{ij}=0\), implying \(\epsilon_{ij}=0\), i.e., perfect privacy, albeit zero utility. 
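The per-link guarantee of Thm. 4.1 is straightforward to evaluate numerically; the small helper below simply mirrors (6), and all argument values are illustrative.

```python
import math

def link_privacy(alpha_ij, p_ij, sigma, R=1.0, delta_ij=1e-3):
    """(epsilon_ij, p_ij * delta_ij) guarantee of the node i -> node j transmission, following (6)."""
    if p_ij == 0.0:
        return 0.0, 0.0                     # the link never transmits: perfect privacy
    eps_ij = math.sqrt(2.0 * math.log(1.25 / delta_ij)) * 2.0 * alpha_ij * R / sigma
    return eps_ij, p_ij * delta_ij

# Halving alpha_ij (or doubling sigma) halves epsilon_ij for the same delta_ij,
# while a smaller p_ij shrinks the effective failure probability p_ij * delta_ij.
print(link_privacy(alpha_ij=0.25, p_ij=0.5, sigma=0.5))
```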
Our weight optimization (Sec. V) aims to minimize the MSE subject to the privacy constraints imposed by (6)._

## V Privacy Constrained Weight Optimization

When deriving the utility-privacy tradeoff, our objective is to minimize the MSE at the PS subject to desired privacy guarantees, namely \((\underline{\epsilon}_{ij},\underline{\delta}_{ij}p_{ij})\) node-node differential privacy. Here, \(\underline{\epsilon}_{ij},\underline{\delta}_{ij}\) are pre-designated system parameters that quantify the extent to which node \(i\) trusts node \(j\) (or alternatively, how much it trusts the communication link \(i\to j\) against an eavesdropper). More specifically, we solve: \[\min_{\mathbf{A},\sigma}\;\mathrm{R}^{2}\sigma_{\mathrm{tv}}^{2}(\mathbf{p},\mathbf{P},\mathbf{A})+\sigma_{\mathrm{pr}}^{2}(\mathbf{p},\mathbf{P},\sigma) \tag{8}\] subject to the unbiasedness condition (3), the non-negativity constraints \(\alpha_{ij}\geq 0\), and the per-link privacy constraints \(\epsilon_{ij}\leq\underline{\epsilon}_{ij}\) for all \(i,j\in[n]\), with \(\epsilon_{ij}\) given by (6). We address (8) by alternating optimization over \(\sigma\) and \(\mathbf{A}\). However, when using an alternating optimization we must choose a variance that can fulfill the unbiasedness condition in the weight optimization stage. In other words, PriCER needs to add a minimum amount of noise, \(\sigma_{\rm thr}\), in order to meet the privacy constraints and the unbiasedness condition simultaneously. Thus, we introduce a necessary condition that ensures a non-empty feasible set when we optimize the weights for the chosen \(\sigma\). We visualize this in Fig. 2. Note that for a fixed \(i\in[n]\), the unbiasedness constraint together with \(\alpha_{ij}\geq 0\) defines a hyperplane \(\mathcal{H}\) in the positive quadrant of \(\mathbb{R}^{n}\) with respect to the optimization variables \(\{\alpha_{ij}\}_{j\in[n]}\). Moreover, for \(i\in[n]\), the constraints \(\alpha_{ij}\geq 0\) and \(\alpha_{ij}\leq\underline{\epsilon}_{ij}\sigma\cdot([2\ln(1.25/\underline{\delta}_{ij})]^{\frac{1}{2}}2\mathrm{R})^{-1}\) \(\forall\;j\in[n]\) together define a box \(\mathcal{B}\) aligned with the standard basis of \(\mathbb{R}^{n}\), with one of its vertices at the origin. The edge length of this box along any of the axes is proportional to \(\sigma\). When \(\sigma=0\), i.e., no privacy noise is added, \(\mathcal{B}=\mathbf{0}_{n}\), where \(\mathbf{0}_{n}\) denotes the origin of \(\mathbb{R}^{n}\). Since \(\mathcal{H}\) does not pass through \(\mathbf{0}_{n}\), (11) is infeasible. This implies that there is a minimum value of \(\sigma\) for which \(\mathcal{B}\) is big enough to have a non-empty intersection with \(\mathcal{H}\). More specifically, we require \(\sigma\) such that \(\sigma\sum_{j\in[n]}p_{j}p_{ij}\underline{\epsilon}_{ij}\Big(\big[2\ln\big(\tfrac{1.25}{\underline{\delta}_{ij}}\big)\big]^{\frac{1}{2}}2\mathrm{R}\Big)^{-1}\geq 1\) for all \(i\in[n]\), and hence we have the last feasibility constraint in (9), where \[\sigma_{\rm thr}\triangleq 2\mathrm{R}\;\max_{i\in[n]}\Bigg(\sum_{j\in[n]}p_{j}p_{ij}\underline{\epsilon}_{ij}\Big[2\ln\Big(\frac{1.25}{\underline{\delta}_{ij}}\Big)\Big]^{-\frac{1}{2}}\Bigg)^{-1}\geq 0.\] We now fix \(\mathbf{A}\) in (8) and minimize the privacy-induced variance, i.e., \[\min_{\sigma}\;\frac{1}{n^{2}}\sum_{i,j\in[n]}p_{j}p_{ij}\sigma^{2}d \tag{9}\] \[\text{s.t.:}\quad\left[2\ln\!\left(\frac{1.25}{\underline{\delta}_{ij}}\right)\right]^{\frac{1}{2}}\frac{2\alpha_{ij}\mathrm{R}}{\sigma}\leq\underline{\epsilon}_{ij}\;\;\forall i,j\in[n],\qquad\sigma\geq\sigma_{\rm thr}.\]
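Before turning to the update rules, the threshold \(\sigma_{\rm thr}\) defined above can be computed directly. The short sketch below evaluates it for made-up connectivity and trust parameters; it only implements the closed-form expression for \(\sigma_{\rm thr}\) and is not part of the optimization routine itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n, R = 4, 1.0
p = rng.uniform(0.2, 0.9, n)                    # node-PS connectivity p_i
P = rng.uniform(0.2, 0.9, (n, n))               # node-node connectivity p_ij
np.fill_diagonal(P, 1.0)
eps = np.full((n, n), 0.5)                      # target privacy levels eps_ij (illustrative)
delta = np.full((n, n), 1e-3)                   # target delta_ij (illustrative)

# sigma_thr = 2R * max_i ( sum_j p_j p_ij eps_ij [2 ln(1.25/delta_ij)]^{-1/2} )^{-1}
inner = (p[None, :] * P * eps / np.sqrt(2 * np.log(1.25 / delta))).sum(axis=1)
sigma_thr = 2 * R / inner.min()
print("sigma_thr =", sigma_thr)
```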
It can be shown (see the appendix) that the update rule is given by: \[\sigma=\max\Bigg\{\max_{i,j\in[n]}\Bigg\{\left[2\ln\!\left(\frac{1.25}{\underline{\delta}_{ij}}\right)\right]^{\frac{1}{2}}\frac{2\alpha_{ij}\mathrm{R}}{\underline{\epsilon}_{ij}}\Bigg\},\;\sigma_{\rm thr}\Bigg\}. \tag{10}\]

### _Optimizing weights \(\mathbf{A}\) for a given variance \(\sigma\)_

First, for a fixed \(\sigma\), we minimize over the weights \(\mathbf{A}\), i.e., \[\min_{\mathbf{A}}\;\mathrm{R}^{2}\sigma_{\rm tv}^{2}(\mathbf{p},\mathbf{P},\mathbf{A})\] \[\text{s.t.:}\quad\alpha_{ij}\geq 0\;\;\forall\;i,j\in[n],\quad\sum_{j\in[n]}p_{j}p_{ij}\alpha_{ij}=1\;\;\forall\;i\in[n],\quad\left[2\ln\!\left(\frac{1.25}{\underline{\delta}_{ij}}\right)\right]^{\frac{1}{2}}\!\frac{2\alpha_{ij}\mathrm{R}}{\sigma}\leq\underline{\epsilon}_{ij}\;\;\forall i,j\in[n]. \tag{11}\] The objective function of problem (11) is not convex. With this in mind, we adopt an approach similar to [15, 17], wherein (11) is minimized in two iterative stages: \((\mathrm{i})\) a convex relaxation of (11) is first minimized using the Gauss-Seidel method, and \((\mathrm{ii})\) the outcome is subsequently fine-tuned, again using Gauss-Seidel on (11). The convex relaxation is chosen to be: \[\min_{\mathbf{A}}\;\mathrm{R}^{2}\overline{\sigma}_{\rm tv}^{2}(\mathbf{p},\mathbf{P},\mathbf{A})\quad\text{subject to the same constraints as in (11)}.\] Then, the update equation for fine tuning is of the same form as (14) and (16)-(19). However, we plug in the updated quantity \(\widetilde{\alpha}_{ij}(\lambda_{i})\) for this case, which is now, \[\widetilde{\alpha}_{ij}(\lambda_{i})=\Bigg(\frac{1}{2(1-p_{j}p_{ij})}\Big(-2(1-p_{j})\!\!\sum_{l\in[n]:l\neq i}\!\!p_{lj}\alpha_{lj}^{(\ell-1)}-2p_{i}(E_{\{i,j\}}/p_{ij}-p_{ji})\alpha_{ji}^{(\ell-1)}+\lambda_{i}\Big)\Bigg)^{+}. \tag{20}\]

## VI Numerical Simulations

In Fig. 3, we consider a setup with \(n=10\) nodes that can collaborate over an Erdős-Rényi topology, i.e., \(P_{ij}=\mathrm{p_{c}}\) for \(j\neq i\) and \(P_{ii}=1\). The nodes can communicate with the PS with probabilities \(\mathbf{p}=[0.1,0.1,0.8,0.1,0.1,0.9,0.1,0.9,0.1]\), i.e., only three clients have good connectivity. Even though any node can communicate with any other node with a non-zero probability, they do not do so because they only trust a small number of immediate neighbors, which is varied along the \(\mathrm{x}\)-axis. If node \(i\) trusts node \(j\), we set \(\epsilon_{ij}=\epsilon_{\mathrm{high}}=10^{3}\) (low privacy); otherwise, \(\epsilon_{ij}=\epsilon_{\mathrm{low}}=0.1\) (high privacy). Moreover, \(\epsilon_{ii}=\epsilon_{\mathrm{high}}\). We also set \(\delta_{ij}=\delta=10^{-3}\). The \(\mathrm{y}\)-axis shows the (optimized) objective value of (8), i.e., the (worst-case) upper bound on the MSE. As is evident from Fig. 3, the MSE decreases as nodes trust more neighbors, as expected. In Fig. 4, we consider that the data at each node is generated from a Gaussian distribution \(\mathcal{N}(0,1)\), raised to the power of \(3\), and normalized. This generates a heavy-tailed distribution. Consequently, if a node that has a vector with a few large coordinate values is unable to convey its data to the PS due to a failed transmission, this can incur a significant MSE.
In this setup, only some nodes have good connectivity to the PS, i.e., \(p_{i}=p_{\mathrm{good}}=0.9\), while the remaining nodes have \(p_{i}=p_{\mathrm{bad}}=0.2\). In the naive strategy, the PS averages whatever it successfully receives, i.e., it computes the mean estimate as \(\frac{1}{n}\sum_{i\in[n]}\tau_{i}\mathbf{x}_{i}\). In contrast, in our collaborative strategy, each node trusts \(6\) other neighbors and can communicate with them with probability \(P_{ij}=0.8\). Clearly, PriCER achieves a lower MSE than the naive strategy. The plots are averaged over \(50\) realizations.

## VII Conclusions

In this paper, we considered the problem of mean estimation over intermittently connected networks with collaborative relaying subject to peer-to-peer local differential privacy constraints. The nodes participating in the collaboration do not trust each other completely and, in order to ensure privacy, they scale and perturb their local data when sharing it with others. We have proposed a two-stage consensus algorithm (PriCER) that takes these peer-to-peer privacy constraints into account to jointly optimize the scaling weights and the noise variance, so as to obtain an unbiased estimate of the mean at the PS that minimizes the MSE. Numerical simulations showed the improvement in MSE of our algorithm relative to a non-collaborative strategy for various network topologies. Although this work considers peer-to-peer privacy, there can be other sources of privacy leakage, such as central DP at the PS. Moreover, adding correlated privacy noise may help reduce the MSE even further. Our future work will include investigating these questions in more detail.

Fig. 3: Variation of worst-case MSE with trustworthy neighbors

Fig. 4: Variation of MSE with number of good-connectivity nodes
This work focuses on the problem of distributed mean estimation (DME) over networks with intermittent connectivity, where the goal is to learn a global statistic over data samples residing at distributed nodes with the help of a central server. To mitigate the impact of the intermittent links, the nodes collaborate with their neighbors to compute local consensus values, which they then relay to the central server. In such a setup, the communication between any pair of nodes must satisfy local differential privacy constraints. We study the interplay between collaborative relaying among nodes and the privacy leakage caused by data sharing and, as a result, propose a novel differentially private collaborative algorithm for DME. Finally, we present numerical simulations that substantiate our theoretical findings.
2309.15305
Non-standard quantum algebras and finite dimensional $\mathcal{PT}$-symmetric systems
In this work, $\mathcal{PT}$-symmetric Hamiltonians defined on quantum $sl(2, \mathbb R)$ algebras are presented. We study the spectrum of a family of non-Hermitian Hamiltonians written in terms of the generators of the non-standard $U_{z}(sl(2, \mathbb R))$ Hopf algebra deformation of $sl(2, \mathbb R)$. By making use of a particular boson representation of the generators of $U_{z}(sl(2, \mathbb R))$, both the co-product and the commutation relations of the quantum algebra are shown to be invariant under the $\mathcal{PT}$-transformation. In terms of these operators, we construct several finite dimensional $\mathcal{PT}$-symmetry Hamiltonians, whose spectrum is analytically obtained for any arbitrary dimension. In particular, we show the appearance of Exceptional Points in the space of model parameters and we discuss the behaviour of the spectrum both in the exact $\mathcal{PT}$-symmetry and the broken $\mathcal{PT}$-symmetry dynamical phases. As an application, we show that this non-standard quantum algebra can be used to define an effective model Hamiltonian describing accurately the experimental spectra of three-electron hybrid qubits based on asymmetric double quantum dots. Remarkably enough, in this effective model, the deformation parameter $z$ has to be identified with the detuning parameter of the system.
Ángel Ballesteros, Romina Ramírez, Marta Reboiro
2023-09-26T23:17:22
http://arxiv.org/abs/2309.15305v1
# Non-standard quantum algebras and finite dimensional \(\mathcal{PT}\)-symmetric systems

###### Abstract

In this work, \(\mathcal{PT}\)-symmetric Hamiltonians defined on quantum \(sl(2,\mathbb{R})\) algebras are presented. We study the spectrum of a family of non-Hermitian Hamiltonians written in terms of the generators of the non-standard \(U_{z}(sl(2,\mathbb{R}))\) Hopf algebra deformation of \(sl(2,\mathbb{R})\). By making use of a particular boson representation of the generators of \(U_{z}(sl(2,\mathbb{R}))\), both the co-product and the commutation relations of the quantum algebra are shown to be invariant under the \(\mathcal{PT}\)-transformation. In terms of these operators, we construct several finite dimensional \(\mathcal{PT}\)-symmetric Hamiltonians, whose spectrum is analytically obtained for any arbitrary dimension. In particular, we show the appearance of Exceptional Points in the space of model parameters and we discuss the behaviour of the spectrum both in the exact \(\mathcal{PT}\)-symmetry and the broken \(\mathcal{PT}\)-symmetry dynamical phases. As an application, we show that this non-standard quantum algebra can be used to define an effective model Hamiltonian describing accurately the experimental spectra of three-electron hybrid qubits based on asymmetric double quantum dots. Remarkably enough, in this effective model, the deformation parameter \(z\) has to be identified with the detuning parameter of the system.

_Keywords_: non-standard Hopf algebras, \(\mathcal{PT}\)-symmetry, spectrum, exceptional points, effective Hamiltonians.

## 1 Introduction

Since the celebrated work of Bender and Boettcher [1], the study of the mathematical properties [2, 3, 4, 5, 6, 7, 8] and physical applications [9, 10] of non-hermitian Hamiltonians has grown exponentially. Among non-hermitian operators, those obeying Parity-Time Reversal symmetry (\(\mathcal{PT}\)-symmetry) have received particular attention, due to the rich structure that both their spectrum and their dynamics [8] present. A characteristic feature of these Hamiltonians is that their space of model parameters consists of two regions with well-defined structure: a region with real spectrum, where the eigenvectors of the Hamiltonian are also eigenvectors of the \(\mathcal{PT}\)-symmetry operator (the so-called exact \(\mathcal{PT}\)-symmetry phase), and another region where the spectrum includes complex pair-conjugate eigenvalues (the broken \(\mathcal{PT}\)-symmetry phase), where the eigenstates of the Hamiltonian are not eigenstates of the \(\mathcal{PT}\)-symmetry operator. The boundary between these two regions is formed by the so-called Exceptional Points (EPs) [11, 12, 13]. At these values of the model parameters, two or more eigenvalues become degenerate and their eigenstates coalesce. The consequences of the existence of EPs have been intensively analysed both theoretically [10, 11, 13] and experimentally [10, 12]. In this work, we investigate the features of the spectra of a family of \(\mathcal{PT}\)-symmetric Hamiltonians defined on a non-standard Hopf algebra deformation of \(sl(2,\mathbb{R})\) [14, 15, 16, 17, 18], whose finite-dimensional boson representations were constructed in [15, 16]. By suitably modifying these representations we will be able to establish a \(\mathcal{PT}\)-invariant realisation of the \(U_{z}(sl(2,\mathbb{R}))\) Hopf algebra.
Therefore, in terms of these operators, we will be able to construct a family of \(\mathcal{PT}\)-symmetric Hamiltonians whose spectra will be analysed. Moreover, we shall show that this realisation of the quantum algebra \(U_{z}(sl(2,\mathbb{R}))\) is physically sound since it can be used to define effective Hamiltonians that reproduce the structure of the eigenvalues that have been recently obtained for Hamiltonian models describing realistic systems of semiconductor quantum dots. The paper is organised as follows. In Section 2 we review the essentials of the non-standard \(U_{z}(sl(2,\mathbb{R}))\) quantum algebra and find a boson representation for which its Hopf algebra structure is fully invariant under \(\mathcal{PT}\)-symmetry transformations. In Section 3, we construct two different families of Hamiltonians in terms of the \(\mathcal{PT}\)-symmetric generators of \(U_{z}(sl(2,\mathbb{R}))\). Afterwards, we study their spectra by introducing similarity transformations for them to obtain isospectral Hamiltonians, and we discuss the regions in the space of parameters showing \(\mathcal{PT}\)-symmetry and broken \(\mathcal{PT}\)-symmetry, as well as the associated sets of EPs. As an application of these non-hermitian Hamiltonians, in Section 4 we introduce a suitable choice of the Hamiltonian and its parameters that reproduces the spectra of a system of three electrons in an asymmetric two-dimensional double well, which has been recently implemented experimentally [19]. Conclusions and outlook are given in Section 5. ## 2 Formalism In this section, after reviewing the essentials of the non-standard \(U_{z}(sl(2,\mathbb{R}))\) quantum Hopf algebra [14, 15, 16, 17, 18], we shall introduce a boson realisation of this algebra that is shown to be invariant under the \(\mathcal{PT}\)-transformation. This will be the realisation used in the rest of the paper in order to define new non-hermitian and \(\mathcal{PT}\)-symmetric Hamiltonian models. ### Dyson's boson representation of \(sl(2,\mathbb{R})\) Let us begin by summarising the properties of the \(sl(2,R)\) Lie algebra (see, for instance, [20]) whose generators, \(\{L_{0},L_{+},L_{-}\}\), obey the well known commutation relations \[[L_{0},L_{\pm}]=\pm 2L_{\pm}\qquad[L_{+},L_{-}]=L_{0}\,. \tag{1}\] We point out that \(\mathcal{PT}\)-symmetric realisations of \(sl(2,R)\) can be constructed in terms of boson operators, e.g. \[L_{+}=-\mathbf{i}\ a^{\dagger}\qquad L_{0}=2\ a^{\dagger}a+\beta I\qquad L_{- }=-\mathbf{i}\ (a^{\dagger}a+\beta)\ a. \tag{2}\] where \(a^{\dagger}\) is the creation operator, \(a\) the annihilation operator and \(\beta\) a free real parameter which is directly related with the eigenvalue of the Casimir operator \(C=\frac{1}{2}L_{0}^{2}+L_{+}L_{-}+L_{-}L_{+}\). The realisation (2) is a modification of the Gelfan'd-Dyson (GD) one-boson realisation [21, 22]. It can be straightforwardly shown that (2) is invariant under the \(\mathcal{PT}\)-transformation, since \((\mathcal{PT})A(\mathcal{PT})^{-1}=-A\) for \(A\in\{a,a^{\dagger},\mathbf{i}\}\). As a consequence, many \(sl(2,R)\) Hamiltonian systems constructed in terms of the bosonic mapping (2) of the generators of \(sl(2,R)\) will obey \(\mathcal{PT}\)-symmetry. As an instructive example, we shall study the properties of the spectrum of the linear Hamiltonian \[H_{\mu}=\mu L_{-}+L_{+}. \tag{3}\] It is straightforward to prove that for real \(\mu\), this Hamiltonian is invariant under a \(\mathcal{PT}\)-transformation. 
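As a quick consistency check (not part of the original derivation), the realisation (2) can be tested numerically against the commutation relations (1) using boson operators truncated to an \(N\)-dimensional Fock space. Truncation only corrupts the action on the highest state \(|N-1\rangle\), so the commutators are compared on the top-left \((N-1)\times(N-1)\) block.

```python
import numpy as np

N, beta = 12, 0.3
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # truncated annihilation operator
ad = a.T                                        # truncated creation operator
num = ad @ a                                    # number operator a^dagger a

Lp = -1j * ad                                   # L_+ of Eq. (2)
L0 = 2 * num + beta * np.eye(N)                 # L_0 of Eq. (2)
Lm = -1j * (num + beta * np.eye(N)) @ a         # L_- of Eq. (2)

def comm(x, y):
    return x @ y - y @ x

# Check (1) away from the truncation boundary: all three should print True.
for lhs, rhs in [(comm(L0, Lp), 2 * Lp),
                 (comm(L0, Lm), -2 * Lm),
                 (comm(Lp, Lm), L0)]:
    print(np.allclose(lhs[:N - 1, :N - 1], rhs[:N - 1, :N - 1]))
```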
Following [23, 24], in order to find the associated spectrum, we can introduce a similarity transformation \(\eta=e^{\alpha L_{+}}e^{\beta L_{-}}e^{\gamma L_{0}}\) such that \(h_{\mu}=\eta H_{\mu}\eta^{-1}\). In the following Proposition, we state the characteristics of the spectrum of \(h_{\mu}\) depending on the sign of \(\mu\).

**Proposition 1**.: Let \(\eta=e^{\alpha L_{+}}e^{\beta L_{-}}e^{\gamma L_{0}}\). a) If \(\alpha=-\frac{e^{2\gamma}}{2\sqrt{\mu}}\) and \(\beta=e^{-2\gamma}\sqrt{\mu}\), then \(h_{\mu}=\eta H_{\mu}\eta^{-1}\) is hermitian for \(\mu>0\). b) For \(\mu<0\), \(h_{\mu}\) is a diagonal matrix with complex pair-conjugate eigenvalues.

Proof.: By using the relations presented in Appendix A, we have \[h_{\mu} = \eta(\mu L_{-}+L_{+})\eta^{-1} = L_{-}e^{-2\gamma}\left(\mu-\beta^{2}e^{4\gamma}\right)+L_{+}\left(e^{2\gamma}(\alpha\beta+1)^{2}-\alpha^{2}e^{-2\gamma}\mu\right)+L_{0}\left(\alpha e^{-2\gamma}\mu-\beta e^{2\gamma}(\alpha\beta+1)\right).\] In the bosonic realisation (2), \(L_{0}\) is the only hermitian generator. Then, to find the isospectral hermitian operator associated with \(H_{\mu}\) we need to cancel the terms including \(L_{+}\) and \(L_{-}\). Because of that, we obtain the equations \[0 = e^{2\gamma}(\alpha\beta+1)^{2}-\alpha^{2}e^{-2\gamma}\mu\,,\qquad 0 = e^{-2\gamma}\left(\mu-\beta^{2}e^{4\gamma}\right)\,,\] whose solution is \((\alpha_{0},\beta_{0})=\left(-\frac{e^{2\gamma}}{2\sqrt{\mu}},\,e^{-2\gamma}\sqrt{\mu}\right)\). In that case, \[h_{\mu}=\eta(\mu L_{-}+L_{+})\eta^{-1}=\sqrt{\mu}L_{0}=\sqrt{\mu}(2a^{\dagger}a+\beta I). \tag{4}\] Thus, \(h_{\mu}\) has real spectrum for \(\mu>0\) and has complex pair-conjugate eigenvalues for \(\mu<0\).

### The non-standard quantum algebra \(U_{z}(sl(2,\mathbb{R}))\)

Many authors have considered different deformations of the algebra \(sl(2,R)\) and have applied them in different contexts (see, for instance, [25, 26, 27, 28]). We recall that among all possible deformations of a given Lie algebra, a distinguished class is defined by the Hopf algebra deformations of the Universal Enveloping Algebra (UEA) of such a Lie algebra. These deformed Hopf algebras are called Quantum Universal Enveloping Algebras (QUEA) or, in short, quantum algebras, and are defined as simultaneous and compatible deformations of both the commutation rules of the Lie algebra and the coproduct map that defines its tensor product representations [29, 30]. In the case of \(sl(2,R)\), we will deal with the so-called non-standard quantum deformation \(U_{z}(sl(2,\mathbb{R}))\) [15, 16] (the standard deformation is the Drinfel'd-Jimbo one [31, 32]). Its generators, named \(\{j_{0}^{(z)},j_{+}^{(z)},j_{-}^{(z)}\}\), where \(z\) is a real deformation parameter, define the quantum algebra relations through the commutation rules \[[j_{0}^{(z)},j_{+}^{(z)}]=\tfrac{e^{2zj_{+}^{(z)}}-1}{z}\ \ \ \ [j_{0}^{(z)},j_{-}^{(z)}]=-2j_{-}^{(z)}+z(j_{0}^{(z)})^{2}\ \ \ \ [j_{+}^{(z)},j_{-}^{(z)}]=j_{0}^{(z)}\,, \tag{5}\] which are just a generalisation of (1); the latter is smoothly recovered in the \(z\to 0\) limit.
Tensor product representations of the quantum algebra (5) are obtained through the so-called coproduct map \[\Delta(j_{0}^{(z)}) =1\otimes j_{0}^{(z)}+j_{0}^{(z)}\otimes\rme^{2zj_{+}^{(z)}},\] \[\Delta(j_{-}^{(z)}) =1\otimes j_{-}^{(z)}+j_{-}^{(z)}\otimes\rme^{2zj_{+}^{(z)}},\] \[\Delta(j_{+}^{(z)}) =1\otimes j_{+}^{(z)}+j_{+}^{(z)}\otimes 1, \tag{6}\] which defines an algebra homomorphism between \(U_{z}(sl(2,\mathbb{R}))\) and \(U_{z}(sl(2,\mathbb{R}))\otimes U_{z}(sl(2,\mathbb{R}))\). As expected, the limit \(z\to 0\) leads to the usual (undeformed) rule for the construction of \(sl(2,R)\) tensor product representations. We are interested in getting the non-standard deformed generalisation of the \(\mathcal{PT}\)-symmetric GD realisation (2). Such a result can be obtained by starting from the \(U_{z}(sl(2,\mathbb{R}))\) boson representation obtained in [15, 16], together with the definition of the new set of operators, \(\{J_{0}^{(z)},J_{+}^{(z)},J_{-}^{(z)}\}\) as \[J_{0}^{(z)} = j_{0}^{(-\mathbf{i}z)},\] \[J_{\pm}^{(z)} = \mp\mathbf{i}\ j_{\pm}^{(-\mathbf{i}z)}\,. \tag{7}\] In such a way we obtain \[J_{+}^{(z)} = = -\mathbf{i}a^{{}^{\dagger}},\] \[J_{0}^{(z)} = \mathbf{i}\tfrac{\rme^{-2\mathbf{i}za^{{}^{\dagger}}}-1}{z}a+ \beta\tfrac{\rme^{-2\mathbf{i}za^{{}^{\dagger}}}+1}{2}, \tag{8}\] \[J_{-}^{(z)} = \tfrac{\rme^{-2\mathbf{i}za^{{}^{\dagger}}}-1}{2z}a^{2}-\mathbf{ i}\beta\tfrac{\rme^{-2\mathbf{i}za^{{}^{\dagger}}}+1}{2}a-z\beta^{2}\tfrac{\rme^{-2 \mathbf{i}za^{{}^{\dagger}}}-1}{8}\,.\] These operators can be straightforwardly shown to be \(\mathcal{PT}\)-symmetric, and we stress that the transformation \(z\to-\mathbf{i}z\) is essential in order to recover the \(\mathcal{PT}\) symmetry of this boson representation of the deformed algebra. The action of the operators \(\{J_{0}^{(z)},J_{+}^{(z)},J_{-}^{(z)}\}\) on the eigenstates \(\{|m\rangle,(m=0,1,\ldots\infty)\}\) of the usual boson number operator \(a^{{}^{\dagger}}a\), provides their lower-bounded representation, namely \[J_{+}^{(z)}|m\rangle =-{\bf i}\sqrt{m+1}|m+1\rangle,\] \[J_{0}^{(z)}|m\rangle =(2m+\beta)|m\rangle\] \[\quad+\sum_{k\geq 1}\frac{(-2{\bf i}z)^{k}}{k!}\sqrt{\frac{(m+k)!}{m! 
}}\left(\frac{2m}{k+1}+\frac{\beta}{2}\right)|m+k\rangle,\] \[J_{-}^{(z)}|m\rangle =-{\bf i}\sqrt{m}(m-1+\beta)|m-1\rangle\] \[\quad-{\bf i}\sum_{k\geq 1}\frac{(-2{\bf i}z)^{k}}{k!}\sqrt{\frac{(m+k)!}{m!}}\left[\frac{m}{\sqrt{m+k}}\left(\frac{m-1}{k+1}+\frac{\beta}{2}\right)|m-1+k\rangle+\cdots\right]. \tag{9}\] For \(\beta=1-d\), with \(d\) a positive integer, these operators give rise to a finite-dimensional irreducible representation of dimension \(d=|\beta-1|\), which will be used in what follows. Moreover, the coproduct map (6) is also invariant under \(\mathcal{PT}\) transformations, which means that \[(\mathcal{PT})\Delta(J_{0}^{(z)})(\mathcal{PT})^{-1} =\Delta(J_{0}^{(z)}),\] \[(\mathcal{PT})\Delta(J_{+}^{(z)})(\mathcal{PT})^{-1} =\Delta(J_{+}^{(z)}),\]
\[(\mathcal{PT})\Delta(J_{-}^{(z)})(\mathcal{PT})^{-1} =\Delta(J_{-}^{(z)}). \tag{15}\] In the rest of the paper, we shall apply the previous results to the study of a \(\mathcal{PT}\)-symmetric family of Hamiltonians obtained from the finite-dimensional irreducible representation of the \(\mathcal{PT}\)-invariant generators (8) of the non-standard \(U_{z}(sl(2,\mathbb{R}))\) Hopf algebra. ## 3 Results and discussion In this Section, we present a large family of \(\mathcal{PT}\)-symmetric Hamiltonians defined as functions of the operators (7) under the realisation (8). We will show the appearance of Exceptional Points in the space of model parameters and we will discuss the behaviour of the spectrum both in the exact \(\mathcal{PT}\)-symmetric and the broken \(\mathcal{PT}\)-symmetric dynamical phases. As an initial step in the understanding of the problem, we shall study the natural deformed generalisation of the Hamiltonian of (3), namely the operator \[H_{\mu}=\mu J_{-}^{(z)}+J_{+}^{(z)}. \tag{16}\] If we consider the representation of dimension 2 of the generators (this means (9) with \(\beta=-1\)), given by \[J_{+}^{(z)}=\left(\begin{array}{cc}0&0\\ -\mathbf{i}&0\end{array}\right),\quad J_{-}^{(z)}=\left(\begin{array}{cc}0& \mathbf{i}\\ \frac{\mathbf{i}z^{2}}{4}&z\end{array}\right),\quad J_{0}^{(z)}=\left( \begin{array}{cc}-1&0\\ \mathbf{i}z&1\end{array}\right), \tag{17}\] the Hamiltonian of (16) reads \[H_{\mu}=\left(\begin{array}{cc}0&\mathbf{i}\mu\\ \frac{1}{4}\mathbf{i}z^{2}\mu-\mathbf{i}&z\mu\end{array}\right). \tag{18}\] Indeed, in this case, analytical results can be obtained: As the Hamiltonian of (16) obeys \(\mathcal{PT}\)-symmetry, we can construct an operator \(S\) such as \(SH=H^{\dagger}S\). For instance \[S=\left(\begin{array}{cc}\frac{z^{2}}{2}+\frac{2}{\mu}&-\mathbf{i}z\\ \mathbf{i}z&2\end{array}\right), \tag{19}\] where the operator \(S\) is self-adjoint and for \(\mu>0\) is positive definite. It is now possible to construct a self-adjoint Hamiltonian through the similarity transformation \(h_{\mu}=S^{1/2}\)\(H_{\mu}\)\(S^{-1/2}\), where \[h_{\mu}=\left(\begin{array}{cc}\frac{z\mu\left((z^{2}+4)\mu-4\right)}{2 \left((z^{2}+4)\mu+8\sqrt{\mu}+4\right)}&-\frac{\mathbf{i}\sqrt{\mu}\left((z^ {2}-4)\mu-8\sqrt{\mu}-4\right)}{(z^{2}+4)\mu+8\sqrt{\mu}+4}\\ \frac{\mathbf{i}\sqrt{\mu}\left((z^{2}-4)\mu-8\sqrt{\mu}-4\right)}{(z^{2}+4) \mu+8\sqrt{\mu}+4}&\frac{z\mu\left((z^{2}+4)\mu+16\sqrt{\mu}+12\right)}{2\left( (z^{2}+4)\mu+8\sqrt{\mu}+4\right)}\end{array}\right). \tag{20}\] For \(\mu<0\), the operator \(h_{\mu}\) is isospectral with respect to \(H_{\mu}\) but it is not hermitian. Due to the fact that \(H_{\mu}\) and \(H_{\mu}^{\dagger}\) are similar operators, their spectrum is either real or contains complex pair conjugate eigenvalues. In this example, the eigenvalues take the form \[E_{\pm}=\frac{\mu}{2}z\pm\sqrt{\mu}. \tag{21}\] We also recall that in the treatment of Hamiltonians with \(\mathcal{PT}\)-symmetry, a similar transformation between \(H\) and its adjoint can be obtained by constructing a bi-orthogonal basis [33]. In fact, following [33], let \(\mathcal{A}_{H}=\{\widetilde{\phi}_{j}\}_{j=1\ldots N_{max}}\) the eigenvectors of a non-hermitian operator \(H\), i.e \(H\widetilde{\phi}_{j}=\widetilde{E}_{j}\widetilde{\phi}_{j}\). In the same way we denote \(\mathcal{A}_{H^{\dagger}}=\{\widetilde{\psi}_{j}\}_{j=1\ldots N_{max}}\) as the eigenvectors of \(H^{\dagger}\), and therefore \(H^{\dagger}\overline{\psi}_{j}=\overline{E}_{j}\overline{\psi}_{j}\). 
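The two-dimensional example above is simple enough to be verified numerically. The following sketch (with arbitrary values of \(z\) and \(\mu>0\)) checks that the matrices (17) close deformed commutation relations of the same form as (5), that the eigenvalues of \(H_{\mu}\) in (18) reproduce (21), and that \(S\) in (19) indeed satisfies \(SH_{\mu}=H_{\mu}^{\dagger}S\).

```python
import numpy as np

z, mu = 0.7, 2.3
Jp = np.array([[0, 0], [-1j, 0]])                          # J_+ of (17)
Jm = np.array([[0, 1j], [1j * z**2 / 4, z]])               # J_- of (17)
J0 = np.array([[-1, 0], [1j * z, 1]])                      # J_0 of (17)

comm = lambda X, Y: X @ Y - Y @ X
expJp = np.eye(2) + 2 * z * Jp                             # J_+ is nilpotent, so exp(2zJ_+) = 1 + 2zJ_+
print(np.allclose(comm(J0, Jp), (expJp - np.eye(2)) / z))  # [J_0, J_+] = (e^{2zJ_+} - 1)/z
print(np.allclose(comm(J0, Jm), -2 * Jm + z * J0 @ J0))    # [J_0, J_-] = -2J_- + z J_0^2
print(np.allclose(comm(Jp, Jm), J0))                       # [J_+, J_-] = J_0

H = mu * Jm + Jp                                           # Hamiltonian (16), i.e. (18)
S = np.array([[z**2 / 2 + 2 / mu, -1j * z], [1j * z, 2]])  # operator (19)
print(np.sort(np.linalg.eigvals(H).real))                  # numerical spectrum (real for mu > 0)
print(sorted([z * mu / 2 - np.sqrt(mu), z * mu / 2 + np.sqrt(mu)]))  # formula (21)
print(np.allclose(S @ H, H.conj().T @ S))                  # S H = H^dagger S
```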
If H is diagonalisable, the sets \(\mathcal{A}_{H}\) and \(\mathcal{A}_{H^{\dagger}}\) form a bi-orthonormal set of \(H\), i.e. \(\langle\overline{\psi}_{i}|\widetilde{\phi}_{j}\rangle=\delta_{ij}\) and \(\widetilde{E}_{j}=\overline{E}_{j}^{*}\). Following [33], for a pseudo-hermitian diagonalisable Hamiltonian with real spectrum we can define a symmetry operator \(S_{\psi}\) so that \(S_{\psi}H=H^{\dagger}S_{\psi}\). Moreover, in terms of \(\overline{\psi}_{j}\), the operator \(S_{\psi}\) is given by \[S_{\psi}=\sum_{j=1}^{N_{max}}|\overline{\psi}_{j}\rangle\langle\overline{\psi }_{j}|. \tag{22}\] When this operator is self-adjoint and positive definite, an inner product can be implemented as \(\langle f|g\rangle_{S_{\psi}}=\langle f|S_{\psi}g\rangle\). If \(S\) is not positive definite we are then forced to work within the framework of the formalism of Krein spaces, as it has been described in detail in [33]. It is also worth mentioning that, in general, the constant \(\mu\) generates only a scalar distortion in the spectrum of the Hamiltonian (16). Therefore, according to the sign of the coupling constant \(\mu\), we can make a change of variables that allow us to restrict our study to the two Hamiltonians \(H_{1}\) and \(H_{-1}\). **Proposition 2**.: Hamiltonians \(H_{\mu}=\mu J_{-}^{(z)}+J_{+}^{(z)}\) can be mapped into \(H_{1}\) and \(H_{-1}\) for \(\mu>0\) and \(\mu<0\), respectively. Proof.: If \(\mu>0\) \[H_{\mu} = \mu J_{-}^{(z)}+J_{+}^{(z)}\] \[= \sqrt{\mu}\left(\sqrt{\mu}J_{-}^{(z)}+\frac{1}{\sqrt{\mu}}J_{+}^{ (z)}\right)\,.\] Through the change of parameters \[\lambda:=\sqrt{\mu}z\ \ \ \ \ b_{-}=\sqrt{\mu}a\ \ \ \ \ b_{+}=\frac{1}{\sqrt{\mu}}a^{\dagger}\] we find a new bosonic representation given by \(\{b_{+},b_{-}\}\) and a rescaled deformation parameter called \(\lambda\) such that \(H_{\mu}\) is rewritten as \[H_{1}=J_{-}^{(\lambda)}+J_{+}^{(\lambda)}\,.\] In the same way, if \(\mu<0\), we take \(-\mu=\nu>0\) and \[H_{-\nu} = -\nu J_{-}^{(z)}+J_{+}^{(z)}\] \[= -\sqrt{\nu}\left(\sqrt{\nu}J_{-}^{(z)}-\frac{1}{\sqrt{\mu}}J_{+}^{ (z)}\right)\,.\] With the new parameters \[\lambda:=\sqrt{\nu}z\ \ \ \ \ b_{-}=\sqrt{\nu}a\ \ \ \ \ b_{+}=\frac{1}{\sqrt{\nu}}a^{\dagger} \tag{23}\] again the equivalence between \(H_{\mu}\) and \(-H_{-1}=J_{-}^{(\lambda)}-J_{+}^{(\lambda)}\) can be established. As it could be expected, difficulties in finding an analytical expression for \(h_{\mu}\) increase when we consider higher dimensional representations. In fact, the exact form of the spectrum of the Hamiltonian (16) for an arbitrary finite-dimensional irreducible representation is not known. Nevertheless, the aim of this paper consists in presenting other families of Hamiltonians defined on \(U_{z}(sl(2,\mathbb{R}))\) that can be exactly solved. In particular, let us consider the family of Hamiltonians given by \[H(\mu_{+},\mu_{-},\mu_{0})=\mu_{-}\ J_{-}^{(z)}+\mu_{+}[J_{0}^{(z)},J_{+}^{(z)} ]+\mu_{0}J_{0}^{(z)}, \tag{24}\] with parameters \((\mu_{+},\mu_{-},\mu_{0})\in\mathbb{R}\) and the commutator \([J_{0}^{(z)},J_{+}^{(z)}]\) given in (5). Following [23], in order to characterise the spectrum we shall work with Hamiltonians which are isospectral partners of \(H\). Therefore, let us introduce the similarity transformations \(\Upsilon_{\pm}\) given by \[\Upsilon_{\pm}=\rme^{\eta\ J_{0}^{(z)}}\rme^{\kappa_{\pm}\ J_{+}^{(z)}},\quad \kappa_{\pm}=\pm\frac{1}{\mu_{-}}\left(\sqrt{\mu_{0}^{2}+2\mu_{+}\mu_{-}}-\mu _{0}\right). 
\tag{25}\] The transformed Hamiltonians can be obtained by using the expressions provided in A, namely \[\mathcal{H}_{\pm}=\Upsilon_{\pm}\,H\,\Upsilon_{\pm}^{-1}=\mu_{-}\rme^{-2\eta} J_{-}^{(z)}+z\mu_{-}\rme^{-\eta}\sinh(\eta)\ (J_{0}^{(z)})^{2}\pm\sqrt{\mu_{0}^{2}+2\mu_{+}\mu_{-}}\ J_{0}^{(z)}. \tag{26}\] For any value of the parameters \((\mu_{+},\mu_{-},\mu_{0})\), the Hamiltonian \(\mathcal{H}\) of (26) in the limit \(\eta\to\infty\) reads \[\mathfrak{h}_{\pm}(\mu_{+},\mu_{-},\mu_{0})=z\ \mu_{-}\ (J_{0}^{(z)})^{2}\pm \sqrt{\mu_{0}^{2}+2\mu_{+}\mu_{-}}\ J_{0}^{(z)}\,, \tag{27}\] and any finite dimensional irreducible representation of \(\mathfrak{h}_{\pm}(\mu_{+},\mu_{-},\mu_{0})\) of (27), is given by a triangular matrix of dimension \(d=|\beta-1|\). Then the following proposition can be proven through a straightforward computation: **Proposition 3.** The Hamiltonians \(\mathfrak{h}_{-}(\mu_{+},\mu_{-},\mu_{0})\), and \(\mathfrak{h}_{+}(\mu_{+},\mu_{-},\mu_{0})\) of (27) are isospectral operators, and their spectrum \(\sigma\) is given by \[\sigma=\left\{\begin{array}{ll}\frac{\varepsilon}{2}\ \mu_{-}\ (2k+1)^{2}\pm(2k+1)\sqrt{\mu_{0}^{2}+2\mu_{+}\mu_{-}},&\mbox{even}\ d,\quad 0\leq k\leq\frac{d-2}{2}\\ \frac{\varepsilon}{2}\ \mu_{-}\ (2k)^{2}\pm 2k\sqrt{\mu_{0}^{2}+2\mu_{+}\mu_{-}},&\mbox{odd}\ d,\quad 0\leq k\leq\frac{d-1}{2}.\end{array}\right. \tag{28}\] As can be observed from (28), when \(\mu_{0}+2\mu_{+}\mu_{-}>0\), the spectrum of \(h_{\pm}\) is real (_i.e._, we are in the exact \(\mathcal{PT}\)-symmetry phase). On the other hand, if \(\mu_{0}+2\mu_{+}\mu_{-}<0\), the spectrum consists of complex pair conjugate eigenvalues, thus indicating a broken \(\mathcal{PT}\)-symmetry phase. In the model space of parameters, the points at which \(\mu_{0}+2\mu_{+}\mu_{-}=0\) are EPs. These points provide the boundary between the two dynamical phases of the system. It is worth noting that the Hamiltonian in (24) may initially seem much more intricate compared to the Hamiltonian given in (16), given that the commutator involves the exponential of the operator \(J_{+}^{(z)}\). Nevertheless, it turns out that for this particular group of operators, it is possible to determine the spectrum explicitly by using similarity transformations, in contradistinction to what happens with the Hamiltonian \(\mu J_{-}^{(z)}+J_{+}^{(z)}\). Moreover, the very same spectral analysis can be straightforwardly generalised to the family of operators \[H_{g}(\mu_{+},\mu_{-},\mu_{0})=\mu_{-}\ J_{-}^{(z)}+\mu_{+}[J_{0}^{(z)},J_{+}^{( z)}]+\mu_{0}g(J_{0}^{(z)})\,, \tag{29}\] with \(g\) being a generic function of \(J_{0}^{(z)}\). In the following, we analyse the phase structure of the spectrum of the Hamiltonian \(\mathfrak{h}_{+}(\mu_{+},\mu_{-},\mu_{0})\) (27). As a first step, in order to discuss the appearance of EPs in the present model, let us take \(|\mu_{-}|=-|\mu_{+}|=\mu\) and \(\nu=\mu_{0}/\mu\). In this manner we get \[h_{-}(\mu,\nu)=\mathfrak{h}_{+}(-\mu,\mu,\mu\ \nu)=\mu(z\ (J_{0}^{(z)})^{2}+ \sqrt{\nu^{2}-2}\ J_{0}^{(z)}). \tag{30}\] In Figure 1, we show the behaviour of its spectrum as a function of \(\nu\), for the undeformed Hamiltonian with \(z=0\). Panels (a) and (b) correspond to values of \(\beta=-4\) (dimension \(d=5\)), and Panels (c) and (d) correspond to values of \(\beta=-9\) (dimension \(10\)). In Panels (a) and (c) we display the behaviour of the real part of the eigenvalues. In Panels (b) and (d), the imaginary part of the eigenvalues is depicted. 
It can be seen the presence of EPs of order \(2\) and \(5\) at \(|\nu|=\sqrt{2}\), for the \(d=5\) and \(d=10\) cases, respectively. In Figures 2 and 3, we display the spectrum of the deformed Hamiltonian \(h_{-}(\mu,\nu)\) in units of \(\mu\), as a function of both \(\nu\) and the deformation parameter \(z\), for dimensions \(d=5\) and \(d=6\). In Panels (a) and (b) we plot the real and the imaginary part of the eigenvalues, respectively. In Panels (c) and (d), we present Figure 1: Behaviour of the spectrum of \(H_{-}(\mu,\nu)\) for \(z=0\), as a function of \(\nu\). The values are given in units of \(\mu\). Panels (a) and (b) correspond to values of \(\beta=-4\) (dimension \(d=5\)), and Panels (c) and (d) correspond to values of \(\beta=-9\) (dimension \(10\)). In Panels (a) and (c) we display the behaviour of the real part of the eigenvalues. In Panels (b) and (d), the imaginary part of the eigenvalues is depicted. case \(z=2.5\). The real and imaginary parts of the eigenvalues are presented in (c) and (d), respectively. The EPs occur at values of \(\nu=\pm\sqrt{2}\). For the Hamiltonian of (30), \(h_{-}(\mu,\nu)\), the EPs lie between the region with exact \(\mathcal{PT}\) symmetry (real spectrum) and the region of broken \(\mathcal{PT}\)-symmetry, with pairs of complex conjugate energies. As a second example, let us consider \(|\mu_{-}|=|\mu_{+}|=\mu\) and \(\nu=\mu_{0}/\mu\): \[h_{+}(\mu,\nu)=\mathfrak{h}_{+}(\mu,\mu,\mu\ \nu)=\mu(z\ (J_{0}^{(z)})^{2}+\sqrt{ \nu^{2}+2}\ J_{0}^{(z)}). \tag{31}\] In this case, the spectrum of \(h_{+}(\mu,\nu)\), \(\sigma(h_{+}(\mu,\nu))\), takes real values. In Figure 4, we plot the spectrum of \(h_{+}(\mu,\nu)\) in units of \(\mu\), as a function of \(\nu\) and \(z\). Panels (a) and (c) correspond to the results obtained for dimension \(d=5\), while Panels (b) and (d) correspond to the results for dimension \(d=6\). In Panels (c) and (d) we present the projections of the graphs at \(\nu=1\), as a function of z. It is important to emphasise that, as a function of \(z\), eigenvalues form 'bands' composed of two energies. The distance between consecutive bands is governed by the second term of (28). This will be of the outmost relevance when dealing with the applications presented in the next Section. Finally, we shall consider another family of exactly solvable Hamiltonians written Figure 2: Figure 2 shows the spectrum of the Hamiltonian \(h_{-}(\mu,\nu)\) in units of \(\mu\), as a function of \(\nu\) and \(z\), for dimensions \(d=5\). In Panels (a) and (b) we plot the real and the imaginary part of the eigenvalues, respectively. In Panels (c) and (d), we present the projection for \(z=2.5\). The real and imaginary parts of the eigenvalues are presented in (c) and (d), respectively. in terms of the generators of \(U_{z}(sl(2,\mathbb{R}))\), namely \[H_{0}=\mu_{-}J_{-}^{(z)}+\sum_{n=1}^{N}a_{n}\ \left[J_{0}^{(z)}\right]^{n} \tag{32}\] where \(N\in\mathbb{Z}^{+}\cup\infty\). As in the previous cases, by using the similarity transformation given now by the operator \(e^{\alpha J_{0}^{(z)}}\) (see A for computations) and afterwards by taking the limit \(\alpha\to\infty\), we can map \(H_{0}\) into a Hamiltonian \(h_{0}\), such that its matrix representation is given in terms of triangular matrices: \[h_{0}=\frac{z}{2}\mu_{-}(J_{0}^{(z)})^{2}+\sum_{n=1}^{N}a_{n}\left[J_{0}^{(z)} \right]^{n}. 
\tag{33}\] As a consequence, we can state the following **Proposition 4**.: The spectrum \(\sigma(h_{0})\) of \(h_{0}\) is given by \[\sigma(h_{0})=\left\{\begin{array}{ll}\frac{z}{2}\mu_{-}(2k-1)^{2}+u_{k}^{ \pm},&\mbox{ for }1\leq k\leq\frac{d}{2}\mbox{ and even }d\\ \frac{z}{2}\mu_{-}4(k-1)^{2}+v_{k}^{\pm},&\mbox{ for }1\leq k\leq\frac{d+1}{2} \mbox{ and odd }d\end{array}\right. \tag{34}\] Figure 3: The spectrum of the Hamiltonian \(h_{+}(\mu,\nu)\) in units of \(\mu\), as a function of \(\nu\) and \(z\), for dimensions \(d=6\), is depicted in Figure 3. In Panels (a) and (b) we plot the real and the imaginary part of the eigenvalues, respectively. In Panels (c) and (d), we present a cut in the graph for \(z=2.5\). The real and imaginary parts of the eigenvalues are presented in (c) and (d), respectively. where \[u_{k}^{\pm} = \sum_{n=1}^{N}(\pm 1)^{n}(2k-1)^{n}a_{n}\,,\] \[v_{k}^{\pm} = \sum_{n=1}^{N}(\pm 1)^{n}2^{n+1}(k-1)^{n+1}a_{n}\,.\] Clearly, for \(a_{n}\in\mathbb{R}\) the eigenvalues of \(H_{0}\) of (32) belong to \(\mathbb{R}\). It can be observed from (34) that for \(a_{n}\neq 0\), the characteristic degeneracy of the spectrum of the operator \(J_{-}\) is broken, giving rise again to bands of pairs of parallel lines separated by a controlled gap. Note that the gap is symmetric when \(u_{k}^{+}=u_{k}^{-}\) resp. (\(v_{k}^{+}=v_{k}^{-}\)), i.e when \(a_{2n}=0\), for even (odd) dimension. When odd coefficients are zero, i.e. all \(a_{2n+1}=0\), we have \(u_{k}^{+}=u_{k}^{-}\) (resp. (\(v_{k}^{+}=v_{k}^{-}\))) for even (odd) dimension, thus resulting in a Hamiltonian with degenerate spectrum. As an specific example, we can consider the Hamiltonians \[S(\mu_{-},\lambda)=\mu_{-}J_{-}^{(z)}+\sin(\lambda J_{0}^{(z)})\,, \tag{35}\] \[C(\mu_{-},\lambda)=\mu_{-}J_{-}^{(z)}+\cos(\lambda J_{0}^{(z)})\,. \tag{36}\] The operators \(S(\mu_{-},\lambda)\) and \(C(\mu_{-},\lambda)\) aare defined by power series with particular values of \(a_{n}\), with even and odd null coefficients, respectively. In Figure 5 we represent, Figure 4: We plot the spectrum of \(h_{+}(\mu,\nu)\) in units of \(\mu\), as a function of \(\nu\) and \(z\). Panels (a) and (c) correspond to the graphs for \(d=5\) and Panels (b) and (d) to \(d=6\). In Panels (c) and (d) we depict a projection of the plots (a) and (d) for \(\nu=1\). with solid lines, the spectrum of Hamiltonians of (35) and (36). We have plotted the case \(\lambda=1\) for dimension \(d=6\). As a guide, with dashed lines, we plot the spectrum of the Hamiltonian of (32) when \(a_{n}=0\)\(\forall n\). It can be seen from Panel (a) that, the spectrum of \(S(\mu_{-},1)\) has pairs of parallel lines symmetrically separated with respect to the spectrum of \(\mu_{-}(J_{-}^{(z)})^{2}\) by a controlled gap given by \(\pm\sin(1),\pm\sin(3),\pm\sin(5)\), respectively. For \(C(\mu_{-},1)\) we can see in Panel (b) that the degeneracy of \(J_{-}^{(z)}\) is preserved, albeit displaced into a new double degenerate spectrum given by \(\{\frac{z}{2}+\cos(1),\frac{9z}{2}+\cos(3),\frac{25z}{2}+\cos(5)\}\), due to the parity of the \(\cos(x)\) function. ## 4 Applications It is worth stressing that, recently, the separation in bands of parallel lines has been observed in the spectra of three electrons confined in an asymmetric two-dimensional double well, implemented by a two-centre-oscillator potential. 
This system turns out to be the cornerstone of two-dimensional (2D) semiconductor-based three-electron hybrid- double-quantum-dot (HDQD) qubits (see [19, 34] and references therein). In the literature, theoretical model Hamiltonians have been developed to reproduce these experimental results [35, 36]. The results presented in the previous Section strongly suggest the possibility of making use of specific \(\mathcal{PT}\)-symmetric Hamiltonians defined on the non-standard \(U_{z}(sl(2,\mathbb{R}))\) quantum algebra in order to model these relevant systems. We show in the following that this will be indeed the case. Let us consider the effective Hamiltonian of [34] given by the equivalent form \[H_{e}=\left(\begin{array}{cccc}\delta L+\frac{\varepsilon}{2}&-t_{3}&0&t_{4 }\\ -t_{3}&-\frac{\varepsilon}{2}&t_{1}&0\\ 0&t_{1}&\frac{\varepsilon}{2}&-t_{2}\\ t_{4}&0&-t_{2}&\delta R-\frac{\varepsilon}{2}\end{array}\right), \tag{37}\] where the parameter \(\varepsilon\) models the detuning of three-electron hybrid qubits based on GaAs asymmetric double quantum dots, and with coupling constants \(\delta L=3,\delta R=95.8,t_{1}=1.8,t_{2}=7.1,t_{3}=11.5,t_{4}=6.3\) (in units of [GHz]) [19]. The eigenvalues of the Hamiltonian \(H_{e}\) of (37) can be obtained analytically as the roots of a fourth-degree Figure 5: Spectra of the Hamiltonians \(S(\mu_{-},\lambda)\) and \(C(\mu_{-},\lambda)\) in units of \(\mu_{-}\), as a function of \(z\), for dimension \(d=6\) and \(\lambda=1\) (solid lines). Dashed lines plot the spectrum of the Hamiltonian of (32) for \(a_{n}=0\)\(\forall n\). polynomial: \[p(\lambda)=\lambda^{4}+c_{3}\lambda^{3}+c_{2}\lambda^{2}+c_{1}\lambda+c_{0}. \tag{38}\] The explicit form of the coefficients \(c_{k}\) is given in C. For \(\epsilon\) sufficiently large the eigenvalues of the Hamiltonian (37) can be approximated by two sets of eigenvalues: \[E_{1,\pm} = \frac{1}{2}\left(\delta L\pm\sqrt{(\delta L+\varepsilon)^{2}+4t_{ 3}^{2}}\right),\] \[E_{2,\pm} = \frac{1}{2}\left(\delta R\pm\sqrt{(\delta R-\varepsilon)^{2}+4t_{ 2}^{2}}\right). \tag{39}\] An effective non-standard quantum algebra Hamiltonian \(H_{eff}\) reproducing the behaviour of the spectrum of \(H_{e}\) (37) can be obtained through \[H_{eff}=\left(\begin{array}{cc}1&0\\ 0&0\end{array}\right)\otimes H_{1}+\left(\begin{array}{cc}0&0\\ 0&1\end{array}\right)\otimes H_{2}, \tag{40}\] with \[H_{1} = \frac{1}{2}(\varepsilon+\delta L)J_{0}^{(\varepsilon)}+\frac{t_{ 3}^{2}}{\delta L}\varepsilon J_{+}^{(\varepsilon)}+\frac{\delta L}{ \varepsilon}J_{-}^{(\varepsilon)},\] \[H_{2} = \frac{1}{2}(\varepsilon-\delta R)J_{0}^{(\varepsilon)}+\frac{t_{ 2}^{2}}{\delta R}\varepsilon J_{+}^{(\varepsilon)}+\frac{\delta R}{\varepsilon }J_{-}^{(\varepsilon)}. \tag{41}\] by identifying the deformation parameter with the detuning, therefore \(z=\varepsilon\), and by making use of the two-dimensional irreducible representation of the \(U_{z}(sl(2,\mathbb{R}))\) quantum algebra (17) obtained from (9) with \(\beta=-1\). 
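As a numerical illustration of the approximation (39), the sketch below diagonalises \(H_{e}\) of (37) with the coupling constants quoted above and compares the exact eigenvalues with \(E_{1,\pm}\) and \(E_{2,\pm}\); the detuning value is arbitrary and is chosen away from the avoided crossings, where (39) is not expected to be accurate.

```python
import numpy as np

dL, dR, t1, t2, t3, t4 = 3.0, 95.8, 1.8, 7.1, 11.5, 6.3   # couplings from the text (GHz)
eps = 20.0                                                 # illustrative detuning (GHz)

He = np.array([[dL + eps / 2, -t3,        0.0,       t4],
               [-t3,          -eps / 2,   t1,        0.0],
               [0.0,           t1,        eps / 2,  -t2],
               [t4,            0.0,      -t2,        dR - eps / 2]])

exact = np.sort(np.linalg.eigvalsh(He))                    # exact spectrum of (37)
approx = np.sort([0.5 * (dL - np.sqrt((dL + eps)**2 + 4 * t3**2)),
                  0.5 * (dL + np.sqrt((dL + eps)**2 + 4 * t3**2)),
                  0.5 * (dR - np.sqrt((dR - eps)**2 + 4 * t2**2)),
                  0.5 * (dR + np.sqrt((dR - eps)**2 + 4 * t2**2))])   # Eq. (39)
print(exact)
print(approx)
print("max deviation (GHz):", np.abs(exact - approx).max())  # small away from the crossings
```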
To obtain an isospectral Hamiltonian to \(H_{eff}\), we construct the symmetry operator \(S\) of (22), and from it the similarity transformation given by its square root \(S^{1/2}\): \[h_{eff} = S^{1/2}\,H_{eff}\,S^{-1/2} \tag{42}\] \[= \left(\begin{array}{cc}h_{1}&0\\ 0&h_{2}\end{array}\right),\] being \[S^{1/2}=\left(\begin{array}{cc}s_{1}&0\\ 0&s_{2}\end{array}\right), \tag{43}\] with \[s_{1} = \left(\begin{array}{cc}\frac{\varepsilon}{2\delta L}\sqrt{-3 \delta L^{2}-2\varepsilon\delta L+4t_{3}^{2}}&0\\ 0&1\end{array}\right),\] \[s_{2} = \left(\begin{array}{cc}\frac{\varepsilon}{2\delta R}\sqrt{ \delta R^{2}-2\varepsilon\delta R+4t_{2}^{2}}&0\\ 0&1\end{array}\right). \tag{44}\] In this way we obtain that \[h_{1} = \left(\begin{array}{cc}-\frac{1}{2}(\delta L+\varepsilon)& \frac{1}{2}{\bf i}\sqrt{-3\delta L^{2}-2\varepsilon\delta L+4t_{3}^{2}}\\ -\frac{1}{2}{\bf i}\sqrt{-3\delta L^{2}-2\varepsilon\delta L+4t_{3}^{2}}& \frac{1}{2}(3\delta L+\varepsilon)\end{array}\right),\] \[h_{2} = \left(\begin{array}{cc}\frac{1}{2}(\delta R-\varepsilon)& \frac{1}{2}{\bf i}\sqrt{\delta R^{2}-2\varepsilon\delta R+4t_{2}^{2}}\\ -\frac{1}{2}{\bf i}\sqrt{\delta R^{2}-2\varepsilon\delta R+4t_{2}^{2}}&\frac{1 }{2}(\delta R+\varepsilon)\end{array}\right). \tag{45}\] It is straightforward to prove that the eigenvalues of \(h_{1}\) and \(h_{2}\) are just \(E_{1\pm}\) and \(E_{2\pm}\), respectively. Moreover, by making use of a second similarity transformation \(P\), the Hamiltonian \(h_{eff}\) can be arranged as \[\mathfrak{h} = P\,h_{eff}\,P^{-1}, \tag{46}\] \[= \left(\begin{array}{cccc}\delta L+\frac{\varepsilon}{2}&-t_{3}& 0&0\\ -t_{3}&-\frac{\varepsilon}{2}&0&0\\ 0&0&\frac{\varepsilon}{2}&-t_{2}\\ 0&0&-t_{2}&\delta R-\frac{\varepsilon}{2}\end{array}\right).\] where \[P= \left(\begin{array}{cc}p_{1}&0\\ 0&p_{2}\end{array}\right), \tag{47}\] is given by \[p_{1} = \left(\begin{array}{cc}\mathfrak{i}\frac{\varepsilon}{2t_{3}} \sqrt{-3\delta L^{2}-2\varepsilon\delta L+4t_{3}^{2}}&-\frac{1}{2t_{3}}(3 \delta L+2\varepsilon)\\ 0&1\end{array}\right),\] \[p_{2} = \left(\begin{array}{cc}\mathfrak{i}\frac{\varepsilon}{2t_{2}} \sqrt{\delta R^{2}-2\varepsilon\delta R+4t_{2}^{2}}&\frac{1}{2t_{2}}(\delta R -2\varepsilon)\\ 0&1\end{array}\right). \tag{48}\] Figure 6 depicts the spectrum of \(H_{e}\) and \(H_{eff}\) as a function of \(\varepsilon\). In Panel (a), the exact eigenvalues of \(H_{e}\) and the approximate values computed from (39) are displayed as a function of \(\varepsilon\) (solid and dashed lines, respectively). In Panel (b), we plot the absolute value of the difference between the exact and the approximate eigenvalue in units of the maximum or minimum absolute value of the exact solution at the point where each band avoids the crossing. ## 5 Conclusions and outlook In this work, we have obtained the analytical expression for the spectrum of a family of \(\mathcal{PT}\)-symmetric Hamiltonians defined in terms of the generators of the non Figure 6: The Figure depicts the spectrum of \(H_{e}\) and \(H_{eff}\) as a function of \(\varepsilon\).In Panel (a), the exact eigenvalues of \(H_{e}\) and the approximate values computed from (39) are displayed as a function of \(\varepsilon\) (solid and dashed lines, respectively). In Panel (b), we plot the absolute value of the difference between the exact and the approximate eigenvalue in units of the maximum or minimum absolute value of the exact solution at the point where each band avoids the crossing. 
standard \(U_{z}(sl(2,R))\) quantum algebra under a generic finite-dimensional irreducible representation of the latter [14, 15, 16, 37]. By generalising [16], we have presented a boson realisation of the generators of the \(U_{z}(sl(2,R))\) algebra such that the co-product map and the commutation relations become invariant under the \({\cal PT}\)-transformation. In terms of these operators, we have introduced two families of \({\cal PT}\)-symmetry Hamiltonians, given by (24) and (32). We have shown that the spectrum of the Hamiltonian \(H\) in (24) exhibits different properties depending on the relative signs of the parameters \(\mu_{\pm}\). When \({\rm sign}(\mu_{+})={\rm sign}(\mu_{-})\) the spectrum of \(H\) of (24) is real. Nevertheless, when \({\rm sign}(\mu_{+})=-{\rm sign}(\mu_{-})\), the spectrum of \(H\) can include complex conjugate pairs of eigenvalues. Thus, we have two different dynamical phases, the exact \({\cal PT}\)-symmetry phase for \(\mu_{0}^{2}+\mu_{+}\mu_{-}>0\) with real energies, and the one for the broken \({\cal PT}\)-symmetry phase for \(\mu_{0}^{2}+\mu_{+}\mu_{-}<0\) consisting in pairs of complex conjugate eigenvalues. The boundary between these phases, given by \(\mu_{0}^{2}+\mu_{+}\mu_{-}=0\), is formed by EPs. At these points, two or more eigenvalues are degenerated and their eigenvectors are coalescent. On the other hand, the spectrum of the Hamiltonian defined in (32) has been shown to consist, for real parameters, of real eigenvalues. As a characteristic feature of this spectrum, we have illustrated the appearance of bands consisting of pairs of eigenvalues, and we have studied the relation of the parameters of the model with the gap between such bands. Remarkably enough, this particular band structure has suggested the definition of a non-standard quantum algebra effective model for the spectrum of a realistic system of three-electron hybrid qubits based on GaAs asymmetric double quantum dots [19]. In fact, by identifying the deformation parameter \(z\) with the detuning \(\varepsilon\) of the system, the spectrum of the effective Hamiltonian (41) provides an excellent approximation to the energies of the actual physical system. Work is in progress concerning the analytical spectra for more general \({\cal PT}\)-symmetric Hamiltonians written in terms of the generators of the \(U_{z}(sl(2,R))\) algebra. Also, their possible role as effective models for other quantum systems beyond the one here presented where the Hopf algebra deformation parameter \(z\) had a neat physical interpretation. ## Appendix A In what follows, we summarise the basic similarity transformations of the generators of the \(sl(2,\mathbb{R})\) Lie algebra (2): \[{\rm e}^{\alpha L_{+}}L_{-}{\rm e}^{-\alpha L_{+}} = \alpha(L_{0}-\alpha L_{+})+L_{-},\] \[{\rm e}^{\alpha L_{+}}L_{0}{\rm e}^{-\alpha L_{+}} = L_{0}-2\alpha L_{+},\] \[{\rm e}^{\alpha L_{-}}L_{+}{\rm e}^{-\alpha L_{-}} = -\alpha(L_{0}+\alpha L_{-})+L_{+},\] \[{\rm e}^{\alpha L_{-}}L_{0}{\rm e}^{-\alpha L_{-}} = L_{0}+2\alpha L_{-},\] \[{\rm e}^{\alpha L_{0}}L_{+}{\rm e}^{-\alpha L_{0}} = {\rm e}^{2\alpha}L_{+},\] \[{\rm e}^{\alpha L_{0}}L_{-}{\rm e}^{-\alpha L_{0}} = {\rm e}^{-2\alpha}L_{-}\,. 
\tag{44}\] For the \(U_{z}(sl(2,\mathbb{R}))\) quantum algebra (7) we have \[{\rm e}^{\alpha J_{+}^{(z)}}J_{-}^{(z)}{\rm e}^{-\alpha J_{+}^{(z)}} = \alpha\left(J_{0}^{(z)}-\alpha f(J_{+}^{(z)})\right)+J_{-}^{(z)},\] \[{\rm e}^{\alpha J_{+}^{(z)}}J_{0}^{(z)}{\rm e}^{-\alpha J_{+}^{(z)}} = J_{0}^{(z)}-2\alpha f(J_{+}^{(z)}),\] \[{\rm e}^{\alpha J_{-}^{(z)}}J_{0}^{(z)}{\rm e}^{-\alpha J_{-}^{(z)}} = -d_{\alpha}\left({\rm e}^{\alpha J_{-}^{(z)}}J_{+}^{(z)}{\rm e}^{ -\alpha J_{-}^{(z)}}\right),\] \[{\rm e}^{\alpha J_{0}^{(z)}}J_{+}^{(z)}{\rm e}^{-\alpha J_{0}^{(z )}} = \sum_{n=1}^{\infty}\frac{(-2z)^{n-1}}{n}\left(1-(-2{\rm e}^{\alpha }\sinh(\alpha))^{n})(f(J_{+}^{(z)})\right)^{n},\] \[{\rm e}^{\alpha J_{0}^{(z)}}J_{-}^{(z)}{\rm e}^{-\alpha J_{0}^{(z )}} = {\rm e}^{-2\alpha}J_{-}^{(z)}+z{\rm e}^{-\alpha}\sinh(\alpha)J_{z} ^{(z)2},\] \[{\rm e}^{\alpha J_{0}^{(z)}}f(J_{+}^{(z)}){\rm e}^{-\alpha J_{0}^ {(z)}} = \sum_{n=1}^{\infty}{\rm e}^{\alpha}\left(4z\sinh(\alpha)\right)^{ n-1}\left({\rm e}^{\alpha}f(J_{+}^{(z)})\right)^{n}, \tag{10}\] where the function \(f\) is given by \[f(J_{+}^{(z)})=\frac{1}{2}\ J_{+}^{(z)}\ [J_{0}^{(z)},J_{+}^{(z)}]. \tag{11}\] ## Appendix B We shall prove that the coproduct (\(\Delta\)), the counit (\(\varepsilon\)), antipode (\(\gamma\) ) maps and the commutation rules amongst the operators \(\{J_{0}^{(z)},J_{+}^{(z)},J_{-}^{(z)}\}\) have the same structure as those of \(\{j_{0}^{(z)},j_{+}^{(z)},j_{-}^{(z)}\}\). Let us start with the Hopf structure of the operators \(\{j_{0}^{(z)},j_{+}^{(z)},j_{-}^{(z)}\}\): \[\Delta(j_{0}^{(z)}) = 1\otimes j_{0}^{(z)}+j_{0}^{(z)}\otimes{\rm e}^{2zj_{+}^{(z)}},\] \[\Delta(j_{-}^{(z)}) = 1\otimes j_{-}^{(z)}+j_{-}^{(z)}\otimes{\rm e}^{2zj_{+}^{(z)}},\] \[\Delta(j_{+}^{(z)}) = 1\otimes j_{+}^{(z)}+j_{+}^{(z)}\otimes 1,\] \[\varepsilon(X) = 0,\quad X\in\{j_{0}^{(z)},j_{+}^{(z)},j_{-}^{(z)}\},\] \[\gamma(j_{0}^{(z)}) = -j_{0}^{(z)}{\rm e}^{-2zj_{+}^{(z)}},\] \[\gamma(j_{-}^{(z)}) = -j_{-}^{(z)}{\rm e}^{-2zj_{+}^{(z)}},\] \[\gamma(j_{+}^{(z)}) = -j_{+}^{(z)}, \tag{12}\] we shall write the maps for \(\{J_{0}^{(z)},J_{+}^{(z)},J_{-}^{(z)}\}\) in terms of \(\{j_{0}^{(z)},j_{+}^{(z)},j_{-}^{(z)}\}\): \[\Delta(J_{0}^{(z)}) = \Delta(j_{0}^{(-1{\bf z})})=1\otimes j_{0}^{(-1{\bf z})}+j_{0}^{(- 1{\bf z})}\otimes{\rm e}^{2(-{\bf i}{\bf z})j_{+}^{(-1{\bf i}_{z})}}\] \[= 1\otimes J_{0}^{(z)}+J_{0}^{(z)}\otimes{\rm e}^{2zJ_{+}^{(z)}},\] \[\Delta(J_{-}^{(z)}) = \Delta({\bf i}j_{-}^{(-{\bf i}{\bf z})})=1\otimes{\bf i}j_{-}^{(- 1{\bf i}{\bf z})}+{\bf i}j_{-}^{(-{\bf i}{\bf z})}\otimes{\rm e}^{2(-{\bf i}{ \bf z})(j_{+}^{(-{\bf i}{\bf z})})}\] \[= 1\otimes J_{-}^{(z)}+J_{-}^{(z)}\otimes{\rm e}^{2zJ_{+}^{(z)}},\] \[\Delta(J_{+}^{(z)}) = 1\otimes(-{\bf i}j_{+}^{(-{\bf i}{\bf z})})+(-{\bf i}j_{+}^{(-{ \bf i}{\bf z})})\otimes 1\] \[= 1\otimes J_{+}^{(z)}+J_{+}^{(z)}\otimes 1,\] \[\varepsilon(X) = 0,\quad X\in\{J_{0}^{(z)},J_{+}^{(z)},J_{-}^{(z)}\},\] \[\gamma(J_{0}^{(z)}) = \gamma(j_{0}^{(-{\bf i}{\bf z})})=-j_{0}^{(-{\bf i}{\bf z})}{\rm e }^{-2(-{\bf i}{\bf z})j_{+}^{(-{\bf i}{\bf z})}}\] \[= -J_{0}^{(z)}{\rm e}^{-2zJ_{+}^{(z)}},\] \[\gamma(J_{-}^{(z)}) = \gamma({\bf i}j_{-}^{(-{\bf i}{\bf z})})=-{\bf i}j_{-}^{(-{\bf i} {\bf z})}{\rm e}^{-2(-{\bf i}{\bf z})j_{+}^{(-{\bf i}{\bf z})}}\] \[= -J_{-}^{(z)}{\rm e}^{-2zJ_{+}^{(z)}},\] \[\gamma(J_{+}^{(z)}) = \gamma(-{\bf i}j_{+}^{(-{\bf i}{\bf z})})=-(-{\bf i})j_{+}^{(-{\bf i }{\bf z})} \tag{14}\] \[= -J_{+}^{(z)}\,.\] For the commutation relations, we have: \[[J_{0}^{(z)},J_{+}^{(z)}] = [j_{0}^{(-{\bf i}{\bf 
z})},-{\bf i}j_{+}^{(-{\bf i}{\bf z})}] \tag{15}\] \[= -{\bf i}\frac{{\rm e}^{\frac{2(-{\bf i}{\bf z})j_{+}^{(-{\bf i}{ \bf z})}}{-{\bf i}{\bf z}}}-1}{\rm e}\] \[= \frac{{\rm e}^{2zJ_{+}^{(z)}}-1}{z},\] \[[J_{0}^{(z)},J_{-}^{(z)}] = [j_{0}^{(-{\bf i}{\bf z})},{\bf i}j_{-}^{({\bf i}{\bf z})}]\] \[= {\bf i}(-2j_{-}^{(-{\bf i}{\bf z})}+(-{\bf i}z)(j_{0}^{(-{\bf i} {\bf z})})^{2})\] \[= -2{\bf i}j_{-}^{(-{\bf i}{\bf z})}+z(j_{0}^{(-{\bf i}{\bf z})})^{2}\] \[= -2J_{-}^{(z)}+z(J_{0}^{(z)})^{2},\] \[[J_{+}^{(z)},J_{-}^{(z)}] = [-{\bf i}j_{+}^{(-{\bf i}{\bf z})},-{\bf i}j_{-}^{(-{\bf i}{\bf z })}]\] (16) \[= j_{0}^{(-{\bf i}{\bf z})}\] \[= J_{0}^{(z)}.\] Next, we shall prove the invariance of the coproduct and the commutation relations under a \({\cal PT}\) symmetry transformation. Let us summarise the transformation properties of the different operators and scalars under \({\cal PT}\)-symmetry: \[j_{0}^{(z)} \rightarrow \quad j_{0}^{(-z)},\] \[j_{\pm}^{(z)} \rightarrow -j_{\pm}^{(-z)},\] \[{\bf i} \rightarrow -{\bf i},\] \[J_{0}^{(z)} \to J_{0}^{(z)},\] \[J_{\pm}^{(z)} \to J_{\pm}^{(z)}. \tag{17}\] Therefore we have: \[(\mathcal{PT})\Delta(J_{0}^{(z)})(\mathcal{PT})^{-1} = (\mathcal{PT})(1\otimes J_{0}^{(z)}+J_{0}^{(z)}\otimes\rme^{2zJ_{+} ^{(z)}})(\mathcal{PT})^{-1}\] \[= 1\otimes(J_{0}^{(z)})+(J_{0}^{(z)})\otimes\rme^{2z(J_{+}^{(z)})}\] \[= \Delta(J_{0}^{(z)}),\] \[(\mathcal{PT})\Delta(J_{-}^{(z)})(\mathcal{PT})^{-1} = (\mathcal{PT})(1\otimes J_{-}^{(z)}+J_{-}^{(z)}\otimes\rme^{2zJ_{+ }^{(z)}})(\mathcal{PT})^{-1}\] \[= 1\otimes(J_{-}^{(z)})+(J_{-}^{(z)})\otimes\rme^{2z(J_{+}^{(z)})}\] \[= \Delta(J_{-}^{(z)}),\] \[(\mathcal{PT})\Delta(J_{+}^{(z)})(\mathcal{PT})^{-1} = (\mathcal{PT})(1\otimes J_{+}^{(z)}+J_{+}^{(z)}\otimes 1)( \mathcal{PT})^{-1}\] \[= 1\otimes(J_{+}^{(z)})+(J_{+}^{(z)})\otimes 1\] \[= \Delta(J_{+}^{(z)}),\] In a similar way, we can show that the commutation relations are also invariant under \(\mathcal{PT}\)-symmetry transformations: \[(\mathcal{PT})[J_{0}^{(z)},J_{+}^{(z)}](\mathcal{PT})^{-1} = (\mathcal{PT})\left(\frac{\rme^{2zJ_{+}^{(z)}}-1}{z}\right)( \mathcal{PT})^{-1}\] \[= \frac{\rme^{2zJ_{+}^{(z)}}-1}{z}\] \[= [J_{0}^{(z)},J_{+}^{(z)}]\] \[(\mathcal{PT})[J_{0}^{(z)},J_{-}^{(z)}](\mathcal{PT})^{-1} = (\mathcal{PT})(-2J_{-}^{(z)}+z\left(J_{0}^{(z)}\right)^{2})( \mathcal{PT})^{-1} \tag{2.6}\] \[= -2J_{-}^{(z)}+z\left(J_{0}^{(z)}\right)^{2}\] \[= [J_{0}^{(z)},J_{-}^{(z)}],\] \[(\mathcal{PT})[J_{+}^{(z)},J_{-}^{(z)}](\mathcal{PT})^{-1} = (\mathcal{PT})(J_{0}^{(z)})(\mathcal{PT})^{-1},\] \[= J_{0}^{(z)}\] \[= [J_{+}^{(z)},J_{-}^{(z)}].\] ## Appendix C The coefficients of the characteristic polynomial \(p(\lambda)\) of Eq.(38) are given by: \[c_{3}(\varepsilon) = -\delta_{L}-\delta_{R},\] \[c_{2}(\varepsilon) = \frac{1}{2}\left(\varepsilon(\delta_{R}-\delta_{L})-2\left(- \delta_{L}\delta_{R}+t_{1}^{2}+t_{2}^{2}+t_{3}^{2}+t_{4}^{2}\right)- \varepsilon^{2}\right),\] \[c_{1}(\varepsilon) = \frac{1}{4}\varepsilon^{2}(\delta_{L}+\delta_{R})+t_{1}^{2}( \delta_{L}+\delta_{R})+\delta_{L}t_{2}^{2}+\delta_{R}t_{3}^{2},\] \[c_{0}(\varepsilon) = \frac{1}{16}\left(\varepsilon^{4}+2\varepsilon^{3}(\delta_{L}- \delta_{R})+4\varepsilon^{2}\left(\delta_{L}\delta_{R}+t_{1}^{2}+t_{2}^{2}+t_{ 3}^{2}+t_{4}^{2}\right)\right.\] \[\left.+8\varepsilon\left(\delta_{L}\left(t_{1}^{2}+t_{2}^{2} \right)-\delta_{R}\left(t_{1}^{2}+t_{3}^{2}\right)\right)+16\left(\left(t_{2}t _{3}-t_{1}t_{4}\right)^{2}-\delta_{L}\delta_{R}t_{1}^{2}\right)\right).\] The exact expression for the roots of the quartic equation 
\(p(\lambda)=0\) can be found in [38]. ## Acknowledgements A.B. has been partially supported by Agencia Estatal de Investigacion (Spain) under grant PID2019-106802GB-I00/AEI/10.13039/501100011033, and by the Q-CAYLE Project funded by the Regional Government of Castilla y Leon (Junta de Castilla y Leon) and by the Ministry of Science and Innovation MICIN through the European Union funds NextGenerationEU (PRTR C17.I1). M.R. is grateful to the Universidad de Burgos for its hospitality. M.R. and R.R. have been partially supported by the grant 11/X982 of the University of La Plata (Argentina).
In this paper, $\mathcal{PT}$-symmetric Hamiltonians defined on the quantum $sl(2,\mathbb R)$ algebra are presented. We express the spectrum of these non-Hermitian Hamiltonians in terms of the generators of the non-standard Hopf algebra deformation of $sl(2,\mathbb R)$. By using a boson representation of the generators of $U_{z}(sl(2, \mathbb R))$, the coproduct and the commutation relations of the quantum algebra are shown to be invariant under $\mathcal{PT}$ transformations. With these operators, several finite-dimensional $\mathcal{PT}$-symmetric Hamiltonians are constructed, and their spectra are obtained analytically in arbitrary dimension. In particular, the appearance of exceptional points in the space of model parameters is shown, separating the $\mathcal{PT}$-symmetric and broken-$\mathcal{PT}$ regimes.
2309.05116
Critical-like gelation dynamics in cellulose nanocrystal suspensions
We use time-resolved mechanical spectroscopy to offer a detailed picture of the gelation dynamics of cellulose nanocrystal (CNC) suspensions following shear cessation in the presence of salt. CNCs are charged, rodlike colloids that self-assemble into various phases, including physical gels serving as soft precursors for biosourced composites. Here, a series of linear viscoelastic spectra acquired across the sol-gel transition of CNC suspensions are rescaled onto two master curves, that correspond to a viscoelastic liquid state prior to gelation and to a soft solid state after gelation. These two states are separated by a critical gel point, where all rescaling parameters diverge in an asymmetric fashion, yet with exponents that obey hyperscaling relations consistent with previous works on isotropic colloids and polymer gels. Upon varying the salt content, we further show that these critical-like dynamics result in both time-connectivity and time-concentration superposition principles.
Lise Morlet-Decarnin, Thibaut Divoux, Sebastien Manneville
2023-09-10T19:19:23
http://arxiv.org/abs/2309.05116v2
# Critical-like gelation dynamics in cellulose nanocrystal suspensions ###### Abstract We use time-resolved mechanical spectroscopy to offer a detailed picture of the gelation dynamics of cellulose nanocrystal (CNC) suspensions following shear cessation in the presence of salt. CNCs are charged, rodlike colloids that self-assemble into various phases, including physical gels serving as soft precursors for biosourced composites. Here, series of linear viscoelastic spectra acquired across the sol-gel transition of CNC suspensions are rescaled onto two master curves, that correspond to a viscoelastic liquid state prior to gelation and to a soft solid state after gelation. These two states are separated by a critical gel point, where all rescaling parameters diverge in an asymmetric fashion, yet with exponents that obey hyperscaling relations consistent with previous works on isotropic colloids and polymer gels. Upon varying the salt content, we further show that these critical-like dynamics result in both time-connectivity and time-concentration superposition principles. ## 1 Main text Colloidal nanocrystals are rodlike crystalline clusters of atoms with sizes ranging from tens to a few hundred nanometers [1]. These colloids with tunable surface chemistry can be either synthesized or extracted from natural products such as biopolymers. When dispersed into a suspending fluid, colloidal nanocrystals can self-assemble into micro- or meso-structures with outstanding optical and mechanical properties relevant for applications in optics, electronics, sensing, and biomedicine [2, 3]. Several applications, such as catalysis and optoelectronics, rely on the formation of physical gels, i.e., space-spanning networks of colloidal nanocrystals, which behave mechanically as soft solids. While much is known about the different ways and means to induce gelation in colloidal nanocrystals [4, 5, 6], very few studies have characterized their gelation dynamics and the emergence of solidlike properties upon their self-assembly. Yet, understanding such dynamics is crucial for tailoring the microstructure of nanocrystal gels serving as soft precursors for harder materials. Here, we perform a time-resolved mechanical spectroscopy study of the sol-gel transition in cellulose nanocrystals (CNCs). CNCs are biosourced, biodegradable, and biocompatible nanocrystals, which consist in rigid, negatively charged rodlike particles of typical length 100-500 nm, and diameter 5-20 nm [1, 7, 8, 9, 10, 11, 12, 13]. Aqueous dispersions of CNCs display a rich phase diagram including liquid crystalline phases, gels, and glasses [14, 15]. In practice, in dilute CNC suspensions, gelation is induced by adding salt, hence screening the electrostatic repulsion between the CNCs that aggregate via hydrogen bonds and van der Waals interactions [16, 17, 18]. In this letter, using time-resolved mechanical spectroscopy, we aim to provide a detailed dynamical picture of the sol-gel transition of CNC dispersions following flow cessation in the presence of salt. Based on a _time-connectivity_ superposition principle, viscoelastic spectra acquired across the sol-gel transition are rescaled onto two remarkable master curves that extend on each side of a critical gel point. These master curves are compactly described by two fractional mechanical models, which correspond respectively to a viscoelastic liquid state and to a soft solid state, and which share a common element capturing the power-law rheology of the CNC dispersion at the gel point. 
Moreover, varying the salt content within the CNC dispersion reveals that the pre-gel viscoelastic liquid and the post-gel viscoelastic solid can also be rescaled onto two universal master curves, following a _time-concentration_ superposition principle. Finally, we discuss the exponents that characterize the present critical-like gelation dynamics in view of the literature on other chemical and physical gels. CNC gels are prepared using a commercial 6.4 wt. % aqueous suspension of CNCs extracted from wood (CelluForce). The suspension is diluted with salt water to obtain samples containing 2 wt % of CNCs and salt (NaCl, Merck) in amounts ranging from 12 mM to 22 mM. The sample is homogenized under high shear using mechanical stirring at 2070 rpm (IKA RW 20 Digital mixer equipped with an R1402 blade dissolver) before, during, and after salt addition for about 5 min at each step. Finally, the sample is stored in the fridge for 24 h before being used. At such a low concentration as 2 wt % in CNCs, and in the above range of salt concentrations, CNCs are expected to form colloidal gels [14, 15, 16]. The mechanical properties of the present CNC suspensions are probed during gelation using a stress-controlled rheometer (AR-G2, TA Instruments) equipped with a smooth cylindrical Taylor-Couette geometry (height 58 mm, inner rotating cylinder of radius 24 mm, outer fixed cylinder of radius 25 mm, and gap \(e=1\) mm). The cell is closed by a homemade lid, and the temperature is controlled to \(T=23\pm 0.1\)\({}^{\circ}\)C, thanks to a water circulation around the cell. This setup allows us to study the same sample over several hours without any artifact due to evaporation. After being loaded in the shear cell, each sample is first fully fluidized under a high shear rate \(\dot{\gamma}=1000\) s\({}^{-1}\) during 60 s, before being quenched by setting \(\dot{\gamma}=0\), which defines the Figure 1: Time-resolved mechanical spectroscopy of the sol-gel transition in a CNC suspension. (a) Dependence of the loss factor \(\tan\delta\) on the frequency \(\omega\) at different points in time (\(t=\)10 to 8000 s, from yellow to dark blue) across the sol-gel transition of a 2 wt. % CNC suspension containing 15 mM of NaCl. The gel point is highlighted by the horizontal dashed line. (b) Master curve for the loss factor \(\tan\delta\) vs reduced frequency \(\tilde{\omega}=a(t)\omega\). The red curves show the best fits of the data respectively by a fractional Maxwell model for \(t<t_{g}\) (upper curve) and by a fractional Kelvin-Voigt model for \(t>t_{g}\) (lower curve). (c) Time dependence of the shift factor \(a\). The initial value is arbitrarily taken as \(a(0)=1\) s.rad\({}^{-1}\). The vertical dashed line highlights the gelation time \(t_{g}=1820\) s. Inset: \(a\) vs \(\varepsilon=|t-t_{g}|/t_{g}\) in logarithmic scales. The red curves in both the main graph and the inset show the best power-law fits of the data, \(a\sim\varepsilon^{-y}\), with exponents \(y_{l}=10.3\) for \(t<t_{g}\) and \(y_{g}=7.1\) for \(t>t_{g}\). time origin \(t=0\) s. Upon cessation of shear, the initially liquid CNC suspension slowly re-assembles into a physical gel. This sol-gel transition is monitored over up to \(5\times 10^{4}\) s thanks to time-resolved mechanical spectroscopy [19]. In practice, cycles of small-amplitude oscillatory stress measurements are performed over five discrete frequencies \(\omega\) ranging between 0.3 and 1.5 Hz. 
This yields one viscoelastic spectrum, defined by the elastic modulus \(G^{\prime}\) and the viscous modulus \(G^{\prime\prime}\) as a function of \(\omega\), every \(\delta t_{\rm exp}=5\) s. These frequencies are purposely chosen such that the sample properties do not evolve significantly over this time scale, i.e., \(\left(\delta t_{\rm exp}/G^{\prime}\right)\left(\partial G^{\prime}/\partial t \right)\ll 1\)[19, 20]. Figure 1 illustrates the gelation dynamics of a 2 wt % CNC suspension containing 15 mM of salt. The frequency dependence of the loss factor \(\tan\delta=G^{\prime\prime}/G^{\prime}\) is reported in Fig. 1(a) at various points in time across the sol-gel transition. This first allows us to identify the "true" gel point [21] defined as the time \(t_{g}=1820\) s when \(\tan\delta\) is frequency independent [see gray dashed line at \(\tan\delta=0.52\) in Fig. 1(a)]. Second, \(\tan\delta(\omega)\) shows two opposite trends on each side of the gel point: it decreases with \(\omega\) for \(t<t_{g}\), while it increases with \(\omega\) for \(t>t_{g}\). We also note that the slope of \(\tan\delta\) vs \(\omega\) continuously goes from negative to positive across the gel point. This prompts us to apply a time-dependent multiplicative factor \(a(t)\) to the frequency \(\omega\), in order to collapse the \(\tan\delta\) data measured at different points in time onto a single curve [22], thus revealing the systematic dynamics of the gelation process. As shown in Fig. 1(b), this rescaling leads to two master curves, one on each side of the gel point, composed of more than 1600 spectra spanning nine orders of magnitude in rescaled dimensionless frequency \(\tilde{\omega}=a(t)\omega\), and describing the entire gelation process. This result points to a time-connectivity superposition principle, as previously identified in polymer gels and colloidal gels made of spherical particles [23, 24, 25, 26], and highlights the self-similar evolution of the sample viscoelastic properties on each side of the gel point. Quite remarkably, the time-dependent shift factor \(a(t)\) displays a power-law divergence in the vicinity of the gel point, with a critical exponent \(y_{l}=10.3\) for \(t<t_{g}\) and \(y_{g}=7.1\) for \(t>t_{g}\) [see Fig. 1(c)]. As shown in Fig. 2, we further construct a master curve for both viscoelastic moduli \(G^{\prime}\) and \(G^{\prime\prime}\) vs \(\tilde{\omega}\) by shifting each instantaneous viscoelastic spectrum vertically thanks to a multiplicative coefficient \(b(t)\). These master curves span over five orders of magnitude in rescaled moduli \(\tilde{G}^{\prime}=b(t)G^{\prime}\) and \(\tilde{G}^{\prime\prime}=b(t)G^{\prime\prime}\), which confirms that time-connectivity superposition applies. Here again, the shift factor \(b(t)\) follows critical-like dynamics around \(t_{g}\), yet with exponents \(z_{l}=3.0\) for \(t<t_{g}\) and \(z_{g}=1.9\) for \(t>t_{g}\) that are about three times smaller than those found for \(a(t)\) [see inset in Fig. 2]. Moreover, the nine orders of magnitude covered in Figure 2: Master curves obtained by rescaling the elastic modulus \(G^{\prime}\) (\(\bullet\)) and the viscous modulus \(G^{\prime\prime}\) (\(\triangle\)) by the same multiplicative factor \(b(t)\), i.e., \(\tilde{G}^{\prime}=b(t)G^{\prime}\) and \(\tilde{G}^{\prime\prime}=b(t)G^{\prime\prime}\). The horizontal axis is the rescaled frequency \(\tilde{\omega}=a(t)\omega\) as defined in Fig. 1. 
The red lines show the best fits of the data by a fractional Maxwell model for \(t<t_{g}\) and by a fractional Kelvin-Voigt model for \(t>t_{g}\) (see sketches of the mechanical models as insets). Inset: time dependence of the shift factor \(b\). The initial value is arbitrarily taken as \(b(0)=1\) Pa\({}^{-1}\). The red curves show the best power-law fits of the data, \(b\sim\varepsilon^{-z}\) where \(\varepsilon=|t-t_{g}|/t_{g}\), with exponents \(z_{l}=3.0\) for \(t<t_{g}\) and \(z_{g}=1.9\) for \(t>t_{g}\). The vertical dashed line highlights the gelation time \(t_{g}=1820\) s. Same experiment as in Fig. 1. rescaled frequencies indicate a very wide range of distinct relaxation processes in the sample microstructure. Such a broad relaxation spectrum is often compactly described by fractional models,[25, 27, 28] which introduce "spring-pots" as key rheological elements. A spring-pot is defined by a constitutive equation that relates the stress \(\sigma\) and the strain \(\gamma\) through a fractional derivative,[27] \(\sigma=\mathbb{G}\,d^{\beta}\gamma/dt^{\beta}\), where \(\mathbb{G}\) is a "quasi-property" with dimension Pa.s\({}^{\beta}\), and \(\beta\in[0,1]\) is the order of the derivative. In the limit \(\beta\to 0\) (resp. \(\beta\to 1\)), the spring-pot corresponds to a purely elastic (resp. viscous) response. For \(0<\beta<1\), it displays a power-law viscoelastic spectrum \(G^{\prime}\sim G^{\prime\prime}\sim\omega^{\beta}\), and a frequency-independent phase angle \(\delta=\beta\pi/2\). Therefore, the spring-pot is ideally suited to model the response of viscoelastic materials referred to as "critical gels," that form self-similar percolated networks and display a power-law rheology at the gel point.[29, 30] In particular, the fractional approach was recently shown to offer a powerful tool to characterize the mechanical response of colloidal gels, from their critical gel point and beyond.[25, 30, 31] Here, we fit the master curves on each side of the gel point by two five-parameter fractional models, respectively a fractional Maxwell model for \(t<t_{g}\), which captures the liquid-like viscoelasticity of the CNC suspension prior to the gel point, and a fractional Kelvin-Voigt model for \(t>t_{g}\), which captures the solid-like viscoelastic behavior past the gel point (see, respectively, lower and upper sketches in Fig. 2). The reader is referred to the Supplemental Information for full mathematical details on the models. A crucial result is that both models share a common spring-pot element \((\mathbb{G},\beta)\), which is alone responsible for capturing the gel point, here with an exponent \(\beta=0.30\) that corresponds to a value of \(\tan(\beta\pi/2)=0.51\), which is fully consistent with the value of \(\tan\delta\) observed at the gel point in Fig. 1(a). The fits by the two models are shown with red lines in Fig. 2 for the rescaled viscoelastic moduli, and in Fig. 1(b) for the corresponding loss factor. The agreement between theory and experiment is excellent, which provides strong support for interpreting the gelation dynamics in terms of two consecutive fractional mechanical behaviors separated by a critical gel point. In order to explore the impact of the salt content on the master curves reported in Fig. 2, the above analysis was repeated for CNC dispersions with salt concentrations ranging between 12 mM and 22 mM. In all cases, we can unambiguously identify a critical gel point associated with a gelation time \(t_{g}\).
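Before turning to the effect of the salt content, the fractional description above can be made concrete with a minimal numerical sketch. This is not the authors' fitting code: it assumes the standard series (fractional Maxwell) and parallel (fractional Kelvin-Voigt) combinations of two spring-pots, uses the exponents \(\beta=0.30\), \(\alpha=0.64\), and \(\xi=0.19\) reported in the text, and sets the quasi-property prefactors and the frequency range to illustrative values only.

```python
import numpy as np

def springpot(omega, quasi_prop, exponent):
    """Complex modulus of a spring-pot: G*(w) = quasi_prop * (i*w)**exponent."""
    return quasi_prop * (1j * omega) ** exponent

def fractional_maxwell(omega, V, alpha, G, beta):
    """Two spring-pots in series: pre-gel viscoelastic liquid."""
    g1, g2 = springpot(omega, V, alpha), springpot(omega, G, beta)
    return g1 * g2 / (g1 + g2)

def fractional_kelvin_voigt(omega, K, xi, G, beta):
    """Two spring-pots in parallel: post-gel soft solid."""
    return springpot(omega, K, xi) + springpot(omega, G, beta)

omega = np.logspace(-5, 4, 400)      # rescaled (dimensionless) frequency
beta, alpha, xi = 0.30, 0.64, 0.19   # exponents reported in the text
G = V = K = 1.0                      # illustrative quasi-properties (not the fitted values)

for label, Gstar in [
    ("pre-gel, fractional Maxwell", fractional_maxwell(omega, V, alpha, G, beta)),
    ("post-gel, fractional Kelvin-Voigt", fractional_kelvin_voigt(omega, K, xi, G, beta)),
]:
    tan_delta = Gstar.imag / Gstar.real
    print(f"{label}: tan(delta) spans {tan_delta.min():.2f} to {tan_delta.max():.2f}")

# At the critical gel point a single spring-pot gives a frequency-independent
# loss factor, tan(delta) = tan(beta*pi/2) ~ 0.51 for beta = 0.30.
print(f"tan(beta*pi/2) = {np.tan(beta * np.pi / 2):.2f}")
```

With these illustrative parameters, \(\tan\delta\) decreases with frequency towards \(\tan(\beta\pi/2)\simeq 0.51\) in the fractional Maxwell case and increases towards it in the fractional Kelvin-Voigt case, reproducing the two opposite trends observed on each side of the gel point in Fig. 1(a).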
Upon increasing the salt concentration, the gelation dramatically accelerates due to the stronger screening of electrostatic interactions [see Supplemental Fig. S1(a)], as already reported not only in CNC suspensions,[17, 32] but also for other types of colloids.[33, 34] Yet, for all salt contents, a master curve similar to that of Fig. 2 can be built, for which the above fractional approach provides very good fits (see Supplemental Fig. S2). Strikingly, at the gel point, the power-law dependence of the viscoelastic moduli with frequency is independent of the salt content, with an exponent \(\beta=0.30\pm 0.03\) [see Supplemental Fig. S1(b)]. This demonstrates the robustness of both the time-connectivity superposition principle and the microstructure of the percolated network formed at the gel point to a change in the strength of electrostatic interactions between the CNCs. Indeed, the exponent \(\beta\) can be related to the fractal dimension \(d_{f}\) of the stress-bearing network at the gel point.[25, 35] Using the relation proposed in Ref.[35] for screened interactions, we obtain \(d_{f}=2.2\) for \(\beta=0.30\), which is compatible with independent neutron and light scattering measurements that yield \(1.6\lesssim d_{f}\lesssim 2.1\).[16] Furthermore, we note that beside \(\beta\), the fractional derivative orders \(\alpha\) and \(\xi\) of the two other spring-pots involved in our fractional models, which respectively control the low-frequency viscoelastic behavior of the viscoelastic liquid for \(t<t_{g}\) and of the soft solid for \(t>t_{g}\), do not significantly depend on the salt content either (see Supplemental Table 1). This suggests again rescaling all master curves, first by collapsing all loss factors \(\tan\delta(\tilde{\omega})\) thanks to a simple rescaling of \(\tilde{\omega}\) into \(\tilde{\omega}/\tilde{\omega}_{i}\) (see lower insets in Fig. 3), and then by normalizing both rescaled viscoelastic moduli \(\tilde{G}^{\prime}\) and \(\tilde{G}^{\prime\prime}\) with a factor \(\tilde{G}_{i}\), where \(i=l\) (\(i=g\) resp.) for the liquid (gel resp.) state. As shown in Fig. 3, the result is two remarkable universal master curves for the dynamics both before and after the gel point, spanning over twelve orders of magnitude in rescaled frequency, and four decades in viscoelastic moduli. These master curves are consistently nicely fitted by a fractional Maxwell model for \(t<t_{g}\) and by a fractional Kelvin-Voigt model for \(t>t_{g}\), with \(\beta=0.30\), \(\alpha=0.64\), and \(\xi=0.19\). The shift factors \(\tilde{G}^{\prime}_{i}\) and \(\tilde{\omega}_{i}\) appear to be linked by two different, non-trivial power laws with exponents of roughly \(1/2\) in the pre-gel state and \(1/4\) in the post-gel state (see upper insets in Fig. 3). Finally, the superposition principles revealed in the present experiments derive from the critical-like dynamics of the gelation process around the gel point that are characterized by four critical exponents, which values are independent of the salt content [see Supplemental Fig. S1(c)]: \(y_{l}=9.4\pm 0.5\), \(y_{g}=7.4\pm 0.5\), \(z_{l}=2.8\pm 0.2\), and \(z_{g}=2.1\pm 0.2\) when averaged over all concentrations in NaCl. These exponents are linked to those introduced classically in the literature on chemical and physical gels based on percolation theory [36, 37, 38, 39, 40, 41, 21, 24, 36, 42, 23, 24, 37, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42]. 
In particular, the exponents \(y_{l}\) and \(y_{g}\) associated with the frequency shift factor \(a(t)\) in Fig. 1 correspond to the divergence of the longest relaxation time in the system, respectively before and after the gel point. Here, while most previous works have postulated a symmetric divergence, i.e., \(y_{l}=y_{g}\), we find that the pre-gel exponent is systematically larger than its post-gel counterpart by about 20 %, out of the range of experimental uncertainty. A similar difference is found between the exponents \(z_{l}\) and \(z_{g}\) derived in Fig. 2 from the shift factor \(b(t)\) for viscoelastic moduli. Whether such asymmetry in the critical behavior close to gelation is specific to CNCs or general to rodlike colloids is an open issue. Moreover, \(z_{l}\) and \(z_{g}\) relate to the exponents associated with the pre-gel divergence of the zero-shear viscosity, \(\eta_{0}\sim\varepsilon^{-s}\), and with the post-gel growth of the zero-frequency elastic modulus, \(G_{e}\sim\varepsilon^{z}\), through \(z_{g}=z\) and \(z_{l}=y_{l}-s\). Based on similarity arguments, the exponents \(s\) and \(z\) were shown to be linked to the rheological exponent \(\beta\) at the gel point through two hyperscaling relations [38, 39], \(y_{l}=s/(1-\beta)\) and \(y_{g}=z/\beta\), which simply rewrite \(\beta=z_{l}/y_{l}=z_{g}/y_{g}\) in the present notations. Using the above average values of the various exponents, we get \(z_{l}/y_{l}\simeq 0.29\) and \(z_{g}/y_{g}\simeq 0.28\), very close to \(\beta\simeq 0.30\) and thus consistent with the predicted hyperscaling. To conclude, our results demonstrate that the gelation of CNC suspensions after shear cessation involves critical-like dynamics, where the salt content only drives the aggregation kinetics. While similar criticality has already been reported many times in chemical and physical gels, our experiments address the case of rodlike colloids for the first time thanks to a systematic rescaling approach, which does not require estimating a zero-shear viscosity and a zero-frequency modulus by extrapolating viscoelastic spectra at low frequencies. The present study also highlights several peculiarities, including asymmetry in the pre-gel and post-gel exponents and a surprisingly robust time-concentration superposition principle, which call for a microscopic description of the evolution of the colloidal nanorod network across the sol-gel transition, e.g., through time-resolved small-angle scattering. Such complementary microstructural information will help identify a possible universality class for gels made of colloidal nanorods based on the critical exponents and on the fractal features of the space-spanning network. Supporting figures and additional details on the fractional mechanical models, on the data analysis, and on the fitting procedures (PDF). The authors thank I. Capron, B. Jean, and F. Pignon for fruitful discussions. This work was funded by the Institut Universitaire de France (IUF). L. M.-D. also acknowledges financial support from Ecole Normale Superieure de Lyon.
Using time-resolved mechanical spectroscopy, we follow in detail the gelation dynamics of cellulose nanocrystal (CNC) suspensions after shear cessation in the presence of salt. CNCs are charged, rodlike colloids that self-assemble into various phases, including physical gels serving as soft precursors for biosourced composites. Here, the series of linear viscoelastic spectra acquired across the sol-gel transition of CNC suspensions is rescaled onto two master curves, which correspond to a viscoelastic liquid state prior to gelation and to a soft solid state after gelation. These two states are separated by a critical gel point, at which all rescaling parameters diverge.
2309.05670
Towards Supporting Sustainable Grocery Shopping through Joyful Technology: Annotated Portfolio of Speculative Ideas
A third of greenhouse gas emissions are attributable to the food sector. A shift in dietary habits could reduce these by half. Engaging and empowering consumers is vital to this critical shift; yet, if we get the framing wrong, we might cause distress or eco-anxiety, impeding initial engagement as well as longer-term diet change. Evoking joy is a powerful yet under-explored motivator to overcome psychological barriers and support pro-environmental attitudes. This pictorial presents the outcomes of a one-day workshop as a series of speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joy and sustainability in food choices. Our contribution will inspire HCI researchers and designers to reposition joy as a fundamental value to sustainability communication
Gözel Shakeri, Frederike Jung, Ferran Altarriba Bertran, Daniel Fernandez Galeote, Adrian Friday
2023-09-08T12:38:26
http://arxiv.org/abs/2309.05670v1
Towards Supporting Sustainable Grocery Shopping through Joyful Technology: Annotated Portfolio of Speculative Ideas ###### Abstract A third of greenhouse gas emissions are attributable to the food sector. A shift in dietary habits could reduce these by half. Engaging and empowering consumers is vital to this critical shift; yet, if we get the framing wrong, we might cause distress or eco-anxiety, impeding initial engagement as well as longer-term diet change. Evoking joy is a powerful yet under-explored motivator to overcome psychological barriers and support pro-environmental attitudes. This pictorial presents the outcomes of a one-day workshop as a series of speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joy and sustainability in food choices. Our contribution will inspire HCI researchers and designers to reposition joy as a fundamental value to sustainability communication. Behavior change, speculative design, persuasive technology, joy, sustainable HCI, sustainability communication, sustainable interaction design ## 1 Introduction A third of environmental degradation is attributable to the food sector [40]. Dietary change could reduce this by half [53]. Thus, eating more sustainably (i.e., consuming more plant-based, local, and seasonal foods) is of growing importance. Accordingly, nearly three-quarters of North-West Europeans [1, 2] think it is important for them to buy food that has a low environmental impact, yet only 7% regularly do [11, 5]. While there are clearly many system barriers beyond the individual consumer, engaging consumers in this change is more important than ever. Much research focuses on the provision of information during sustainable grocery shopping, in-store and online. Solutions range from sustainable recommender systems [57] and user-preference based systems [26] to incentivisation strategies [19, 37], and behavioural nudges [32, 37, 51]. However, there are two major issues with these approaches. First, existing solutions side-line positive emotional involvement. A growing body of literature argues that the widely-held view of "lack of information" about climate change is not the main obstacle to engagement [31]; rather, a lack of positive emotional involvement is. When exposed to environmental degradation [58], the primary emotional reactions we may feel are a sense of diminished control and powerlessness [2, 13, 14]. Mechanisms aimed at relieving us from these negative feelings are denial, rational distancing, apathy, and delegation [25]. Thereby, emotions function as _antecedent_ to engagement [47], obstructing sustainable behaviours. Additionally, emotions function as _consequence_ of engagement. Individuals who are particularly invested in sustainability are further impacted by negative feelings such as anxiety, distress, dread, guilt, shame, and frustration [34, 59, 60], reinforced by the notion that engagement is intrinsically utilitarian and denies self-indulgence. Ultimately, negative feelings before and after pro-environmental behaviours, and the shift of responsibility onto the individual and away from system drivers including affordability and access, cause disengagement with climate action altogether [41]. Interventions that only target factors such as knowledge, instead of attempting to overcome negative emotions, are unlikely to cause significant changes in socially and culturally embedded high-impact eating habits [5, 39]. 
We posit that _joy_1 may help consumers to find and maintain holistically meaningful ways to take action, thus promoting and sustaining pro-environmental efforts [25, 47, 56, 61]. Tools that elicit positive emotions, such as feeling amazed, cheerful, proud, and hopeful, have been shown to be significant predictors of purchase intentions, and can increase the probability of making a purchase [17, 48]. Overlooking the importance of joy when designing systems supporting sustainable food purchases may hinder their success [17, 49, 4]. Second, existing solutions utilise primarily (front-of-package) labels as a language to communicate a food's sustainability, focusing again on quantitative and symbolic representation rather than sparking joy. While eco-labels could elicit positive emotions such as pride over purchase behaviour [17, 36], eco-labels can lead to consumer confusion through information overload [63] as well as lack of information [20], decreased consumer trust and label credibility [21, 33], and ultimately to feelings of helplessness [34, 59, 60] once sustainable purchase intentions are abandoned [3, 16, 54]. More generally, the intention and impact of eco-labels and existing digital solutions do not extend beyond technology as guidance, as mediators, or as tools for reflection [29]. Despite potentially playing an important role in shaping people's climate-conscious decision-making, joy is clearly not sufficient, nor is there a one-size-fits-all approach [47]. The expression of joy depends on situational circumstances and individual variables, including one's personal understanding of joy [47]. Therefore, in this work, we explore what might be 'joyful elements' for sustainability signalling through a research-through-design workshop. The ubiquity of digital tools can cater to a multitude of factors by providing versatile media (e.g. apps, web-browser add-ons, Augmented Reality (AR) glasses) in versatile ways (e.g. in-store, online). Acknowledging the possibly pivotal role of digital technologies, we invited HCI researchers and designers to a workshop where we co-designed technology-mediated eco-labels which support consumers in pro-environmental decision-making. The overarching theme of the workshop was to rethink sustainability communication and to incorporate joy into it. This paper presents the outcomes of conversations that took place during the workshop, aimed at imagining the future of digital sustainability support. In particular, our contribution is twofold: first, we contribute an annotated portfolio of five speculative ideas for eco-labels that utilise emerging technologies in ways that may be both _effective_ and _joyful_. Second, we provide a close reading of these designs [9] to reflect on common qualities and themes of sustainability signalling. Aiming to 'reposition joy' as a fundamental aspect of sustainability signalling, this work presents researchers and designers in this field with tangible ideas on how to incorporate joy in sustainable grocery shopping. ## Background Joy is key in pro-environmental behaviour and must be considered for productive engagement with climate change. For example, the way we communicate about animal-derived products often highlights positive emotions, triggering re-experiences of eating and enjoying foods, often in social circumstances [39] (e.g., family barbecue), thereby guiding consumption processes such as habits and perceived pleasure [15, 42, 50]. 
In contrast, communication about sustainable foods often lacks enjoyment, focusing instead on informational and utilitarian factors such as the sustainability of the ingredients or the greater good for society and the planet [39]. Consequently, sustainability becomes a moral obligation towards others rather than an intrinsic value. This, once framed as an extrinsic source of motivation, can tie the behaviour to contingent and external dynamics of reward and punishment [44]. Then, associated negative emotions may function as a hindrance to future purchase behaviours. Evoking joy instead may be key to promoting productive engagement with climate change. The single most used technology to support sustainable grocery shopping is the food label. The basic idea of food label technology is to provide information, enable comparison, and give feedback [10, 46], using either abstract or concrete information, which is presented visually or through text, giving complex information about a product's characteristics, such as animal welfare standards, environmental impacts, and ethical wage labour, in a simplified form to make it easier for consumers to make informed decisions [4, 8, 21, 28]. HCI examples of eco-labels on food are EoFormed [6], Social Recipes [62], Envirofy [51], Nu-Food [38], Eco-nundrum [46], and Food Qualiculator [7]. These works investigated labels as a means of providing information tailored to users' own context and choices, without a direct focus on enabling joy. Beyond being primarily informational, labels are traditionally static. They are either printed onto food packaging, visualised in sculptures [46], or displayed with a product during technology-mediated shopping [52]. Static solutions operate from a one-size-fits-all mentality; pro-environmental consciousness [25], however, is a complex construct made up of environmental knowledge, values and attitudes, and emotional involvement. A single solution cannot cater to the multitude of knowledges and aspects of joy. In HCI, attempts to create dynamic labels exist (e.g. [26, 51]), but they do not consider joy, ignoring a key aspect necessary for successful sustainability support when grocery shopping. Finally, labels may elicit emotions, but often negative ones [17]. On the one hand, labels empower consumers to adhere to their sustainability goals, instilling a sense of pride through accomplishment [17]. On the other hand, labels do not address the distress experienced with regard to climate change engagement, such as the sacrifice of personal pleasure over global interests, etc. Invoking people's emotional responses more intentionally and actively may overcome these psychological (and societal) challenges. In summary, there exists a gap in the literature which investigates _effective_ and _joyful_ sustainability support. Digital technological solutions, to a larger extent than printed packaging labels, have the ability to display a multitude of personalised or emotionally appealing content and allow the shopper to interact with sustainability information. HCI researchers play a crucial role in designing, creating, and evaluating effective visualisations that encourage sustainable food choices, positive grocery shopping experiences, and ultimately reduce greenhouse gas emissions. This pictorial presents a series of five speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joyful and sustainable food choices. **Workshop** This pictorial is based on the results of a workshop held at [omitted]. 
It was based on the concept of "choosing (with) joy", in which participants explored the idea of joy in eco-labels, sustainable food communication, and technologies. After piloting the workshop with six participants in a prior setting, we then conducted the workshop in one day with 11 participants, including 5 of the co-authors. All participants worked in the field of HCI, as researchers or designers. Overall, the workshop consisted of three parts: examining the state-of-the-art in sustainability communication; designing speculative, digital tools for sustainability communication; and delving into the ideas and their underlying qualities. Together with the participants, we discussed and surfaced overall themes for joyful, technology-mediated sustainability communication. After the workshop, the authors iterated on this analysis. We revisited the common qualities and themes of sustainability signalling through a close reading of the speculative designs [9], and linked them back to existing literature, in an attempt to challenge and solidify the insights from the workshop. The result is a set of joyful speculative ideas, their design qualities and themes that inspire designers and HCI researchers to reclaim joy as a fundamental value for sustainable grocery shopping. # Be(e)hind the Story: multiple points of view _Be(e)hind the Story_ imagines a pot of honey bearing a label that can be scanned using a phone. The label is not only a symbol, but the drawing of a bee which asks directly: "Have you ever wondered how honey gets from me to you?". When the label is scanned, an AR environment appears on the screen, inviting the user to select one of the points of view involved in the production and distribution of the honey. The unique stories included involve humans, such as the beekeepers, but also other animal species such as bees. In addition, the user will be able to focus on the honey's point of view itself, including its distribution process. These are presented as videos sprinkled with moments of choice where users can decide whether they want more information on different issues, from animal welfare to carbon emissions. Overall, the system departs from statistical information to weave unique stories, as well as an overarching one, which are then tailored to the consumer according to their own preferences and other circumstances (e.g., their country, representing an end point of the distribution process that may be very different from others). In this way, _Be(e)hind the Story_ prioritises the consumer's autonomy, as well as their relatedness to various multi-species actors to whom they have, indeed, a deep connection: they are about to eat the fruit of their labour. By being able to choose the various stories of how the honey got to the consumer's particular city or shop, _Be(e)hind the Story_ aims to provoke reflection and a deeper engagement with what is behind the scenes. ## ECO Bloom: Personalized adaptation _Eco Bloom_ provides a layered overview of how a product's qualities align with a consumer's personal pro-environmental attitudes. Upon scanning a product, a colourful closed clover appears on the consumer's smart watch or phone. The closed clover shows an overall score (1-10). Depending on the value, the label is coloured in a traffic-light metaphor. Each folded leaf, surrounding the overall score, shows a symbol of a different eco-category that is important to the consumer, i.e. animal welfare, water consumption, etc. 
When tapping on one leaf, it folds open, revealing a detailed score for the individual category and follow-up information. Every score the consumer sees is geared towards their personal attitudes, i.e. what they consider as important and (un-)acceptable. As pro-environmental attitudes are highly individual, _Eco Bloom_ caters to the specific needs of users, providing personalised product feedback. Thereby, _Eco Bloom_ adjusts to any phase of an individual's journey towards sustainability. Moreover, a key quality of this label is that it does not overwhelm the consumer with information unnecessary to their interests or with too much information at once. The overall folded label fits onto the size of a smart watch display. Rather than presenting "one-size-fits-all" information, this type of layered label encourages individual perspectives and recommendations for sustainable shopping. ## A label of our own: community-driven eco-labeling _A label of our own_ is a smartphone app for crowdsourcing eco-labeling among consumers. It uses an AR system to enable consumers to target any product's packaging and augment it with meta-data such as: text-based annotations, drawings to hack or otherwise modify the visual appearance of the product, or ratings based on consumers' perception of sustainability. By adding content to emergent AR-based eco-labels, users contribute to a collectively-owned, multi-faceted rating of products - thereby claiming shared responsibility over the fiscalisation of those products' socio-ecological impact. Because the system is not owned by anyone in particular (the government, the retailer, the producer, the consumer), the content of the collectively crafted eco-labels can hardly be censored in a top-down manner. Rather, these eco-labels are the result of an emergent and evolving process of negotiation between consumers. As such, this design idea tackles an important issue with eco-labels: credibility. By displacing their creation _from_ producers and retailers to consumers, it reduces chances of greenwashing. From an experiential perspective, _A label of our own_ appeals to people's desire for sharing knowledge and opinion and, conversely, to their will to learn from others'. It also taps into the joy that can derive from engaging in creative, subversive activity with an intention of making a societal impact. It builds on existing traditions of activism such as guerrilla art, which use creative practice as a form of societal transformation through subversion of the status quo. Borrowing from that principle, _A label of our own_ reclaims consumers' voice when it comes to presenting products - if those products are labelled by producers in a non-truthful way, they will likely be called out by the community. Loud&Bold encourages joyful discovery through layers of information. Olfactory, auditory, and haptic feedback provide high-level sustainability information. Upon active search, however (i.e. turning the product around), consumers receive detailed information. The detailed text on the back provides radical transparency, increasing consumer trust and making consumers confident in their choice and themselves. _Loud&Bold_ is a technology which invades a consumer's physical, emotional, and rational space - loudly and boldly. Simultaneously, it asks consumers to be loud and bold about their choices, be it for good or ill, as the choice is made in public. 
Loud&Bold assesses the sustainability of its content, independent of food category, and can thereby be used for any packaged foods, given that the embedded technology within _Loud&Bold_ is sourced sustainably, with long-term usage in mind. Through its multimodal design, _Loud&Bold_ encourages consumers towards sustainability, but also raises the question of social acceptability, especially if an individual is not loud and bold, yet desires the "unsustainable" choice. Finally, the multimodal layers allow disabled people to engage in everyday sustainability (which is shockingly under-explored). Providing access to participation in sustainability beyond the "perfectly sighted" population might elicit feelings of control, confidence in choice, and equality, and ultimately buy-in by disabled communities, and therefore greater sustainability, environmentally and socially. ## UNPACKAGED: Disruptive Packaging _Unpackaged_ represents a retail system in which packaging and labels are intertwined. In it, producers are only allowed to design and use aesthetically pleasing packaging designs if their product meets all the necessary eco-certifications. The _Unpackaged_ eco-labels are graphical symbols imprinted on the package, but they can also be scanned using a digital device such as a mobile phone to show, via augmented reality, the story behind the product. The other side of this system are the products that do not comply with the certifications required to have an attractive presentation. In this case, products must come in bland, homogeneous packaging. In addition, the absence of the necessary eco-labels is also highlighted in the packaging, showing an empty space where they should be. Since this is a status-quo-disrupting system, it would likely need strong support and implementation at the policy level, similarly to tobacco and alcohol marketing. _Unpackaged_ aims to provoke curiosity through its bland packaging: Why do all these products look the same? After this first impression, the consumer could choose to engage in a closer exploration, in which they would easily find that eco-labels are missing. Once again, this absence aims to stimulate further learning about the reason for this absence.
A third of greenhouse gas emissions are attributable to the food sector. A change in dietary habits could reduce these by half. Engaging and empowering consumers is vital to this critical shift; however, if the framing is wrong, it may cause distress or eco-anxiety, hindering both initial engagement and longer-term dietary change. Evoking joy is a powerful yet still under-used motivator for overcoming psychological barriers and supporting pro-environmental attitudes. This pictorial presents the outcomes of a one-day workshop as a series of speculative ideas in the form of an annotated portfolio, highlighting design qualities and interaction mechanisms that afford joy and sustainability in food choices. Our contribution aims to inspire HCI researchers and designers to reposition joy as a fundamental value in sustainability communication.
2309.13012
Electric Autonomous Mobility-on-Demand: Jointly Optimal Vehicle Design and Fleet Operation
The advent of autonomous driving and electrification is enabling the deployment of Electric Autonomous Mobility-on-Demand (E-AMoD) systems, whereby electric autonomous vehicles provide on-demand mobility. Crucially, the design of the individual vehicles and the fleet, and the operation of the system are strongly coupled. Hence, to maximize the system-level performance, they must be optimized in a joint fashion. To this end, this paper presents a framework to jointly optimize the fleet design in terms of battery capacity and number of vehicles, and the operational strategies of the E-AMoD system, with the aim of maximizing the operator's total profit. Specifically, we first formulate this joint optimization problem using directed acyclic graphs as a mixed integer linear program, which can be solved using commercial solvers with optimality guarantees. Second, to solve large instances of the problem, we propose a solution algorithm that solves for randomly sampled sub-problems, providing a more conservative solution of the full problem, and devise a heuristic approach to tackle larger individual sub-problem instances. Finally, we showcase our framework on a real-world case study in Manhattan, where we demonstrate the interdependence between the number of vehicles, their battery size, and operational and fixed costs. Our results indicate that to maximize a mobility operator's profit, a fleet of small and light vehicles with battery capacity of 20 kWh only can strike the best trade-off in terms of battery degradation, fixed costs and operational efficiency.
Fabio Paparella, Theo Hofman, Mauro Salazar
2023-09-21T12:29:19
http://arxiv.org/abs/2309.13012v1
# Electric Autonomous Mobility-on-Demand: Jointly Optimal Vehicle Design and Fleet Operation ###### Abstract The advent of autonomous driving and electrification is enabling the deployment of Electric Autonomous Mobility-on-Demand (E-AMoD) systems, whereby electric autonomous vehicles provide on-demand mobility. Crucially, the design of the individual vehicles and the fleet, and the operation of the system are strongly coupled. Hence, to maximize the system-level performance, they must be optimized in a joint fashion. To this end, this paper presents a framework to jointly optimize the fleet design in terms of battery capacity and number of vehicles, and the operational strategies of the E-AMoD system, with the aim of maximizing the operator's total profit. Specifically, we first formulate this joint optimization problem using directed acyclic graphs as a mixed integer linear program, which can be solved using commercial solvers with optimality guarantees. Second, to solve large instances of the problem, we propose a solution algorithm that solves for randomly sampled sub-problems, providing a more conservative solution of the full problem, and devise a heuristic approach to tackle larger individual sub-problem instances. Finally, we showcase our framework on a real-world case study in Manhattan, where we demonstrate the interdependence between the number of vehicles, their battery size, and operational and fixed costs. Our results indicate that to maximize a mobility operator's profit, a fleet of small and light vehicles with battery capacity of 20 kW only can strike the best trade-off in terms of battery degradation, fixed costs and operational efficiency. Electric vehicles, Smart mobility, Simulation of transportation network, Optimization, Intelligent transportation systems. ## I Introduction Mobility-as-a-Service (MaaS) is a solution in the field of mobility that allows users to reserve and pay for several mobility services through a smartphone [1] without the need to personally own the used vehicle. These platforms may address the issues of sustainability and accessibility that mobility systems are currently facing by leveraging opportunities stemming from autonomous driving, connectivity and electrification, for instance with the deployment of Electric Autonomous Mobility-on-Demand (E-AMoD) systems, where electric autonomous vehicles provide mobility services to human users in an on-demand fashion, as shown in Fig. 1. Crucially, the system-level design of an E-AMoD fleet, regarding the number of vehicles, size of their batteries, and charging infrastructure, has a strong influence on both the operational strategies of the fleet and the fixed costs related to its deployment. For instance, a fleet where vehicles are equipped with a large battery size, and therefore have a longer range, provides more flexibility in the charging schedule. This aspect is important because it enables vehicles to wait until they are near a charging station to minimize the distance driven for recharging purposes, and to assign more vehicles to serving customers during periods of high demand, rather than continuously allocating the charging task to the fleet. Fig. 1: E-AMoD single-vehicle operation on the road graph (top), directed acyclic graph (DAG) representation (bottom-left, with non-selected possibilities in light blue), and energy DAG (E-DAG) representation (bottom-right). Each arc/node on the DAGs represents the correspondingly colored fastest path between the nodes on the road graph. 
Thereby, green arcs indicate charging during a transition. However, larger batteries also increase the cost and weight of each vehicle, leading to higher initial costs, greater energy consumption per distance driven and, given the same charging infrastructure, longer charging times. In contrast, a smaller battery size may lead to less flexibility in terms of operations at the potential advantage of lower fixed and energy costs. Therefore, to strike the best trade-off in terms of operational performance and fleet costs, the design and the operation of E-AMoD systems must be jointly studied. Against this background, this paper proposes a modeling and optimization framework to capture and jointly solve the optimal design and control problem for an E-AMoD system. _Literature Review:_ This work contributes to the research stream of vehicle-level design jointly optimized with the operation of E-AMoD systems, that we review in the following. A significant amount of work has been published on the operation of AMoD systems, with a variety of different objectives and methods, as shown by [2]. Some examples of the latter are queuing-theoretical models [3, 4, 5], agent-based models [6, 7, 8], vehicle routing problem (VRP) [9, 10, 11, 12, 13, 14, 15], and multi-commodity network flow models [16, 17, 18, 19]. In particular, network flow models have been leveraged to optimize the operation accounting for a wide range of factors such as congestion-aware routing [20], intermodality [21], and ride-pooling [22]. For electric fleets, the operational problem encompasses the charging schedule. To this end, fast solution algorithms based on acyclic graphs have been proposed by Yao et al. [9]. In [23, 24], the authors accounted for the coupling with the power grid via network flow models. Regarding E-AMoD, multi-layer network flow models inspired by [23] have been recently leveraged to optimize the charging station siting and sizing jointly with the operation of the fleet [18, 25]. Nevertheless, the majority of these papers do not focus on vehicle-level design aspects, if not via parametric studies of the vehicles. Usually, the vehicle is assumed to be given because it allows to pre-define the mass, energy consumption and autonomy range of the single vehicle. The design of AMoD systems has been investigated with methods ranging from Directed Acyclic Graphs (DAGs) [26], to fluidic models [27] which are treated as a linear time invariant problems. Wallar et al. [28, 29] devised an algorithm to capture vehicles with a different seat-capacity and optimize their number and operation for a ride-sharing AMoD environment, but without considering an electric fleet that needs to be recharged. In [30, 31], the authors investigated the VRP with time windows and heterogeneous fleet composition, but they did it by selecting from a pre-existing set of vehicles so that they would minimize a given objective. In conclusion, to the best of the authors' knowledge, there is a lack of an optimization framework that simultaneously optimizes the number of vehicles, their battery size, and the operation of the fleet. _Statement of Contributions:_ The contribution of this paper is threefold. First, we propose an optimization framework for E-AMoD systems based on DAGs. The optimization problem includes vehicle-level design variables of both the single vehicle unit and of the entire fleet itself. 
Second, to overcome tractability issues of large problem instances, we devise and analyze a method based on solving multiple randomly sampled sub-problems to draw the probability distribution of the design solution. This allows to find a slightly more conservative solution, in line with the design objective of the optimization problem. Third, we present our results on a real-world case study for Manhattan, NYC, USA, and we show the trade-offs between number of vehicles, battery size and electricity cost. A preliminary version of this paper was presented at the 2022 IEEE Conference on Decision and Control [26]. In this extended version, we carry out a broader literature review, include battery degradation in the model, analyze the quality of the solution obtained from the randomly sampled sub-problems, and provide a heuristic solution to increase their solvable size. Moreover, we conduct a sensitivity analysis on the battery capacity of the fleet to better capture the trade-offs between the design variables. _Organization:_ The rest of the paper is organized as follows: Section II introduces the optimization framework and its underlying assumptions. Section III provides a solution approach. The case study for Manhattan, NYC is discussed in Section IV. In the final Section V, we summarize the work, offer a discussion and suggest avenues for future research. ## II Optimization Problem Formulation This section formulates the optimal vehicle assignment and charge scheduling problem leveraging DAGs. Thereafter, we include the formulation of the objective function, constraints and variables, capturing the trade-off between number of vehicles, battery capacity of the single unit, cost to operate the fleet, and revenues generated by serving travel requests. ### _Road Network_ We model the transportation system as a directed graph \(\mathcal{G}^{\prime}=(\mathcal{V}^{\prime},\mathcal{A}^{\prime})\), where the set of arcs \(\mathcal{A}^{\prime}\) represents road links, the set of vertices \(\mathcal{V}^{\prime}\) contains intersections. We also indicate \(D_{mn}\) and \(T_{mn}\) as the distance and travel time, respectively, of road segments between road intersections \((m,n)\in\mathcal{A}^{\prime}\). We denote a set of travel requests by \(\mathcal{I}\) with \(i\in\mathcal{I}:=\{1,2,...,I\}\) the set of transportation requests. In order to model the demanded trips, let the triple \(r_{i}=(o_{i},d_{i},t_{i}^{\rm start})\) denote a requested trip, where \(t_{i}^{\rm start}\) is the requested pick-up time, whilst \(o_{i},d_{i}\in\mathcal{V}^{\prime}\) are the origin and destination nodes of request \(i\), respectively. In the area under consideration there are \(C\) charging stations, whereby each station \(c\in\mathcal{C}:=\{1,2...,C\}\) is located at vertex \(n_{c}\in\mathcal{V}^{\prime}\). For each arc in \(\mathcal{A}^{\prime}\) we assume the driving pattern and the travel time to be fixed and known in advance for each time of the day. Finally, we assume that vehicles drive through the fastest path when traveling from one location to another. Yet other criteria can be readily implemented to predefine the paths between locations. ### _Directed Acyclic Graph_ In order to study the fleet design and operation problem in a mathematically tractable fashion, we construct an acyclic directed graph \(\mathcal{G}\), similar to [32]. 
Each arc in \(\mathcal{G}\) represents a specific pre-computed route to be taken in the original graph \(\mathcal{G}^{\prime}\) to travel from the destination of a request to the origin of the next one. To include depots, we define an extended set of requests \(\mathcal{I}^{+}:=\{0,1,2...,I,I+1\}\) where the depots are the first and last requests that have to be served so that vehicles start and conclude their schedules in a depot. In other words, graph \(\mathcal{G}^{\prime}\) describes the geography of the road network, where the collection of arcs connecting two locations represents a path. The DAG \(\mathcal{G}\) represents the sequential order of requests that are served. This way we can capture the transitions between requests: Each arc \((i,j)\in\mathcal{A}\) represents the transition from the destination of \(i\), \(d_{i}\), to the origin of \(j\), \(o_{j}\), and it is characterized by the travel time and distance \(t_{ij}^{\text{fp}}\) and \(d_{ij}^{\text{fp}}\), respectively, of the fastest path. If \(i=j\), then \(t_{ii}^{\text{fp}}\) is merely the fastest time to serve \(i\). Given the set of \(K\in\mathbb{N}\) vehicles \(\mathcal{K}:=\{1,2...,K\}\), to capture whether vehicle \(k\in\mathcal{K}\) serves request \(i\) and then request \(j\), we set the binary tensor \(X_{ij}^{k}=1\) and to \(0\) otherwise. Last, when taking into account also the state of energy of the vehicles, the DAG represented in an energy-time space, is defined as E-DAG. If between the two requests vehicle \(k\) charges its battery at charging station \(c\), we set the binary tensor \(S_{ijc}^{k}=1\) and \(0\) otherwise; we also quantify the amount of battery charged with the non-negative-valued tensor \(C_{ijc}^{k}\). Nevertheless, it is possible to also account for vehicle-to-grid (V2G) activities by relaxing tensor \(C\) to negative values, as in [33]. ### _Example_ In this section, in Fig. 1, we demonstrate, in a reader-friendly fashion, the case of three requests to be served, \(\mathcal{I}=[1,2,3]\). We extend the set of requests \(\mathcal{I}^{+}=[0,1,2,3,4]\), and we identify the depot with nodes \(0\) and \(4\), meaning that the vehicle starts and ends the trip in a pre-defined location. If vehicle \(1\) first serves request \(1\) and then request \(2\), then \(X_{12}^{1}=1\). It means that the vehicle serves in order the two requests and then idles for the rest of the time. During the final transition between request 3 and 4 (the depot), if vehicle \(1\) is charged at charging station \(1\) after serving request \(3\) and before arriving to request \(4\), then \(X_{34}^{1}=1\) and \(S_{231}^{1}=1\). To indicate that charging occurs during the transition, the arc in the figure is depicted in green. The amount of energy recharged in \(S_{231}^{1}=1\) is defined as \(C_{231}^{1}\). We highlight that with this formulation, because of the absence of time dependency, temporal and sequential information about the actions inside a transition are lost. For example, in \(X_{23}^{1}=1\) there might be a time interval when the vehicle idles. This can occur before traveling to the charging station, during the stay in the station, or at the depot. The information about the duration of the idling time is retrievable, but it cannot be allocated to a precise time-slot without further assumptions on the sequentiality of tasks. ### _Objective Function_ In this paper, we set as optimization objective the maximization of profit for the fleet operator. 
We identify three main terms that play a key role: the fixed costs to purchase the fleet of vehicles, the variable costs to operate the fleet, and the revenues generated by serving the travel requests. We define \(p_{0}^{k}\) as vehicle \(k\)'s amortized cost--the process of gradually writing off the initial cost over the lifetime of the vehicle \(\tau_{\text{v}}\)--as an affine function of its battery energy capacity \(E_{\text{b}}^{k}\): \[p_{0}^{k}=\frac{p_{\text{v}}\cdot b_{\text{v}}^{k}+p_{\text{b}}\cdot E_{\text {b}}^{k}}{\tau_{\text{v}}}\quad\forall k\in\mathcal{K}. \tag{1}\] Note that \(E_{\text{b}}^{k}\) can be different for each vehicle, enabling a heterogeneous fleet. Without loss of generality, for a fleet to be homogeneous, each battery capacity \(E_{\text{b}}^{k}\) can be set equal to the others. The price to purchase the entire vehicle excluding the battery is defined as \(p_{\text{v}}\), while \(p_{\text{b}}\) indicates the price per unit energy of battery capacity. Note that this last term can include not only the price of the battery itself, but also the costs that depend on the battery, i.e., a very large battery requires a heavier and larger chassis. The variable \(p_{0}^{k}\) captures the daily fixed cost per vehicle, that can be conveniently adjusted to include additional factors like insurance, storage and maintenance. The binary variable \(b_{\text{v}}^{k}\in\{0,1\}\) indicates if vehicle \(k\) is used (and purchased) or not. We then define the operational cost as the price paid to charge the whole fleet for one day. The electricity price, indicated by \(p_{\text{el}}\), is the price per unit energy, which is set as a constant. However, it can conveniently be set as a function of time, i.e., as a function of transition \(ij\). The total amount of energy charged on the day by the whole fleet is \(C^{\text{tot}}\), defined as \[C^{\text{tot}}=\sum_{i,j\in\mathcal{I}^{+}}\sum_{c\in\mathcal{C}}\sum_{k\in \mathcal{K}}C_{ijc}^{k}. \tag{2}\] Last, we define \(p_{i}\) as the revenue paid by serving the \(i\)-th request, which can be given or as in our case, computed. We also include the choice of serving request \(i\) by introducing \(b_{\text{r}}^{i}\), the binary variable indicating if request \(i\) was served or not. To conclude, we define the (negative) profit objective function as a linear combination of the previously explained terms as follows: \[J=\sum_{k\in\mathcal{K}}p_{0}^{k}+p_{\text{el}}\cdot C^{\text{tot}}-\sum_{i\in \mathcal{I}^{+}}b_{\text{r}}^{i}\cdot p_{i}. \tag{3}\] ### _Operational Constraints_ We define the transition matrix \(X\in\{0,1\}^{|\mathcal{I}^{+}|\times|\mathcal{I}^{+}|\times K}\), where \(X_{ij}^{k}=1\) if vehicle \(k\) serves demand \(i\) and then demand \(j\), and zero otherwise. We also introduce the tensor \(S\in\{0,1\}^{|\mathcal{I}^{+}|\times|\mathcal{I}^{+}|\times K\times C}\) to account for charging activities. If vehicle \(k\) in-between \(d_{i}\) and \(o_{j}\) goes to charging station \(c\), \(S_{ijc}^{k}\) is set to one, otherwise it is set to zero. In the same way, we quantify the amount of energy charged with element \(C_{ijc}^{k}\) of the corresponding charging tensor \(C\in\mathbb{R}_{+}^{|\mathcal{I}^{+}|\times|\mathcal{I}^{+}|\times K\times C}\). However, because of time constraints, not all transitions are feasible. 
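Before formalizing which transitions are admissible, the following minimal Python sketch illustrates how the cost and revenue terms (1)-(3) combine into the objective \(J\) for a toy instance; all prices, the amortization horizon, and the decision values are illustrative assumptions and not the case-study parameters.

```python
import numpy as np

# Illustrative parameters (assumptions, not the case-study values)
p_v = 20000.0      # vehicle price excluding the battery [EUR]
p_b = 200.0        # battery price per unit energy [EUR/kWh]
tau_v = 2000.0     # amortization horizon [days]
p_el = 0.25        # electricity price [EUR/kWh]

# Toy decision values for K = 3 vehicles and 4 requests
b_v = np.array([1, 1, 0])               # vehicle k purchased or not
E_b = np.array([20.0, 25.0, 0.0])       # battery capacity per vehicle [kWh]
C_tot = 55.0                            # total energy charged by the fleet [kWh], Eq. (2)
b_r = np.array([1, 1, 0, 1])            # request i served or not
p_i = np.array([12.5, 8.0, 15.0, 9.5])  # revenue per request [EUR]

# Daily amortized fixed cost per vehicle, Eq. (1)
p_0 = (p_v * b_v + p_b * E_b) / tau_v

# (Negative) profit objective, Eq. (3): fixed costs + charging cost - revenues
J = p_0.sum() + p_el * C_tot - (b_r * p_i).sum()
print(f"daily fixed costs: {p_0.sum():.2f} EUR, objective J: {J:.2f} EUR")
```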
To determine whether a transition is feasible or not, we define the available time between transitions as \(t_{\text{ava}}:=t_{j}-t_{i}+t_{ii}^{\text{fp}}\), that represents the time between \(d_{i}\) and \(o_{j}\). Travel, deviation and charging times are efficiently pre-computed via standard shortest path algorithms, so we can pre-compute which transitions are feasible and directly elimi nate such unfeasible variables, i.e., compute upper bounds. It follows that \[X_{ij}^{k}\leq\begin{cases}1&\text{if }t_{ij}^{\text{fp}}\leq t_{ij}^{\text{ava}} \quad\forall i,j\in\mathcal{I}^{+},\ \ \forall k\in\mathcal{K}.\\ 0&\text{otherwise}\end{cases} \tag{4}\] Via this upper bound, we obtain a triangular adjacency matrix, from which derives the DAG formulation, as described in [32, 9]. In the same way, if there is enough time available, a deviation to charging station \(c\) within transition \(ij\) is feasible: \[S_{ijc}^{k}\leq\begin{cases}1&\text{if }t_{ij}^{\text{fp}}+\Delta T_{ijc}^{ \text{go2S}}\leq t_{ij}^{\text{ava}}\quad\forall i,j\in\mathcal{I}^{+},\ \ \forall k\in\mathcal{K}.\\ 0&\text{otherwise}\end{cases} \tag{5}\] Finally, we write the upper bound of the amount of energy that can be charged at station \(c\) as \[C_{ijc}^{k}\leq\begin{cases}\hat{C}_{ijc}^{k}&\text{if }t_{ij}^{\text{fp}}+ \Delta T_{ijc}^{\text{go2S}}\leq t_{ij}^{\text{ava}}\quad\forall i,j\in \mathcal{I}^{+},\ \ \forall k\in\mathcal{K},\\ 0&\text{otherwise}\end{cases} \tag{6}\] where \(\hat{C}_{ijc}^{k}=\min[(t_{ij}^{\text{ava}}-t_{ij}^{\text{fp}}-\Delta T_{ijc} ^{\text{go2S}})\cdot P_{\text{ch}},E_{\text{b}}^{\text{max}}]\) is the upper bound of the energy that can potentially be charged if all the available time left were used to charge. Note that we introduce \(\min(\cdot)\) because the maximum amount charged cannot be greater than the capacity of the battery size itself, with \(E_{\text{b}}^{\text{max}}\) being an upper bound of the battery capacity of the fleet. We then enforce that vehicle \(k\) can only charge an amount \(C_{ijc}^{k}\geq 0\) if \(S_{ijc}^{k}=1\), which, in turn, can happen only if transition \(ij\) is performed: \[\sum_{c\in\mathcal{C}}S_{ijc}^{k}\leq X_{ij}^{k}\quad\forall i,j\in\mathcal{I} ^{+},\ \forall k\in\mathcal{K}, \tag{7}\] \[C_{ijc}^{k}\leq\hat{C}_{ijc}^{k}\cdot S_{ijc}^{k}\quad\forall i,j\in\mathcal{I }^{+},\ \forall c\in\mathcal{C},\ \forall k\in\mathcal{K}. \tag{8}\] We define two parameters \(f\) and \(l\), so that \(f_{j}^{k},l_{j}^{k}=0\ \forall i,j\in\mathcal{I}\) and \(f_{0}^{k}=l_{I+1}^{k}=1\ \forall k\in\mathcal{K}\), to initialize and finalize the vehicles in a specific location, the depot. Then, we enforce that each request can be served once, at most. In other words, before and after a served request, there can only be a previous and a subsequent request, at most, that can be served, which is expressed by \[\sum_{i\in\mathcal{I}^{+},k\in\mathcal{K}}X_{ij}^{k}+\sum_{k\in \mathcal{K}}f_{j}^{k}\leq 1\quad\forall j\in\mathcal{I}^{+}, \tag{9}\] \[\sum_{j\in\mathcal{I}^{+},k\in\mathcal{K}}X_{ij}^{k}+\sum_{k\in \mathcal{K}}l_{i}^{k}\leq 1\quad\forall i\in\mathcal{I}^{+}. \tag{10}\] Finally, we ensure continuity of the schedule of each vehicle, meaning that if vehicle \(k\) serves request \(i\), transition \(ij\) can only be effected by the same vehicle \(k\), \[\sum_{i\in\mathcal{I}^{+}}X_{ij}^{k}-\sum_{l\in\mathcal{I}^{+}}X_{jl}^{k}=f_{ j}^{k}+l_{j}^{k}\quad\forall j\in\mathcal{I}^{+},\ \forall k\in\mathcal{K}. 
\tag{11}\] We highlight that in (9)-(11) the terms \(f_{j}^{k}\)and \(l_{j}^{k}\) are always null for \(i,j\in\mathcal{I}\). This is not the case if \(i,j\in\{0,I+1\}\), i.e., at the beginning or end of the schedule, where the first and last transitions are initialized or finalized in a depot. ### _Energy Constraints_ In this section we introduce the energy balance of each vehicle \(k\) for every node of the DAG. We express the state of battery charge of vehicle \(k\) on a given node \(j\) as \(e_{j}^{k}\), that represents the energy at the end of trip \(j\): \[e_{j}^{k}=e_{i}^{k}-d_{jj}^{\text{fp}}\cdot\Delta e^{k}-E_{ij}^{ k}+\sum_{c\in\mathcal{C}}C_{ijc}^{k}\] \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K}|X_{ij}^{k}=1, \tag{12}\] with \(\Delta e^{k}\) being the consumption per unit distance, \(d_{jj}^{\text{fp}}\) the distance of the fastest path between \(o_{j}\) and \(d_{j}\). The energy at the end of trip \(j\) is equal to the energy at the end of the previous trip, minus the energy to serve it, minus the transition energy \(E_{ij}^{k}\), plus the charged energy, \(C_{ijc}^{k}\), if any. We recall that (12) is valid only if \(X_{ij}^{k}=1\). We re-write (12) with the big M formulation to take it into account, where \(M\) is a sufficiently large number [34]: \[e_{j}^{k}\geq e_{i}^{k}-d_{jj}^{\text{fp}}\cdot\Delta e^{k}-E_{ij}^{ k}+ \sum_{c\in\mathcal{C}}C_{ijc}^{k}-M\cdot(1-X_{ij}^{k}) \tag{13}\] \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K},\] \[e_{j}^{k}\leq e_{i}^{k}-d_{jj}^{\text{fp}}\cdot\Delta e^{k}-E_{ij}^{ k}+ \sum_{c\in\mathcal{C}}C_{ijc}^{k}+M\cdot(1-X_{ij}^{k})\] (14) \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K}.\] The energy to transition from \(d_{i}\) to \(o_{j}\), \(E_{ij}^{k}\), is defined by \[E_{ij}^{k}=\begin{cases}d_{ij}^{\text{fp}}\cdot\Delta e^{k}\\ \quad\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K}|\sum_{c\in\mathcal{C }}S_{ijc}^{k}=0,\\ (d_{ij}^{\text{fp}}+\Delta d_{ijc}^{\text{go2S}})\cdot\Delta e^{k}\\ \quad\forall i,j\in\mathcal{I}^{+},\forall c\in\mathcal{C},\forall k\in\mathcal{ K}|S_{ijc}^{k}=1.\end{cases} \tag{15}\] If a detour to charging station \(c\) occurs, \(\Delta d_{ijc}^{\text{go2S}}\) is the additional distance traveled to pass through it. We reformulate (15) with the big M formulation: \[E_{ij}^{k}\geq\Delta e^{k}\cdot d_{ij}^{\text{fp}}\ \forall i,j\in \mathcal{I}^{+},\forall k\in\mathcal{K}, \tag{16}\] \[E_{ij}^{k}\leq\Delta e^{k}\cdot d_{ij}^{\text{fp}}+M\cdot\sum_{c\in \mathcal{C}}S_{ijc}^{k}\] (17) \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K},\] \[E_{ij}^{k}\geq\Delta e^{k}\cdot(d_{ij}^{\text{fp}}+\Delta d_{ijc}^{ \text{go2S}})-M\cdot(1-S_{ijc}^{k})\] (18) \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K},\forall c\in \mathcal{C},\] \[E_{ij}^{k}\leq\Delta e^{k}\cdot(d_{ij}^{\text{fp}}+\Delta d_{ijc}^{ \text{go2S}})+M\cdot(1-S_{ijc}^{k})\] (19) \[\forall i,j\in\mathcal{I}^{+},\forall k\in\mathcal{K},\forall c\in \mathcal{C}.\] Eq.(16) is always active, (17) becomes inactive if \(\sum_{c}S_{ijc}^{k}=0\), (18) and (19) become active if \(\sum_{c}S_{ijc}^{k}=1\). Assuming a fairly constant battery-to-wheels efficiency we formulate the vehicle's consumption per unit distance as an affine function of its mass [35, Ch. 2]. Since the mass is, in turn, an affine function of the battery size, the vehicle consumption per unit distance \(\Delta e^{k}\) is \[\Delta e^{k}=\Delta e_{0}+\Delta e_{b}\cdot E_{\text{b}}^{k}\quad\forall k\in \mathcal{K}. 
\tag{20}\] The base vehicle consumption is \(\Delta e_{0}\), whereas \(\Delta e_{\mathrm{b}}\) is the additional consumption per unit of battery capacity, accounting for the extra mass. Furthermore, the battery capacity must always be larger than the energy stored in it and, last, the energy stored must always be non-negative: \[E_{\mathrm{b}}^{k}\geq e_{j}^{k}\geq 0\ \ \forall j\in\mathcal{I}^{+},\ \forall k\in\mathcal{K}. \tag{21}\] ### _Number of Vehicles_ In this section we introduce constraints related to the number of vehicles that can be used in the fleet. First, we define the binary variable \(b_{\mathrm{v}}^{k}\in\{0,1\}\) to indicate whether vehicle \(k\) is being used or not. Note that this formulation can be generalized by extending the domain of \(b_{\mathrm{v}}^{k}\) to integer values. In this case it would be straightforward to take into account multiple vehicles with different characteristics. As previously explained at the beginning of Section II, depots are modeled as the first and last requests to be served and are included in \(\mathcal{I}^{+}\). If a vehicle is not used, it stays in the depot, meaning that the only transition it performs is \(X_{0,I+1}^{k}=1\). It follows that \[\sum_{i,j\in\mathcal{I}^{+}}X_{ij}^{k}\leq 1-M\cdot b_{\mathrm{v}}^{k}\quad \forall k\in\mathcal{K}, \tag{22}\] \[E_{\mathrm{b}}^{k}\leq M\cdot b_{\mathrm{v}}^{k}\quad\forall k\in\mathcal{K}. \tag{23}\] Then, we enforce the energy of each vehicle at the beginning and end of the schedule to be the same, which, in turn, can be set equal to an initialization parameter, \(E_{\mathrm{b}}^{0}\), \[e_{0}^{k}=e_{I+1}^{k}=b_{\mathrm{v}}^{k}\cdot E_{\mathrm{b}}^{0}\ \ \forall k\in\mathcal{K}. \tag{24}\] If the vehicle is not being used, \(b_{\mathrm{v}}^{k}=0\) and the initial and final battery state is equal to 0 kWh. Note that in this way it is possible to avoid the nonlinear term \(b_{\mathrm{v}}^{k}\cdot E_{\mathrm{b}}^{k}\) in (3). Finally, to reduce the number of equivalent solutions, without loss of generality, we use vehicles sequentially: \[b_{\mathrm{v}}^{k+1}\leq b_{\mathrm{v}}^{k}\quad\forall k\in\mathcal{K}. \tag{25}\] ### _Problem Formulation_ To summarize, we formulate the maximum-profit design and operation problem for an E-AMoD fleet as follows: **Problem 1** (Joint Design and Operation Optimization).: _Given a set of transportation requests \(\{r_{i}\}_{i\in\mathcal{I}}\) and a set of charging stations \(\mathcal{C}\), the number of vehicles, their battery size, and their operations maximizing the total profit result from_ \[\min\ J\] \[\mathrm{s.t.}\ (1)-(11),\ (13)-(14),\ (16)-(25).\] Problem 1 is a mixed integer linear program that can be solved with global optimality guarantees by off-the-shelf optimization algorithms. ### _Discussion_ A few comments are in order. First, we consider travel times on the road digraph \(\mathcal{G}^{\prime}\) to be given. This assumption is reasonable for a small fleet such as the one under consideration, whose routing strategies do not significantly impact travel times and hence overall traffic. This way, varying levels of exogenous traffic during the course of the day can also be captured by simply including time-dependent traffic data and adjusting the fastest-path time and distance accordingly. Second, we assume the charging stations to always be available. We leave the inclusion of constraints to avoid potentially conflicting charging activities by multiple vehicles to future research. 
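To make the structure of Problem 1 more tangible, the following heavily simplified sketch shows how a few of its constraints--the charging-linking constraints (7)-(8), the big-M energy balance (13)-(14), and the capacity bound (21)--could be written with a generic modeling library (here cvxpy). The scalar instance, all parameter values, and the fixed consumption per unit distance are illustrative assumptions, not the implementation used in this paper.

```python
import cvxpy as cp

# A heavily simplified fragment of Problem 1: one vehicle, one transition (i -> j),
# one charging station. Names and numbers are illustrative assumptions.
M_big   = 1e3        # big-M constant
delta_e = 0.15       # consumption per unit distance [kWh/km] (assumed fixed here)
d_fp_jj = 6.0        # fastest-path distance to serve request j [km]
E_trans = 0.9        # transition energy E_ij [kWh] (assumed precomputed)
C_hat   = 12.0       # upper bound on the energy chargeable in the transition [kWh]
E_b_max = 60.0       # upper bound on the battery capacity [kWh]

X   = cp.Variable(boolean=True)   # transition i -> j performed
S   = cp.Variable(boolean=True)   # detour to the charging station
C   = cp.Variable(nonneg=True)    # energy charged during the transition [kWh]
E_b = cp.Variable(nonneg=True)    # battery capacity of the vehicle [kWh]
e_i = cp.Variable(nonneg=True)    # state of energy after request i [kWh]
e_j = cp.Variable(nonneg=True)    # state of energy after request j [kWh]

constraints = [
    S <= X,                                   # Eq. (7): charge only if the transition happens
    C <= C_hat * S,                           # Eq. (8): charge only if the station is visited
    # Eqs. (13)-(14): big-M energy balance, active only when X = 1
    e_j >= e_i - d_fp_jj * delta_e - E_trans + C - M_big * (1 - X),
    e_j <= e_i - d_fp_jj * delta_e - E_trans + C + M_big * (1 - X),
    e_i <= E_b, e_j <= E_b, E_b <= E_b_max,   # Eq. (21) and capacity bound
]
prob = cp.Problem(cp.Minimize(0), constraints)   # objective omitted in this fragment
# prob.solve(solver=cp.GLPK_MI)  # solving the full MILP requires a MIP-capable solver
```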
Moreover, the charging power \(P_{\mathrm{ch}}\) is a parameter that can be conveniently adapted to a specific charging infrastructure. Third, considering design aspects, the solution of Problem 1 is deterministic. Optimizing the fleet for a specific scenario may render its design not feasible for another one. This problem can be addressed by either a robust optimization approach or, as we do in this paper, by solving the problem for multiple sampled scenarios--i.e., solving Problem 1 with a subset of travel requests--as already done in the literature [36, 37, 38]. In this way, it is possible to draw the probability distribution of the design solution. The obtained solution is sub-optimal compared to directly solving the whole problem, as will be shown in Section III. However, this leads to a more conservative design solution that guarantees more robustness to different scenarios w.r.t. solving a larger instance of Problem 1. ## III Solution Approach Problem 1 is a MILP that can be solved with commercial solvers. However, due to its combinatorial nature, solving it for a large number of travel requests might lead to computational intractability. Moreover, as previously discussed in Section II-I, the solution of the problem is deterministic. To mitigate both the shortcomings highlighted above, we devise the method that we explain in the following. ### _Randomly Sampled Sub-problems_ We infer a conservative solution of the design problem by solving \(m\) times a set of sub-problems, also called scenarios, with \(n\) travel requests each, where each sub-problem set of travel requests is randomly sampled from the original set. Thereafter, we recover a distribution of the solution of the original problem as the aggregate of the solutions of all the sub-problems. Notably, the smaller the scenario and the number of demands, the more difficult it is to match demands with vehicles in an efficient manner. In contrast, the larger the scenario, the easier it will be to coordinate the vehicles due to the so-called Mohring effects [39], ultimately leading to a smaller fleet. Therefore, the smaller the sub-problem size, the more conservative the resulting aggregate solution will be in terms of estimated rejection rate, number of vehicles and achieved objective, as will be quantitatively shown in the Results Section IV-A. ### _Heuristic Sub-problem Reduction_ In the previous section, we explained that it is possible to infer a more conservative solution of the whole problem by solving many randomly sampled sub-problems and aggregating their solutions. We also highlighted that, the smaller the sub-problem, the more conservative the aggregate solution is. For this reason, the size of each sub-problem should be maximized while taking into account the computational complexity. Given the combinatorial nature of the problem, optimizing over many travel requests would mean having an extremely large amount of variables that would, in turn, quickly lead to an intractable problem. Moreover, compared to the standard assignment problem, the framework in this paper cannot be solved neither by Hungarian algorithm [40], nor by relaxing the integer variables, because the energy constraints shown in Section II-F are not entirely unimodular. Against this backdrop, and similar to [41], we shrink the feasible domain, removing very improbable solutions. 
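Before detailing this domain reduction, the following minimal sketch summarizes the sampling-and-aggregation procedure of the previous subsection; `solve_subproblem` is a hypothetical placeholder for solving Problem 1 on the sampled requests, and the sample sizes are illustrative.

```python
import random
from statistics import mean, stdev

def solve_subproblem(requests):
    """Hypothetical placeholder: solve Problem 1 on the sampled requests and
    return the design part of the solution (fleet size, battery size, objective)."""
    raise NotImplementedError

def sampled_design_distribution(all_requests, m=25, n=250, seed=0):
    """Solve m randomly sampled sub-problems with n requests each (Section III-A)
    and aggregate the design solutions into an empirical distribution."""
    rng = random.Random(seed)
    fleet_sizes, battery_sizes, objectives = [], [], []
    for _ in range(m):
        sample = rng.sample(all_requests, n)
        K_used, E_b, J = solve_subproblem(sample)
        fleet_sizes.append(K_used)
        battery_sizes.append(E_b)
        objectives.append(J)
    # The aggregate (e.g., mean and spread) gives a conservative design estimate.
    return {
        "fleet_size": (mean(fleet_sizes), stdev(fleet_sizes)),
        "battery_kwh": (mean(battery_sizes), stdev(battery_sizes)),
        "objective": (mean(objectives), stdev(objectives)),
    }
```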
In particular, for two travel requests \(i\) and \(j\) so that \(t_{ij}^{\text{fp}}\leq t_{j}^{\text{start}}-t_{i}^{\text{end}}\), the upper bound of \(X_{ij}^{k}\) is equal to \(1\), even if the two requests are spatially-temporally very distant one from the other. On the one hand, by not restricting the space of variables, we would not lose guarantees of global optimality for the sub-problem. On the other hand, the larger the instance of the sampled problem, the better the quality of the aggregate solution w.r.t. the one the full size problem. Thus, we restrict the upper bounds of tensors \(X,S,C\) so that each transition \(ij\) can only happen if the idling time and the rebalancing distance are below a given amount \(\tilde{t}_{ijc}^{\text{idl}}\). We highlight that, whilst in small problem instances such a constraint could significantly affect the solution, for large problem instances it can be easily respected and can be leveraged to efficiently reduce the feasible domain. Formally, we define \(t_{ijc}^{\text{idl}}\) as the idling time of transition \(ij\) passing through charging station \(c\) and restrict it as \[t_{ijc}^{\text{idl}}=t_{j}^{\text{start}}-t_{i}^{\text{end}}-\Delta T_{ijc}^{ \text{go2S}}-t_{ij}^{\text{fp}}-\frac{E_{\text{day}}}{P_{\text{ch}}}, \tag{26}\] \[t_{ijc}^{\text{idl}}\leq\tilde{t}_{ijc}^{\text{idl}}\hskip 28.452756pt\forall i,j \in\mathcal{I},\forall k\in\mathcal{K},\forall c\in\mathcal{C}. \tag{27}\] If (27) is not satisfied, the corresponding upper bounds of the elements of \(X_{ij}^{k},S_{ijc}^{k},C_{ijc}^{k}\) are set to \(0\). We define: \[\begin{array}{ll}\Delta T_{ijc}^{\text{go2S}}=\Delta\tilde{T}_{ijc}^{\text {go2S}}&\text{if}\quad\Delta\tilde{T}_{ijc}^{\text{go2S}}\leq\Delta\tilde{T}_ {ijc}^{\text{go2S}},\\ \Delta T_{ijc}^{\text{go2S}}=-\infty&\mathrm{otherwise},\\ t_{ij}^{\text{fp}}=\tilde{t}_{ij}^{\text{fp}}&\mathrm{if}\quad\tilde{t}_{ij}^ {\text{fp}}\leq\tilde{t}_{ij}^{\text{fp}},\\ t_{ij}^{\text{fp}}=-\infty&\mathrm{otherwise},\end{array} \tag{28}\] where the barred terms on the right side are parameters subject to tuning. They are thresholds up to which the idling time is set to infinity and (27) does not hold. The last term represents the maximum time a vehicle can spend in a charging station, which still allows for far-apart transitions that would occur because the vehicle needs time to charge. Overall, by applying this heuristic, it is possible to decrease the feasible domain by approximately \(80\%\), improving significantly the tractability and scalability of the problem so that larger instances can be solved. ## IV Results This section showcases our framework for Manhattan, NYC. Specifically, we use the road network shown in Fig. 1, consisting of 357 nodes and 1006 links, which was constructed using a version based on OpenStreetMaps [42]. The travel requests were supplied by the Taxicab & Livery Passenger Enhancement Programs to the NYC Taxi and Limousine Commission. The data set is built using historical data of taxi rides that occurred in March 2018. The values of all the parameters used are collected in Table I, together with their sources. Following [43], we express it as an affine function with respect to time and distance required to serve the travel request: \[p_{i}=\alpha+\beta\cdot d_{ii}^{\text{fp}}+\gamma\cdot t_{ii}^{\text{fp}}\quad \forall i\in\mathcal{I}, \tag{29}\] with \(\alpha\) equal to the base fare, \(\beta\) to the cost per unit distance and \(\gamma\) to the cost per unit time. 
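With the revenues (29) and all travel quantities precomputed, the remaining preprocessing step is the domain reduction of Section III-B. The following sketch illustrates the pruning rule (26)-(27), assuming request times, fastest-path times, and detour times are precomputed NumPy arrays in hours; the threshold and the daily charging buffer are tuning placeholders.

```python
import numpy as np

def prune_transitions(t_start, t_end, t_fp, dT_go2S, E_day, P_ch, t_idl_max):
    """Upper-bound mask following Eqs. (26)-(27): a transition i -> j via charging
    station c is kept only if its idling time stays below the threshold t_idl_max.
    t_start, t_end are length-I arrays; t_fp is (I x I); dT_go2S is (I x I x C)."""
    I = len(t_start)
    keep = np.zeros_like(dT_go2S, dtype=bool)
    for i in range(I):
        for j in range(I):
            for c in range(dT_go2S.shape[2]):
                t_idl = (t_start[j] - t_end[i]            # available slack, Eq. (26)
                         - dT_go2S[i, j, c] - t_fp[i, j]
                         - E_day / P_ch)                   # time reserved for charging
                keep[i, j, c] = t_idl <= t_idl_max         # Eq. (27)
    return keep   # where False, the upper bounds of X, S, C are set to zero
```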
The number of cycles to end-of-life of the battery of the vehicles, \(\tau_{\text{v}}^{\text{cycle}}\), is further explained in Section IV-C. Following the method explained in Section III, we solve multiple randomly sampled sub-problems, obtaining a discretized sub-optimal distribution of the original solution of the full problem. Based on trial-and-error, we set \(\Delta\tilde{T}_{ijc}^{\text{go2S}}=\Delta\tilde{T}_{ijc}^{\text{go2S}}\)\(\tilde{t}_{ijc}^{\text{idl}}=0.1\,\mathrm{h}\), \(E_{\text{day}}=30\,\mathrm{kWh}\). Finally, we assume that the E-AMoD operator has 15 privately owned charging facility infrastructures, evenly distributed in the area of interest as shown in Fig. 1, that is a sufficient number to avoid large distance recharging trips [18]. In future research, will also address the issue of jointly optimizing, taking into account the siting and sizing of the charging infrastructure. This can readily be implemented by adding an integer variable for each station and changing the objective function accordingly. We parse and solve Problem 1, using Yalmip [44] and Gurobi 9.1 [45]. ### _Numerical Experiments on the Problem Size_ To quantitatively analyze the impact of the size of the sub-problems on the solution, we conduct a scan of the solution distribution w.r.t. it, as shown in Fig. 2. The heuristic approach to increase the solvable size of the sub-problems is explained in Section III-B. All the results presented are normalized. The simulation clearly depicts that the smaller the instances, the more sub-optimal the solution. In fact, the larger the problem instance, the larger the profit--the smaller the objective function--, and the lower the number of vehicles required. Note that for a sufficiently large scenario, the coordination between vehicles allows to reduce their number, as indicated by the lower value of the normalized number of vehicles. Nevertheless, the computation time increases exponentially. We recall that: For very small scenarios the heuristic method in Sec. III-B is unsuitable due to the overall low number of requests; For large scenarios the problem can only be solved with the heuristic method. Interestingly, the phenomenon observed in Fig. 2 recalls the Better-matching and Mohring effects [39]: the higher the number of people requesting for a mobility service, the higher the performance of the system. Thus, the larger the instance, the better. However, depending on the computational and time resources, and on the quality of the solution required, it is possible to compute a sub-optimal solution. Finally, following the numerical experiments above, and given the computational and time resources available, we conclude that with a size "10", equal to \(250\) travel requests of the randomly sampled sub-problems, the quality of the aggregate solution is, for the design nature of the problem, in line with its scope. ### _Case Study of Manhattan_ In this section, we showcase our method using a data set with requests recorded on March 3, 2018. We randomly sample the data set 25 times with 250 requests each. We then solve Problem 1 using the sampled data set to obtain a discretized distribution of the solution of the full problem. Solving each instance took a computational time of approximately 3 h. We highlight that economies of scales can significantly impact not only the solution, as discussed in Section III, but also the performance of an AMoD system [39]. Two notorious examples are the Mohring and the Better-Matching effects. 
Fig. 2: The figure shows the normalized objective per vehicle and the rejection rate as a function of the size of the simulation. Simulation size 1 is equal to 1 vehicle and 25 travel requests. The battery size of the vehicles is fixed to \(E_{\text{b}}^{k}=20\,\mathrm{kWh}\ \forall k\in\mathcal{K}\). It can be noted that the larger the number of vehicles, the lower the rejection rate; and the lower the objective function, the higher the profit.
In bigger systems, larger numbers of vehicles and requests enable more efficient schedules and reduce delays. In this case in particular, the size of the sampled scenarios can impact the idling time, the rebalancing distance and the overall number of vehicles. Table II shows the optimal solution of Problem 1. Thereby, the battery size of the fleet, \(E_{\text{b}}^{k}\), is set to be equal between agents within the same batched problem. The number of recharging stops per vehicle is defined as \[s^{k}=\sum_{i,j\in\mathcal{I}^{+},c\in\mathcal{C}}S^{k}_{ijc}, \tag{30}\] while the number of requests served per vehicle \(B^{k}_{\mathrm{r}}\), and the corresponding generated revenue \(r^{k}\), are defined by \[B^{k}_{\mathrm{r}}=\sum_{i\in\pi(k)}b^{i}_{\mathrm{r}}, \tag{31}\] \[r^{k}=\sum_{i\in\pi(k)}p_{i}b^{i}_{\mathrm{r}}, \tag{32}\] with \(\pi(k)\) being the sequence of requests served by vehicle \(k\). Last, \(C^{k}_{\mathrm{avg}}\) is the average energy recharged per vehicle per stop. The optimal battery size distribution lies near 20 kWh, considerably smaller than in commercial vehicles. This allows for a lower energy consumption thanks to lighter vehicles, thus reducing the electricity operational costs. The part of the solution related to the battery is consistent over different scenarios, as shown by the small standard deviations. Regarding the charging scheduling of the fleet, each vehicle charges a small amount of energy multiple times per day, and a large amount only once, during the night, as indicated by the high standard deviation of \(C^{k}_{\mathrm{avg}}\) in Table II. This means that the vehicles have enough driving range to charge mainly whenever they are near a charging station and when there are fewer requests. Interestingly, compared to [33], where V2G was included, the battery is significantly downsized. In fact, because of the lack of incentives to charge and discharge during convenient time windows, the battery is downsized so that the vehicles are as energy efficient and cheap as possible, but without sacrificing the potential number of travel requests that can be served. In Section IV-D we will perform a sensitivity analysis on the battery size of the fleet. In particular, we solve Problem 1 fixing the vehicles' design variables, hence removing the influence of the fixed costs per vehicle on the solution. ### _Expected Lifetime of the Fleet_ A key aspect in this study is estimating the lifetime of the fleet, because it is the period over which the fixed costs are amortized. Specifically, the daily fixed costs are influenced by three elements: the cost of the single unit, the number of units and the expected lifetime. The cost of the single unit and the number of units depend on the battery capacity and operation of the fleet. The number of daily recharging cycles and the battery capacity influence the lifetime of the fleet itself. In particular, fleets with a large battery capacity need fewer daily recharging cycles and are subject to lower degradation, resulting in a longer lifetime. 
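The following small sketch makes this amortization logic explicit: it converts a battery size and an average daily charged energy into full cycles per day, an expected lifetime in days, and a daily fixed cost per vehicle via Eq. (1). The cycle-life figure anticipates the assumption quantified in the next paragraph, while the prices are illustrative placeholders.

```python
def daily_cost_per_vehicle(E_b_kwh, daily_charged_kwh,
                           p_v=20000.0, p_b=200.0, cycles_to_eol=2500):
    """Amortized daily fixed cost of one vehicle as a function of its battery size.
    Larger batteries need fewer full cycles per day, which lengthens the lifetime
    over which the purchase cost (Eq. (1)) is amortized. Prices are illustrative."""
    cycles_per_day = daily_charged_kwh / E_b_kwh      # full equivalent cycles per day
    lifetime_days = cycles_to_eol / cycles_per_day    # amortization horizon tau_v [days]
    return (p_v + p_b * E_b_kwh) / lifetime_days      # Eq. (1) with tau_v in days

# Example: a 20 kWh battery that cycles about 1.5 times per day
print(round(daily_cost_per_vehicle(20.0, 30.0), 2))
```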
We assume the lifetime bottleneck component of an electric vehicle to be the battery. We estimate the number of cycles before the end of life of the battery to be 2500 full cycles. We assume the depth of discharge (DoD) does not significantly influence the battery's lifetime because the vehicles are charged for the most part by \(20-40\%\) per stop, i.e., the DoD is small. The amortization time in days for each vehicle is calculated by dividing the number of lifetime cycles by the daily ones. Fig. 3 shows the purchasing costs per unit and per unit amortized over its lifetime for different battery sizes. We highlight that battery degradation is not negligible, especially in vehicles with a very small battery capacity, which makes them more expensive on a daily basis, although cheaper in absolute terms. However, for a battery size above 10 kWh, the purchasing cost and degradation have comparable effects, resulting in approximately constant daily costs. We also recall that we estimate the lifetime of the battery to be the bottleneck. This means that, for vehicles with a very large battery capacity, the lifetime might be long enough to outlive the mechanical components, resulting in a possible overestimation of the lifetime of the vehicle. This could potentially result in a plateau of the lifetime and, thus, in increasing amortized costs for vehicles with a large battery capacity, which are not captured in this model. Note that the lifetime is a function of the battery, which breaks the linearity of (3). However, we showed that above 10 kWh the purchasing costs are counterbalanced by the degradation of the battery, leading to an approximately constant daily cost per unit, independent of the battery. Therefore, to maintain a linear objective function, the terms \(p_{\mathrm{v}}/\tau_{\mathrm{v}}\) and \(p_{\mathrm{b}}/\tau_{\mathrm{v}}\) are set to constant values. The approximation is valid only for battery sizes above 10 kWh. However, in Section IV-D we will show that if the vehicles are equipped with a very small battery, a larger number of units will be required; hence the solution does not lie in that part of the domain. Then, in Section IV-D, to gather a deeper insight into the effect of battery degradation on the solution, we perform a sensitivity analysis on the battery size, showing that it is not convenient to have a fleet with a very small battery size.
Fig. 3: Overall costs per unit in 10 kEuro (red), daily cost per unit amortized over the lifetime (blue) and daily full battery cycles (yellow) as functions of the battery capacity of the unit. We recall that the minimum in the daily cost per unit is also due to the increasing energy consumption of vehicles with larger battery capacity.
### _Sensitivity Analysis on the Battery Size_ In this section we investigate how fleets with different battery capacities behave, in particular the trade-off between daily fixed costs and operational ones when fixing the number of daily requests served. Fig. 4 shows the relevant metrics of the system. For very small battery capacities, the additional distance driven to the respective charging stations is considerably higher, up to three times longer w.r.t. vehicles with larger batteries (top-right). The main reason is the absence of charging flexibility. In fact, the driving range is so short that the vehicles cannot wait to conveniently be near a charging station to recharge. 
Despite this, the order of magnitude of the additional distance driven is not comparable to the overall distance driven: This reflects in a lower energy consumption overall. From this we conclude that the operational costs are a monotone increasing function with respect to the battery size and weight of the single unit. Conversely, the fixed amortized cost tends to be fairly constant for battery size above 15 kWh, but it significantly increases in case of very small battery capacities, despite the lower purchasing cost per unit. The reasons are twofold. First, with a very small battery the flexibility of the charging schedule is absent. In this way, because of the continuous recharging process, in case of higher number of demands, the availability of vehicles cannot be temporarily increased, and more vehicles are required to serve the same number of requests. This leads to a larger, and thus more expensive, fleet. Second, a higher number of battery cycles leads to a shorter lifetime over which the fixed costs are amortized. Thus, both these reasons point to the fact that downsizing too much the battery is counter-productive. We also highlight that for very large batteries, the overall number of vehicles increases. The interpretation is that large vehicles consume more energy, leading to longer charging times, which, in turn, lead to lower availability of vehicles. To counter-act this phenomenon, the charging power can potentially be increased, especially in the case of vehicles with a large battery. However, for an easier comparison, we assume we have a fixed charging infrastructure. Fig. 5 shows the trade-off between the fixed and the operational costs for a fixed number of travel requests served with an optimal value reached at approximately 20 kWh. Around this value, the charging flexibility is high enough to not need additional vehicles to serve demand peaks, especially compared to lower battery capacities. In parallel, because of undersized batteries compared to the 60 kWh ones, the single units are lighter and more energy efficient, reducing the overall energy consumption of the fleet. ## V Conclusions In this paper, we proposed a framework for optimizing the design and operation of an Electric Autonomous Mobility-on-Demand (E-AMoD) fleet by considering the number of vehicles, battery size, and operation simultaneously. The framework combines vehicle assignment and charge scheduling with the design of the fleet. To deal with the computational and combinatorial complexity of the resulting problem, we proposed a solution approach based on randomly sampling of the problem instance and, whenever needed, simplifying its structure via heuristic considerations. We showcased our framework using a real-world case-study of Manhattan, NYC, showing that a fleet with battery size around 20 kWh can i) result in a lower energy consumption without worsening the operation of the fleet compared to a fleet with a longer autonomy range, and ii) result in lower fixed costs compared to a fleet of vehicles with an undersized battery, which would have a shorter lifetime and require more vehicles due to the lower charging flexibility. Going forward, our framework presents multiple avenues for expansion, such as the extension to ride-pooling, and to more sustainable intermodal settings that include public transit and active modes. ## VI Acknowledgments We thank Dr. I. New, Ir. M. Clemente and Ir. O. Borsboom for proofreading this paper, and Prof. A. Agazzi for the fruitful discussion and advice. 
This publication is part of the project NEON with project number 17628 of the research program Crossover which is (partly) financed by the Dutch Research Council (NWO).
Fig. 4: The figure shows the energy usage (top-left), the additional distance driven to go to charging stations (top-right), the number of vehicles (bottom-right) and the daily occupancy (bottom-left) as a function of the battery size of the vehicles of the fleet.
Fig. 5: The figure shows the sensitivity analysis of the objective function with respect to the battery size of the vehicles of the fleet. The number of served travel requests is fixed.
The spread of autonomous driving and electric vehicles enables the deployment of electric Autonomous Mobility-on-Demand (E-AMoD) systems, in which electric autonomous vehicles provide on-demand mobility. In particular, the design of the individual vehicles and of the fleet is strongly coupled with the operation of the system, and maximizing system-level performance requires optimizing them jointly. To this end, this paper presents a framework that combines the fleet design of an E-AMoD system, such as battery capacity and number of vehicles, with its operational strategy so as to maximize the operator's total profit. Specifically, this joint optimization problem is formulated as a mixed-integer linear program using directed acyclic graphs, which can be solved with optimality guarantees by commercial solvers. In addition, for large-scale problems, an algorithm that solves randomly sampled sub-problems is proposed
2301.13630
Enhancing NOMA Networks via Reconfigurable Multi-Functional Surface
By flexibly manipulating the radio propagation environment, reconfigurable intelligent surface (RIS) is a promising technique for future wireless communications. However, the single-side coverage and double-fading attenuation faced by conventional RISs largely restrict their applications. To address this issue, we propose a novel concept of multi-functional RIS (MF-RIS), which provides reflection, transmission, and amplification simultaneously for the incident signal. With the aim of enhancing the performance of a non-orthogonal multiple-access (NOMA) downlink multiuser network, we deploy an MF-RIS to maximize the sum rate by jointly optimizing the active beamforming and MF-RIS coefficients. Then, an alternating optimization algorithm is proposed to solve the formulated non-convex problem by exploiting successive convex approximation and penalty-based method. Numerical results show that the proposed MF-RIS outperforms conventional RISs under different settings.
Ailing Zheng, Wanli Ni, Wen Wang, Hui Tian
2023-01-31T13:44:17
http://arxiv.org/abs/2301.13630v1
# Enhancing NOMA Networks via Reconfigurable Multi-Functional Surface ###### Abstract By flexibly manipulating the radio propagation environment, reconfigurable intelligent surface (RIS) is a promising technique for future wireless communications. However, the single-side coverage and double-fading attenuation faced by conventional RISs largely restrict their applications. To address this issue, we propose a novel concept of multi-functional RIS (MF-RIS), which provides reflection, transmission, and amplification simultaneously for the incident signal. With the aim of enhancing the performance of a non-orthogonal multiple-access (NOMA) downlink multiuser network, we deploy an MF-RIS to maximize the sum rate by jointly optimizing the active beamforming and MF-RIS coefficients. Then, an alternating optimization algorithm is proposed to solve the formulated non-convex problem by exploiting successive convex approximation and penalty-based method. Numerical results show that the proposed MF-RIS outperforms conventional RISs under different settings. Multi-functional reconfigurable intelligent surface, non-orthogonal multiple access, rate maximization. ## I Introduction Compared to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) is capable of achieving high spectrum efficiency and massive connectivity [1]. Prior investigations have shown that the differences between users' channel conditions can be exploited to enhance NOMA performance [2]. However, users in large-scale networks may have poor or similar channel conditions, which hinders the application of successive interference cancellation (SIC) and the effective implementation of NOMA. Therefore, adjusting channel conditions and enhancing channel diversity are able to release the potential of NOMA in practical networks. Recently, with the ability to reshape the wireless propagation environment, reconfigurable intelligent surface (RIS) has emerged as a key technique to improve the performance of NOMA networks [3]. By properly designing the reflection coefficients, RIS is able to smartly change the combined channels to enhance the differences among users, thus boosting the performance of NOMA in large-scale networks. Initial investigations on RIS-aided NOMA networks in [3, 4, 5, 6, 7] had verified the superiority of the integration of NOMA and RIS. Specifically, the authors of [3] and [4] performed comprehensive discussions of the main challenges and futuristic use cases regarding RIS-aided NOMA networks. Moreover, the works in [5, 6, 7] demonstrated the benefits brought by RISs to achieve performance trade-off among multiple NOMA users through smartly adjusting the decoding order. However, the existing literature on RIS-aided NOMA networks mostly uses single functional RIS (SF-RIS) that only supports signal reflection or transmission/refraction. This implies that only users located in a single side can be served by the SF-RIS if no additional operations are performed. To overcome this limitation, the authors of [8] proposed the concept of dual-functional RIS (DF-RIS). Unlike SF-RIS, DF-RIS refers to the reconfigurable dual-functional surface that can conduct signal reflection and transmission simultaneously, such as simultaneous transmitting and reflecting RIS (STAR-RIS) [9] and intelligent omni-surface (IOS) [10]. Specifically, the coverage characterization of STAR-RIS-aided NOMA networks was investigated in [9] by studying a coverage range maximization problem. 
The authors of [10] considered the average rate maximization problem in an IOS-aided NOMA networks with spatially correlated channels. Furthermore, the effective capacity and secrecy outage probability of STAR-RIS-aided NOMA networks were derived in [11] and [12], respectively. However, although the effective coverage can be enhanced by the existing DF-RIS, the signals relayed by the DF-RIS still suffer from channel fading twice due to the features of cascaded channels. This double-fading effect inevitably deteriorates the achievable performance of passive RIS-assisted wireless networks. Therefore, it is necessary to design new RIS architectures to mitigate the double-fading attenuation problem faced by the existing RISs. In this letter, a novel multi-functional RIS (MF-RIS) is proposed to address the issues aforementioned. Specifically, the proposed MF-RIS can not only divide the incident signal into transmission and reflection two parts based on the field equivalence principle, but also amplify the outgoing signal with the help of active loads. Thus, the MF-RIS is able to facilitate a full-space coverage and overcome the double-fading issue. Then, we investigate a sum rate maximization problem in an MF-RIS-aided NOMA network. Compared to the existing problems formulated in [8] and [9], the newly introduced MF-RIS constraints and highly coupled variables make the performance optimization more complicated. The main contributions of this letter are summarized as follows: 1) We propose a new concept of MF-RIS by integrating the surface electric and magnetic impedances, and power amplifier into each element so that the incident signal can be reflected, refracted, and amplified simultaneously. 2) We formulate a non-convex optimization problem to maximize the throughout of an MF-RIS-aided NOMA network, where the MF-RIS is deployed to constructively enhance the channel condition by flexibly adjusting the radio propagation environment. 3) To solve the formulated non-convex problem, we propose an efficient iterative algorithm by alternatively optimizing the active beamforming and MF-RIS coefficients based on the penalty-based method and successive convex approximation (SCA). 4) Simulation results show that the proposed MF-RIS-aided NOMA network can provide up to about 59% sum rate gain than the SF-RIS, and the MF-RIS prefers to be deployed at the user side for better performance. ## II System Model and Problem Formulation ### _System Model_ We consider an MF-RIS-aided NOMA downlink network, where an \(N\)-antenna BS communicates with \(K\) single-antenna users with the aid of an MF-RIS comprising \(M\) elements, as shown in the right of Fig. 1. The sets of elements and users are denoted by \(\mathcal{M}=\{1,2,\ldots,M\}\) and \(\mathcal{K}=\{1,2,\ldots,K\}\), respectively. The channels of BS-user, BS-RIS, and RIS-user are denoted by \(\mathbf{h}_{k}\in\mathbb{C}^{N\times 1}\), \(\mathbf{H}\in\mathbb{C}^{M\times N}\), and \(\mathbf{g}_{k}\in\mathbb{C}^{M\times 1}\), respectively. 
Furthermore, we define \(\mathbf{u}_{p}=[\sqrt{\beta_{1}^{p}}e^{j\theta_{1}^{p}},\ \sqrt{\beta_{2}^{p}}e^{j\theta_{2}^{p}},\ldots,\ \sqrt{\beta_{M}^{p}}e^{j\theta_{M}^{p}}]^{ \mathrm{T}}\in\mathbb{C}^{M\times 1}\) as the transmission \((p=t)\) or reflection \((p=r)\) beamforming vector, where \(p\in\{t,r\}\) denotes the transmission and reflection spaces, \(\beta_{m}^{p}\in[0,\beta_{\max}]\) and \(\theta_{m}^{p}\in[0,2\pi)\) represent the amplitude and the phase shift response of the \(m\)-th element, respectively, with the maximum amplification factor \(\beta_{\max}\geq 1\). Due to the law of energy conservation, we have \(\beta_{m}^{r}+\beta_{m}^{t}\leq\beta_{\max}\). If user \(k\) is located at the reflection space, the diagonal matrix of the MF-RIS for user \(k\) is given by \(\mathbf{\Theta}_{k}=\mathrm{diag}(\mathbf{u}_{r})\); otherwise \(\mathbf{\Theta}_{k}=\mathrm{diag}(\mathbf{u}_{t})\). We assume that the perfect channel state information (CSI) of all channels is available at the BS. Then the signal received at user \(k\) is expressed as \[y_{k}=(\mathbf{h}_{k}^{\mathrm{H}}+\mathbf{g}_{k}^{\mathrm{H}}\mathbf{\Theta }_{k}\mathbf{H})\mathbf{x}+\mathbf{g}_{k}^{\mathrm{H}}\mathbf{\Theta}_{k} \mathbf{n}_{s}+n_{k},\ \forall k, \tag{1}\] where \(\mathbf{x}=\sum_{k}\mathbf{w}_{k}s_{k}\) denotes the transmit signal, \(\mathbf{w}_{k}\) and \(s_{k}\in\mathcal{CN}(0,1)\) represent the transmit precoder and the information symbol for user \(k\), respectively. \(\mathbf{n}_{s}\in\mathcal{CN}(\mathbf{0},\sigma_{s}^{2}\mathbf{I}_{M})\) denotes the dynamic noise at the MF-RIS with each element's noise power \(\sigma_{s}^{2}\), and \(n_{k}\in\mathcal{CN}(0,\sigma_{k}^{2})\) denotes the additive white Gaussian noise at user \(k\) with power \(\sigma_{k}^{2}\). By employing SIC, the strong user can mitigate the interference from weak users to improve the signal-to-interference-plus-noise ratio. Similar to [4, 5, 6], with the assistance of RIS to flexibly adjust channel conditions of multiple users, we assume that users' indexes are ranked in an increasing order with respect to their channel gains, i.e., \[\|\widehat{\mathbf{h}}_{1}\|^{2}\leq\|\widehat{\mathbf{h}}_{2}\|^{2}\leq \cdots\leq\|\widehat{\mathbf{h}}_{K}\|^{2}, \tag{2}\] where \(\widehat{\mathbf{h}}_{k}=\mathbf{h}_{k}^{\mathrm{H}}+\mathbf{g}_{k}^{\mathrm{ H}}\mathbf{\Theta}_{k}\mathbf{H}\) is the equivalent combined channel. For the fixed decoding order, the corresponding achievable sum rate of user \(k\) is given by \(R_{k}=\log_{2}(1+\gamma_{k})\), where \(\gamma_{k}\) can be obtained by \[\gamma_{k}=\frac{|\widehat{\mathbf{h}}_{k}\mathbf{w}_{k}|^{2}}{\sum_{i=k+1}^{ K}(|\widehat{\mathbf{h}}_{k}\mathbf{w}_{i}|^{2})+|\mathbf{g}_{k}^{\mathrm{H}} \mathbf{\Theta}_{k}\mathbf{n}_{s}|^{2}+\sigma_{k}^{2}},\ \forall k. \tag{3}\] ### _Problem Formulation_ In this letter, we aim to maximize the achievable sum rate of all users by jointly optimizing the active beamforming at the BS and the coefficients at the MF-RIS. Under the transmit and amplification power constraints, and the quality-of-service (QoS) requirement of users, the considered optimization problem can be formulated as \[\max_{\mathbf{w}_{k},\mathbf{\Theta}_{k}} \sum\nolimits_{k=1}^{K}R_{k} \tag{4a}\] \[\mathrm{s.t.} \sum\nolimits_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq P_{\max},\] (4b) \[\sum\nolimits_{k=1}^{K}(\|\mathbf{\Theta}_{k}\mathbf{H}\mathbf{w }_{k}\|^{2}+\|\mathbf{\Theta}_{k}\mathbf{I}_{M}\|_{F}^{2}\sigma_{s}^{2})\! 
\leq\!P_{o},\] (4c) \[\beta_{m}^{r}+\beta_{m}^{t}\leq\beta_{\max},\ 0\leq\beta_{m}^{p}\leq\beta_{\max},\ \forall m,\ \forall p,\] (4d) \[R_{k}\geq R_{k}^{\min},\ \theta_{m}^{p}\in[0,2\pi),\ (2),\ \forall k,\ \forall m,\ \forall p, \tag{4e}\] where \(P_{\max}\) and \(P_{o}\) denote the maximum transmit and amplification power at the BS and MF-RIS, respectively. \(R_{k}^{\min}\) represents the minimum rate requirement of user \(k\). Specifically, the constraints for transmit power, amplification power, the QoS requirements and the decoding order are given in (4b)-(4e), respectively. It can be observed that the formulated problem (4) is intractable due to the non-convex objective function and constraints. Besides, the active beamforming and MF-RIS coefficients are highly coupled, making it difficult to solve the problem directly. Thus, we aim to transform problem (4) into tractable convex subproblems and solve them separately and alternately over iterations. In the next section, we adopt the alternating optimization method to obtain the active beamforming and the MF-RIS coefficients efficiently.
Fig. 1: Conventional RIS vs. the proposed MF-RIS-aided NOMA networks.
## III Proposed Solution ### _Active Beamforming Design_ Given the MF-RIS coefficients, the active beamforming optimization problem is still non-convex. To solve it, we first introduce an auxiliary variable set \(\{A_{k},B_{k}|k\in\mathcal{K}\}\), where \(A_{k}\) and \(B_{k}\) are defined as \[{A_{k}}^{-1}=|\widehat{\mathbf{h}}_{k}\mathbf{w}_{k}|^{2}, \tag{5}\] \[B_{k}=\sum\nolimits_{i=k+1}^{K}|\widehat{\mathbf{h}}_{k}\mathbf{w}_{i}|^{2}+|\mathbf{g}_{k}^{\mathrm{H}}\mathbf{\Theta}_{k}\mathbf{n}_{s}|^{2}+\sigma_{k}^{2}. \tag{6}\] Thus, the achievable data rate can be rewritten as \(R_{k}=\log_{2}\left(1+(A_{k}B_{k})^{-1}\right)\). Then, the active beamforming optimization problem in (4) can be equivalently expressed as \[\max_{\mathbf{w}_{k},A_{k},B_{k},R_{k}}\ \sum\nolimits_{k=1}^{K}R_{k} \tag{7a}\] \[\mathrm{s.t.}\ \log_{2}\left(1+\left(A_{k}B_{k}\right)^{-1}\right)\geq R_{k},\ \forall k,\] (7b) \[{A_{k}}^{-1}\leq|\widehat{\mathbf{h}}_{k}\mathbf{w}_{k}|^{2},\ \forall k,\] (7c) \[B_{k}\geq\sum\nolimits_{i=k+1}^{K}|\widehat{\mathbf{h}}_{k}\mathbf{w}_{i}|^{2}+|\mathbf{g}_{k}^{\mathrm{H}}\mathbf{\Theta}_{k}\mathbf{n}_{s}|^{2}+\sigma_{k}^{2},\ \forall k,\] (7d) \[R_{k}\geq R_{k}^{\mathrm{min}},\ \text{(4b)},\ \text{(4c)},\ \forall k. \tag{7e}\] We further define \(\widehat{\mathbf{H}}_{k}=\widehat{\mathbf{h}}_{k}^{\mathrm{H}}\widehat{\mathbf{h}}_{k}\), \(\mathbf{D}_{k}=(\mathbf{H}^{\mathrm{H}}\mathbf{\Theta}_{k})(\mathbf{H}^{\mathrm{H}}\mathbf{\Theta}_{k})^{\mathrm{H}}\) and \(\mathbf{W}_{k}=\mathbf{w}_{k}\mathbf{w}_{k}^{\mathrm{H}}\), where \(\mathbf{W}_{k}\succeq\mathbf{0}\) and \(\mathrm{rank}(\mathbf{W}_{k})=1\). Then, we have \[|\widehat{\mathbf{h}}_{k}\mathbf{w}_{k}|^{2}=\mathrm{Tr}(\widehat{\mathbf{H}}_{k}\mathbf{W}_{k}),\ \|\mathbf{\Theta}_{k}\mathbf{H}\mathbf{w}_{k}\|^{2}=\mathrm{Tr}(\mathbf{W}_{k}\mathbf{D}_{k}). 
\tag{8}\] Therefore, problem (7) can be reformulated as \[\max_{\mathbf{W}_{k},A_{k},B_{k},R_{k}}\ \sum\nolimits_{k=1}^{K}R_{k} \tag{9a}\] \[\mathrm{s.t.}\ {A_{k}}^{-1}\leq\mathrm{Tr}(\widehat{\mathbf{H}}_{k}\mathbf{W}_{k}),\ \forall k,\] (9b) \[B_{k}\geq\sum\nolimits_{i=k+1}^{K}\mathrm{Tr}(\widehat{\mathbf{H}}_{k}\mathbf{W}_{i})+|\mathbf{g}_{k}^{\mathrm{H}}\mathbf{\Theta}_{k}\mathbf{n}_{s}|^{2}+\sigma_{k}^{2},\ \forall k,\] (9c) \[\sum\nolimits_{k=1}^{K}\mathrm{Tr}(\mathbf{W}_{k})\leq P_{\max},\] (9d) \[\sum\nolimits_{k=1}^{K}\left[\mathrm{Tr}(\mathbf{W}_{k}\mathbf{D}_{k})+\|\mathbf{\Theta}_{k}\mathbf{I}_{M}\|_{F}^{2}\sigma_{s}^{2}\right]\leq P_{o},\] (9e) \[\mathrm{rank}(\mathbf{W}_{k})=1,\ \forall k,\] (9f) \[\mathbf{W}_{k}\succeq\mathbf{0},\ R_{k}\geq R_{k}^{\mathrm{min}},\ \text{(7b)},\ \forall k. \tag{9g}\] In order to deal with the non-convex constraint (7b), we adopt the first-order Taylor expansion and obtain the following lower bound: \[\log_{2}\Big(1+\frac{1}{A_{k}B_{k}}\Big)\geq\log_{2}\Big(1+\frac{1}{A_{k}^{(\tau_{1})}B_{k}^{(\tau_{1})}}\Big)-\frac{\log_{2}e\,(A_{k}-A_{k}^{(\tau_{1})})}{A_{k}^{(\tau_{1})}(1+A_{k}^{(\tau_{1})}B_{k}^{(\tau_{1})})}-\frac{\log_{2}e\,(B_{k}-B_{k}^{(\tau_{1})})}{B_{k}^{(\tau_{1})}(1+A_{k}^{(\tau_{1})}B_{k}^{(\tau_{1})})}\stackrel{{\Delta}}{{=}}\overline{R}_{k}, \tag{10}\] where \(A_{k}^{(\tau_{1})}\) and \(B_{k}^{(\tau_{1})}\) are feasible points of \(A_{k}\) and \(B_{k}\) in the \(\tau_{1}\)-th iteration, respectively. For the non-convex rank-one constraint in (9f), we transform it into a penalty term in the objective function, which can then be handled by SCA. To this end, we first introduce an equivalent equality: \[\|\mathbf{W}_{k}\|_{*}-\|\mathbf{W}_{k}\|_{2}=0,\ \forall k, \tag{11}\] where \(\|\mathbf{W}_{k}\|_{*}=\sum_{i}\varepsilon_{i}(\mathbf{W}_{k})\) and \(\|\mathbf{W}_{k}\|_{2}=\varepsilon_{1}(\mathbf{W}_{k})\) denote the nuclear norm and the spectral norm of \(\mathbf{W}_{k}\), respectively, and \(\varepsilon_{i}(\mathbf{W}_{k})\) is the \(i\)-th largest singular value of matrix \(\mathbf{W}_{k}\). Thus, when the matrix \(\mathbf{W}_{k}\) is rank-one, equality (11) holds. Next, we employ the penalty method to solve problem (9) by adding (11) to the objective function (9a). Since the penalty term (11) makes the objective function non-convex, we apply the first-order Taylor expansion to obtain a convex upper bound of (11) as follows: \[\|\mathbf{W}_{k}\|_{*}-\|\mathbf{W}_{k}\|_{2}\leq\|\mathbf{W}_{k}\|_{*}-\|\overline{\mathbf{W}}_{k}\|_{2}, \tag{12}\] where \(\|\overline{\mathbf{W}}_{k}\|_{2}=\|\mathbf{W}_{k}^{(\tau_{1})}\|_{2}+\mathrm{Tr}\!\left[\mathbf{e}_{k}^{(\tau_{1})}(\mathbf{e}_{k}^{(\tau_{1})})^{\mathrm{H}}(\mathbf{W}_{k}-\mathbf{W}_{k}^{(\tau_{1})})\right]\), and \(\mathbf{e}_{k}^{(\tau_{1})}\) is the eigenvector corresponding to the largest eigenvalue of \(\mathbf{W}_{k}^{(\tau_{1})}\) in the \(\tau_{1}\)-th iteration. By introducing (12) into the objective function (9a), we obtain the following penalized problem: \[\max_{\mathbf{W}_{k},A_{k},B_{k},R_{k}}\ \sum\nolimits_{k=1}^{K}R_{k}-\frac{1}{\xi}\sum\nolimits_{k=1}^{K}\big(\|\mathbf{W}_{k}\|_{*}-\|\overline{\mathbf{W}}_{k}\|_{2}\big)\quad\mathrm{s.t.}\ \ \overline{R}_{k}\geq R_{k},\ \mathbf{W}_{k}\succeq\mathbf{0},\ R_{k}\geq R_{k}^{\mathrm{min}},\ \forall k,\ \text{(9b)-(9e)}, \tag{13}\] where \(\xi>0\) denotes the penalty factor. Problem (13) is convex and can be solved efficiently by standard convex optimization solvers. 
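As a quick sanity check of the surrogate in (10), the following snippet evaluates the true rate expression and its first-order lower bound at and around an expansion point; the numerical values are arbitrary and only meant to illustrate that \(\overline{R}_{k}\) is tight at \((A_{k}^{(\tau_{1})},B_{k}^{(\tau_{1})})\) and never exceeds the true value.

```python
import numpy as np

def rate(A, B):
    return np.log2(1.0 + 1.0 / (A * B))

def rate_lower_bound(A, B, A0, B0):
    """First-order Taylor lower bound of log2(1 + 1/(A*B)) at (A0, B0), Eq. (10)."""
    return (rate(A0, B0)
            - np.log2(np.e) * (A - A0) / (A0 * (1.0 + A0 * B0))
            - np.log2(np.e) * (B - B0) / (B0 * (1.0 + A0 * B0)))

A0, B0 = 0.5, 2.0                       # expansion point (arbitrary positive values)
for A, B in [(0.5, 2.0), (0.4, 2.5), (0.8, 1.5)]:
    assert rate_lower_bound(A, B, A0, B0) <= rate(A, B) + 1e-9
print("surrogate is tight at the expansion point:",
      np.isclose(rate_lower_bound(A0, B0, A0, B0), rate(A0, B0)))
```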
### _MF-RIS Coefficient Design_

According to the above transformation, we can obtain \[|\widehat{\mathbf{h}}_{k}\mathbf{w}_{k}|^{2}=|(\mathbf{h}_{k}^{\rm H}+\mathbf{g}_{k}^{\rm H}\mathbf{\Theta}_{k}\mathbf{H})\mathbf{w}_{k}|^{2}={\rm Tr}(\mathbf{V}_{k}\mathbf{F}_{k}), \tag{19}\] \[\|\widehat{\mathbf{h}}_{k}\|^{2}={\rm Tr}(\mathbf{V}_{k}\overline{\mathbf{R}}_{k}). \tag{20}\] Based on (20), the decoding order in (2) is rewritten as \[{\rm Tr}(\mathbf{V}_{1}\overline{\mathbf{R}}_{1})\leq{\rm Tr}(\mathbf{V}_{2}\overline{\mathbf{R}}_{2})\leq\cdots\leq{\rm Tr}(\mathbf{V}_{K}\overline{\mathbf{R}}_{K}). \tag{21}\] Then, given the active beamforming vector, the subproblem of MF-RIS coefficient design can be given by \[\max_{\mathbf{V}_{k},A_{k},B_{k},R_{k}}\ \sum\nolimits_{k=1}^{K}R_{k} \tag{22a}\] \[{\rm s.t.}\ \ {A_{k}}^{-1}\leq{\rm Tr}(\mathbf{V}_{k}\mathbf{F}_{k}),\ \forall k, \tag{22b}\] \[B_{k}\geq\sum\nolimits_{i=k+1}^{K}{\rm Tr}(\mathbf{V}_{k}\mathbf{F}_{i})+\sigma_{s}^{2}{\rm Tr}(\mathbf{V}_{k}\overline{\mathbf{Q}}_{k})+\sigma_{k}^{2},\ \forall k, \tag{22c}\] \[\sum\nolimits_{k=1}^{K}{\rm Tr}(\mathbf{V}_{k}\overline{\mathbf{G}}_{k})\leq P_{o}, \tag{22d}\] \[\mathbf{V}_{k}\succeq\mathbf{0},\ R_{k}\geq R_{k}^{\rm min},\ \forall k, \tag{22e}\] \[[\mathbf{V}_{k}]_{m,m}=\beta_{m}^{k},\ [\mathbf{V}_{k}]_{M+1,M+1}=1,\ \forall k, \tag{22f}\] \[{\rm rank}(\mathbf{V}_{k})=1,\ \forall k, \tag{22g}\] \[\theta_{m}^{p}\in[0,2\pi),\ \text{(4d)},\ \text{(21)},\ \forall m,\ \forall p, \tag{22h}\] where \(\mathbf{F}_{i}\) denotes \(\mathbf{F}_{k}\) when \(\mathbf{w}_{k}\) is replaced by \(\mathbf{w}_{i}\). Similar to (12), we replace the rank-one constraint in (22g) with the following form: \[\|\mathbf{V}_{k}\|_{*}-\|\mathbf{V}_{k}\|_{2}\leq\|\mathbf{V}_{k}\|_{*}-\|\overline{\mathbf{V}}_{k}\|_{2}, \tag{23}\] where \(\|\mathbf{V}_{k}\|_{*}\) and \(\|\mathbf{V}_{k}\|_{2}\) denote the nuclear norm and the spectral norm of matrix \(\mathbf{V}_{k}\), respectively. Besides, \(\|\overline{\mathbf{V}}_{k}\|_{2}=\|\mathbf{V}_{k}^{(\tau_{2})}\|_{2}+{\rm Tr}[\mathbf{z}_{k}^{(\tau_{2})}(\mathbf{z}_{k}^{(\tau_{2})})^{\rm H}(\mathbf{V}_{k}-\mathbf{V}_{k}^{(\tau_{2})})]\), and \(\mathbf{z}_{k}^{(\tau_{2})}\) is the eigenvector corresponding to the largest eigenvalue of \(\mathbf{V}_{k}^{(\tau_{2})}\) in the \(\tau_{2}\)-th iteration. By introducing (10) into (7b), problem (22) can be reformulated as \[\max_{\mathbf{V}_{k},A_{k},B_{k},R_{k}}\ \sum\nolimits_{k=1}^{K}R_{k}-\frac{1}{\xi}\sum\nolimits_{k=1}^{K}\big(\|\mathbf{V}_{k}\|_{*}-\|\overline{\mathbf{V}}_{k}\|_{2}\big) \tag{24a}\] \[{\rm s.t.}\ \ \overline{R}_{k}\geq R_{k},\ \theta_{m}^{p}\in[0,2\pi),\ \forall k,\ \forall m,\ \forall p, \tag{24b}\] \[\text{(4d)},\ \text{(21)},\ \text{(22b)--(22f)}. \tag{24c}\]

## IV Numerical Results

Fig. 2(a) shows the achievable sum rate versus the maximum transmit power \(P_{\max}\). The proposed MF-RIS achieves the highest sum rate among the considered schemes, since its amplification function alleviates the double-fading attenuation, which helps to improve the channel gain of cascaded links. Furthermore, due to the limitations faced by the active RIS and STAR-RIS counterparts (i.e., half-space coverage and double-fading attenuation), the MF-RIS improves the rate performance by 16% and 44% when \(P_{\max}=10\ \mathrm{dBm}\), respectively. Additionally, it is evident that all RIS-aided schemes achieve significant gains over the scheme without RIS. This demonstrates the superiority of using RIS to improve the performance of wireless networks.

Fig. 2(b) shows that the sum rate of all RIS-aided schemes increases with \(M\). This is because a larger \(M\) enables a higher beamforming gain, thus improving the system performance. In addition, with more degrees of freedom to manipulate signal propagation, the STAR-RIS enjoys a 6% higher sum rate than the SF-RIS. Moreover, although only the users located in the reflection space are served by the active RIS, it outperforms the STAR-RIS with a 12% higher sum rate. This is because the performance gain obtained from the signal amplification of the active RIS is greater than that from the full-space coverage of the STAR-RIS. This also implies that the signal amplification function plays an important role in improving the performance of RIS-aided networks.

Fig. 2(c) illustrates the sum rate versus the \(Y\)-coordinate of the RIS (from \(0\) to \(50\)), where the RIS moves from the BS side to the user side. We can observe that the sum rates of the STAR-RIS and the SF-RIS first decrease and then increase. The reason behind this is that the channel gain decreases with the link distance. Specifically, when the STAR-RIS and the SF-RIS are located close to the middle point, the received signals at the users are attenuated the most, resulting in the lowest sum rate. In contrast, owing to the signal amplification function, the MF-RIS and the active RIS are less affected by the double-fading attenuation, and they achieve 39% and 28% gains at the middle point compared to the SF-RIS. Moreover, the corresponding sum rate maintains a continuous upward trend even when the MF-RIS and the active RIS are far away from the BS. This is because, as the RIS comes closer to the users, the power of the incident signal at the RIS is weaker. Thus, under a fixed amplification power budget, the MF-RIS can provide more amplification gain when deployed closer to the users. This compensates for the attenuation caused by the double-fading issue. This observation also reveals that the MF-RIS should be deployed close to the users for better performance.

## V Conclusion

In this letter, we proposed a novel MF-RIS architecture to alleviate the double-fading attenuation by transmitting and reflecting the incident signal with power amplification. Then, we investigated the resource allocation problem in a downlink multiuser MF-RIS-aided NOMA network. Specifically, the active beamforming and MF-RIS coefficients were jointly optimized to maximize the achievable sum rate by leveraging SCA and a penalty-based method. Numerical results validated the effectiveness of the proposed MF-RIS and the superiority of MF-RIS over traditional RISs. In the future, we are interested in studying the coupled phase and hardware impairment problems of the MF-RIS.
In addition, the robust beamforming under imperfect CSI cases deserves exploration as well.
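To make the objective concrete, the sketch below evaluates the NOMA sum rate implied by the decomposition in (5)-(6), i.e. \(R_{k}=\log_{2}(1+(A_{k}B_{k})^{-1})\), for one random, unoptimized configuration. All dimensions, noise powers, channels, beamformers, and MF-RIS coefficients are hypothetical values chosen here; the RIS-noise term is replaced by its average power, and nothing below reproduces the letter's actual simulation setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dimensions: K users, N BS antennas, M MF-RIS elements.
K, N, M = 3, 4, 8
sigma_k2 = 1e-3      # user noise power (assumed value)
sigma_s2 = 1e-4      # noise amplified at the MF-RIS (assumed value)

def crandn(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h = crandn(K, N)      # direct BS-user channels h_k^H (rows)
g = crandn(K, M)      # MF-RIS-user channels g_k^H (rows)
H = crandn(M, N)      # BS-MF-RIS channel
W = crandn(N, K)      # beamformers w_k as columns (random, not optimized)
Theta = [np.diag(np.sqrt(rng.uniform(0, 1, M)) * np.exp(1j * rng.uniform(0, 2 * np.pi, M)))
         for _ in range(K)]   # per-user MF-RIS coefficient matrices Theta_k (assumed amplitudes)

def sum_rate(h, g, H, W, Theta):
    """Sum rate via the decomposition in (5)-(6): R_k = log2(1 + 1/(A_k B_k))."""
    rates = []
    for k in range(K):
        h_hat = h[k] + g[k] @ Theta[k] @ H                 # effective channel \hat{h}_k
        signal = abs(h_hat @ W[:, k]) ** 2                 # 1 / A_k
        interference = sum(abs(h_hat @ W[:, i]) ** 2 for i in range(k + 1, K))
        ris_noise = sigma_s2 * np.linalg.norm(g[k] @ Theta[k]) ** 2   # average of |g_k^H Theta_k n_s|^2
        B = interference + ris_noise + sigma_k2
        rates.append(np.log2(1 + signal / B))
    return sum(rates)

print(f"sum rate for a random (unoptimized) configuration: {sum_rate(h, g, H, W, Theta):.3f} bit/s/Hz")
```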
By flexibly manipulating the wireless propagation environment, reconfigurable intelligent surfaces (RISs) are promising for future wireless communications. However, the single-side coverage and double-fading attenuation faced by conventional RISs largely restrict their applications. To address this challenge, we propose a novel concept of multi-functional RIS (MF-RIS), which simultaneously provides reflection, transmission, and amplification for the incident signal. To enhance the performance of a downlink multiuser network with non-orthogonal multiple access (NOMA), we deploy an MF-RIS and maximize the sum rate by jointly optimizing the active beamforming and the MF-RIS coefficients. The proposed algorithm then solves the formulated non-convex problem by means of successive convex approximation and a penalty-based method.
2309.17274
A Ramsey-type phenomenon in two and three dimensional simplices
We develop a Ramsey-like theorem for subsets of the two and three-dimensional simplex. A generalization of the combinatorial theorem presented here to all dimensions would produce a new proof that $\textrm{Homeo}_+[0,1]$ is extremely amenable (a theorem due to Pestov) using general results of Uspenskij on extreme amenability in homeomorphism groups.
Sumun Iyer
2023-09-29T14:25:40
http://arxiv.org/abs/2309.17274v1
# A Ramsey-type phenomenon in two and three dimensional simplices ###### Abstract. We develop a Ramsey-like theorem for subsets of the two and three-dimensional simplex. A generalization of the combinatorial theorem presented here to all dimensions would produce a new proof that \(\mathrm{Homeo}_{+}[0,1]\) is extremely amenable (a theorem due to Pestov) using general results of Uspenskij on extreme amenability in homeomorphism groups. Cornell University _E-mail address_: ssi22@cornell.edu ## 1. Introduction This note develops a Ramsey-type phenomenon about subsets of the two-dimensional simplex: **Theorem 1.1**.: _Let \(F\) be a subset of \(\Delta^{2}=\{(x,y)\ :\ 0\leq x\leq y\leq 1\}\). Then one of the following is true:_ 1. \(F^{c}\) _has infinite rank_ 2. _any finite pattern is essential for_ \(F\)_._ Precise definitions of _rank_ and _essential patterns_ are in Section 2. For now we just mention that, loosely, the theorem says that any subset of the two-simplex is either geometrically "thin" or it is combinatorially very rich. Theorem 1.1 is proven in Section 2 (as Theorem 2.2); the proof is combinatorial but the moves are inspired by some geometric intuitions. The main interest of Theorem 1.1 is that it seems to indicate a new sort of Ramsey phenomenon. The theorem statement has a Ramsey-like structure and there are Ramsey theoretic gestures throughout the proof, but there does not appear to be any direct relation to existing Ramsey theorems. The motivation for formulating and proving Theorem 1.1 comes from topological dynamics and we say a bit about that now. A topological group is _extremely amenable_ if every continuous action of it on a compact Hausdorff spaces has a fixed point. For the class of Polish groups which arise as automorphism groups of Fraisse limits, extreme amenability is related to the structural Ramsey property in classes of finite structures (this is known as the _Kechris-Pestov-Todorcevic correspondence_, see [2] for more). We are interested here in extreme amenability outside the setting of Fraisse limit automorphism groups. Let \(\mathrm{Homeo}_{+}[0,1]\) be the group of orientation-preserving homeomorphisms of the unit interval. First, note that \(\mathrm{Homeo}_{+}[0,1]\) is known to be extremely amenable; this is a theorem of Pestov [3]. The standard proof goes like this: one notes that there is a map from the group \(\mathrm{Aut}(\mathbb{Q},\leq)\) of order-preserving bijections of the rationals into \(\mathrm{Homeo}_{+}[0,1]\) with dense image. Then an application of the classical Ramsey theorem and the Kechris-Pestov-Todorcevic correspondence implies that \(\mathrm{Aut}(\mathbb{Q},\leq)\) is extremely amenable. It is a general and easy to see fact that if a group \(G\) has an extremely amenable, dense subgroup, then \(G\) itself must be extremely amenable. Let \(X\) be a compact metric space and \(\mathrm{Homeo}(X)\) the group of homeomorphisms of \(X\) with the uniform convergence topology. A theorem of Uspenskij in [4] shows that determining extreme amenability of \(\mathrm{Homeo}(X)\) is equivalent to proving that for a certain countable family of \(\mathrm{Homeo}(X)\)-flows, the only minimal subflows are singletons. The construction in [4] suggests a possible new approach to proving that \(\mathrm{Homeo}_{+}[0,1]\) is extremely amenable, as Uspenskij notes. For \(n\in\mathbb{N}\), let \[\Delta^{n}=\{(x_{1},x_{2},\ldots,x_{n})\ :\ 0\leq x_{1}\leq x_{2}\leq\cdots x_{n }\leq 1\}\] be the standard realization of the \(n\)-dimensional simplex. 
The group \(\mathrm{Homeo}_{+}[0,1]\) acts on \(\Delta^{n}\) diagonally: \[g\cdot(x_{1},x_{2},\ldots,x_{n})=(g(x_{1}),g(x_{2}),\ldots,g(x_{n}))\] A consequence of Uspenskij's general theorem is that extreme amenability of \(\mathrm{Homeo}_{+}[0,1]\) is equivalent to the statement below: **Lemma 1.2**.: _(Lemma 1.2 of [4]) For each \(n\in\mathbb{N}\), every minimal, closed, \(\mathrm{Homeo}_{+}[0,1]\)-invariant subset of \(\text{Exp}(\Delta^{n})\) is a singleton._ Throughout, for \(K\) compact, \(\text{Exp}(K)\) is the compact space of all compact subsets of \(K\) with the Vietoris topology (see [1], Section 4.F for definitions). So a proof of Lemma 1.2 (that of course, does not simply apply Pestov's theorem) would provide a new proof that \(\mathrm{Homeo}_{+}[0,1]\) is extremely amenable. Uspenskij mentions that he's "not aware of a short, independent proof of the lemma," and this note began as an attempt to bridge this gap. We succeed in providing an independent proof for dimension \(n=2\) using Theorem 1.1. The definitions needed to state Theorem 1.1 and the proof of Theorem 1.1 occupy Section 2. We note that Section 2 makes no mention of topological dynamics or of the group \(\mathrm{Homeo}_{+}[0,1]\), it can be read completely on its own. In Section 3, we return to topological dynamics and show how Theorem 1.1 implies Uspenskij's lemma (Lemma 1.2 above) for \(n=2\). In Section 4, we formulate a Ramsey statement which is a natural generalization of Theorem 1.1 for all dimensions (Conjecture 4.2); an independent proof of this statement would answer Uspenskij's question. We then prove a partial result (Theorem 4.3) towards Conjecture 4.2 for the simplex of dimension 3. There are some interesting technical difficulties that arise in dimension 3 as opposed to dimension 2 and which may indicate some sort of new approach is needed to prove Conjecture 4.2 in all dimensions. ### Acknowledgements: I would like to thank Slawek Solecki for many helpful conversations about this project and also for encouraging me to continue to work on it. ## 2. A Ramsey theorem for the \(2\)-simplex By \(\Delta^{2}\), we denote the usual geometric realization of the two-simplex given by: \[\Delta^{2}=\{(x,y)\ :\ 0\leq x\leq y\leq 1\}\subseteq\mathbb{R}^{2}.\] This first definition assigns a rank to subsets of \(\Delta^{2}\) that captures how "thick" they are. **Definition 2.1**.: For \(S\subseteq\Delta^{2}\), say that _rank of \(S\) is at least \(n\)_ if there exists \[0\leq x_{0}<x_{1}\leq y_{1}<x_{2}\leq y_{2}<\cdots<x_{n}\leq y_{n}<y_{n}^{ \prime}\leq 1\] such that \[S\supseteq(x_{0},x_{1})\times(y_{1},y_{n}^{\prime})\cup(x_{0},x_{2})\times(y_ {2},y_{n}^{\prime})\cup\cdots\cup(x_{0},x_{n})\times(y_{n},y_{n}^{\prime})\] We will denote by \(\operatorname{rk}(S)\) the greatest integer \(n\) so that the rank of \(S\) is at least \(n\). If \(S\) has rank at least \(n\) for all \(n\in\mathbb{N}\), then we write \(\operatorname{rk}(S)=\infty\). Clearly, this rank is monotone in the sense that \[A\subset B\implies\operatorname{rk}(A)\leq\operatorname{rk}(B)\] Notice also that \(\operatorname{rk}(\Delta^{2})=\infty\) and in fact if we let \[D_{a,b}=\{(x,y)\in\Delta^{2}\ :\ a<x<y<b\}\] for any \(a<b\), then \(\operatorname{rk}(D_{a,b})=\infty\). These open triangles of the form \(D_{a,b}\) will be important later on, and we refer to them as _essential triangles_. The condition of being "combinatorially rich" is captured by the notion of _patterns_. 
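Before turning to patterns, here is a small computational illustration of Definition 2.1. Because containment of rectangles in an arbitrary set is awkward to test, the sketch works with an essential triangle \(D_{a,b}\), where a rectangle \((x_{0},x_{i})\times(y_{i},y_{n}^{\prime})\) lies inside \(D_{a,b}\) exactly when \(a\leq x_{0}\), \(y_{n}^{\prime}\leq b\), and \(x_{i}\leq y_{i}\); it constructs a witness showing \(\mathrm{rk}(D_{a,b})\geq n\) for any \(n\). The equal spacing of the witness points is just one convenient choice made here.

```python
def rank_witness_in_triangle(a, b, n):
    """Construct points a <= x0 < x1 <= y1 < ... < xn <= yn < y'_n <= b
    witnessing rank >= n for the essential triangle D_{a,b} (Definition 2.1)."""
    # 2n + 2 equally spaced points strictly inside (a, b); a simple choice.
    pts = [a + (b - a) * (i + 1) / (2 * n + 3) for i in range(2 * n + 2)]
    x0, rest = pts[0], pts[1:]
    xs = rest[0::2][:n]          # x1, ..., xn
    ys = rest[1::2][:n]          # y1, ..., yn
    y_last = pts[-1]             # y'_n
    return x0, xs, ys, y_last

def witnesses_rank_in_triangle(a, b, x0, xs, ys, y_last):
    """Check the ordering of Definition 2.1 and that every rectangle
    (x0, x_i) x (y_i, y'_n) lies inside D_{a,b} = {a < x < y < b}."""
    ordered = (x0 < xs[0] and ys[-1] < y_last
               and all(xs[i] <= ys[i] for i in range(len(xs)))
               and all(ys[i] < xs[i + 1] for i in range(len(xs) - 1)))
    # a rectangle (x0, x_i) x (y_i, y'_n) sits inside {a < x < y < b}
    # iff a <= x0, y'_n <= b and x_i <= y_i (then x < x_i <= y_i < y).
    inside = (a <= x0 and y_last <= b
              and all(xs[i] <= ys[i] for i in range(len(xs))))
    return ordered and inside

a, b, n = 0.2, 0.7, 5            # arbitrary essential triangle and target rank
w = rank_witness_in_triangle(a, b, n)
print(witnesses_rank_in_triangle(a, b, *w))   # True: rk(D_{a,b}) >= n for every n
```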
For a natural number \(k\), we use \([k]\) to denote the set \(\{0,1,\ldots,k-1\}\). For a set \(A\) and a natural number \(i\), \((A)^{i}\) is the collection of all subsets of \(A\) of cardinality \(i\). For \(n\) an even natural number, a _pattern of size \(n\)_ is a subset \(P\) of \(([n])^{2}\) so that \(\bigcup_{a\in P}a=[n]\) and \(a\cap b=\emptyset\) for each \(a,b\in P\) with \(a\neq b\). Let \(O:(\mathbb{N})^{2}\to\mathbb{N}\times\mathbb{N}\) be the map: \[O(\{a,b\})=\begin{cases}(a,b)&\text{ if }a<b\\ (b,a)&\text{ if }a>b\end{cases}\] For \(i=1,2\), let \(\operatorname{proj}_{i}:\mathbb{N}\times\mathbb{N}\to\mathbb{N}\) be the projection onto the \(i\)th coordinate. We also use \(\operatorname{proj}_{1}:\Delta^{2}\to[0,1]\) to be the projection onto the \(x\)-coordinate and \(\operatorname{proj}_{2}:\Delta^{2}\to[0,1]\) to be the projection onto the \(y\)-coordinate (the meaning should always be clear from context). If \(P\) is a pattern of size \(n\) and \(F\subseteq\Delta^{2}\), then a _copy of \(P\) in \(F\)_ is a function \[\sigma:O(P)\to F\] such that for any \(a,b\in O(P)\) and \(p,q\in\{\operatorname{proj}_{1},\operatorname{proj}_{2}\}\) \[p(a)<q(b)\implies p(\sigma(a))<q(\sigma(b))\] Notice that a copy \(\sigma\) of \(P\) induces an order preserving function which we denote by \(f_{\sigma}:[n]\to[0,1]\) defined by: \[f_{\sigma}(j)=\begin{cases}\operatorname{proj}_{1}(\sigma(a))&\text{ if }j= \operatorname{proj}_{1}(a)\\ \operatorname{proj}_{2}(\sigma(a))&\text{ if }j=\operatorname{proj}_{2}(a) \end{cases}\] We say that a pattern \(P\) is _essential_ for \(F\) if there is a copy of \(P\) in \(D_{a,b}\cap F\) for every \(a<b\). Here is the statement of the main theorem: **Theorem 2.2**.: _Let \(F\) be a subset of \(\Delta^{2}\). Then one of the following is true:_ 1. \(\text{rk}(F^{c})=\infty\)__ 2. _any finite pattern is essential for_ \(F\)_._ First let \(\psi_{k}^{n}:[n]\to[n+1]\) be given by: \[\psi_{k}^{n}(x)=\begin{cases}x&\text{ if }x<k\\ x+1&\text{ if }x\geq k\end{cases}\] We also have an inverse operator \((\psi_{k}^{n})^{-1}:[n+1]\setminus\{k\}\to[n]\): \[(\psi_{k}^{n})^{-1}(x)=\begin{cases}x&\text{ if }x<k\\ x-1&\text{ if }x\geq k+1\end{cases}\] Both maps are injective and order-preserving on their respective domains. Abusing notation slightly, we use \(\psi_{k}^{n}\) and \((\psi_{k}^{n})^{-1}\) also to represent the obvious maps induced on \((\mathbb{N})^{2}\) and on \(\mathbb{N}\times\mathbb{N}\) coordinate-wise. **Lemma 2.3**.: _Assume that \(F\) is a subset of \(\Delta^{2}\) and there exists \(M\in\mathbb{N}\) so that for any set \(S\subseteq\Delta^{2}\) with \(\text{rk}(S)\geq M\), \(S\cap F\neq\emptyset\). If \(P\) is a pattern of size \(n\) that is essential for \(F\), then for any \(k\leq n\), \(\psi_{k}^{n}(P)\cup\{(k,n+1)\}\) is a pattern of size \(n+2\) that is essential for \(F\)._ Proof.: Let \(P^{\prime}=\psi_{k}^{n}(P)\cup\{k,n+1\}\); we want to prove that \(P^{\prime}\) is essential for \(F\). We first take care of the case that \(k=n\). Fix \(a<b\) and since \(P\) is essential, let \(\sigma:O(P)\to F\cap D_{a,b}\) be a copy of \(P\). Let \(f_{\sigma}:[n]\to[0,1]\) be the associated order-preserving function. Note that \(f_{\sigma}(n-1)<b\) since \(\sigma(O(P))\subseteq D_{a,b}\). The essential triangle \(D_{f_{\sigma}(n-1),b}\) has rank \(\infty\) and so let \((x,y)\in F\cap D_{f_{\sigma}(n-1),b}\). 
Then, one can see that \(\tau:O(P^{\prime})\to F\cap D_{a,b}\) defined by \[\tau(a)=\begin{cases}\sigma(a)&\text{ if }a\in O(P)\\ (x,y)&\text{ if }a=(n,n+1)\end{cases}\] is a copy of \(P^{\prime}\) in \(F\cap D_{a,b}\) since \(y>x>f_{\sigma}(n-1)\). So \(P^{\prime}\) is essential for \(F\). Now assume that \(k<n\). Fix \(a<b\) and suppose for contradiction that \(F\cap D_{a,b}\) does not contain a copy of \(P^{\prime}\). Set \(a_{0}=a\) and \(b_{0}=b\). We will inductively define \((a_{i},b_{i},c_{i})\) for \(i=1,\dots,M+2\) so that for each \(i\geq 1\): 1. \(a_{i-1}<a_{i}<b_{i}\leq c_{i}<b_{i-1}\) 2. \(F\cap(a_{i},b_{i})\times(c_{i},b)=\emptyset\) Let \(\sigma_{1}:O(P)\to F\cap D_{a,b}\) be a copy of \(P\) in \(F\cap D_{a,b}\). Let \(f_{\sigma_{1}}:[n]\to[0,1]\) be the order-preserving function associated to \(\sigma_{1}\). Set \(a_{1}=f_{\sigma_{1}}(k-1)\), \(b_{1}=f_{\sigma_{1}}(k)\) and \(c_{1}=f_{\sigma_{1}}(n-1)\). Condition (1) is satisfied because \(f_{\sigma_{1}}\) is order-preserving and since \(\sigma(O(P))\subset D_{a,b}\), it must be that \(f_{\sigma_{1}}([n])\subset(a,b)\). To see that condition (2) is satisfied, suppose for contradiction that \((x,y)\in F\cap(a_{1},b_{1})\times(c_{1},b)\). Define \(\sigma_{1}^{\prime}:O(P^{\prime})\to F\cap D_{a,b}\) by \[\sigma_{1}^{\prime}(z)=\begin{cases}\sigma_{1}(w),&\text{ if }z=\psi_{k}^{n}(w) \\ (x,y),&\text{ if }z=(k,n+1)\end{cases}\] Now one can check that \(\sigma_{1}^{\prime}\) is a copy of \(P^{\prime}\) in \(F\cap D_{a,b}\), exactly because \((x,y)\in(a_{1},b_{1})\times(c_{1},b)=(f_{\sigma_{1}}(k-1),f_{\sigma_{1}}(k)) \times(f_{\sigma_{1}}(n-1),b)\). This cannot happen and so it must be that \(F\cap(a_{1},b_{1})\times(c_{1},b)=\emptyset\), i.e., that condition (2) holds. Suppose now that we have defined \((a_{n},b_{n},c_{n})\) satisfying (1) and (2). As \(P\) is essential for \(F\), let \(\sigma_{n+1}:O(P)\to F\cap D_{a_{n},b_{n}}\) be a copy of \(P\) and let \(f_{\sigma_{n+1}}:[n]\to[0,1]\) be the associated order-preserving function. Set \(a_{n+1}=f_{\sigma_{n+1}}(k-1)\), \(b_{n+1}=f_{\sigma_{n+1}}(k)\), and \(c_{n+1}=f_{\sigma_{n+1}}(n-1)\). Condition (1) for \(i=n+1\) is satisfied because \(\sigma_{n+1}(O(P))\subset D_{a_{n},b_{n}}\) and condition (2) is satisfied because if \((x,y)\in F\cap(a_{n+1},b_{n+1})\times(c_{n+1},b)\) then just as in the base case one can check that \[\sigma_{n+1}^{\prime}(z)=\begin{cases}\sigma_{n+1}(w),&\text{ if }z=\psi_{k}^{n}(w) \\ (x,y),&\text{ if }z=(k,n+1)\end{cases}\] is a copy of \(P^{\prime}\) in \(F\cap D_{a,b}\). Once \((a_{M}+2,b_{M}+2,c_{M}+2)\) has been defined, condition (2) implies that \[R:=(a_{M+2},b_{M+2})\times(c_{M+2},b)\cup(a_{M+2},b_{M+1}) \times(c_{M+1},b)\cup\\ \cdots\cup(a_{M+2},b_{1})\times(c_{1},b)\subset F^{c}\] and condition (1) implies that \[a_{M+2}<b_{M+2}\leq c_{M+2}<b_{M+1}\leq c_{M+1}<\cdots<b_{1}\leq c_{1}<b\] and so \(\operatorname{rk}(R)\geq M\) yet \(R\cap F=\emptyset\), which contradicts our assumption on \(F\). So \(P^{\prime}\) is essential for \(F\). Now we can prove Theorem 2.2: Proof of Theorem 2.2.: Suppose that \(F\) is a subset of \(\Delta^{2}\) and that \(\operatorname{rk}(F^{c})=M<\infty\). Then, every subset of \(S\subseteq\Delta^{2}\) with \(\operatorname{rk}(S)\geq M+1\) is such that \(S\cap F\neq\emptyset\). So we satisfy the assumptions of Lemma 2.3. Now we want to argue by induction on the size of patterns that any finite pattern is essential for \(F\). First the base case. 
It is clear that the pattern \(\{0,1\}\) is essential for \(F\) since \(F\) must intersect every essential triangle (recall that essential triangles have infinite rank) and \(\{0,1\}\) is the only pattern of size \(2\). Suppose now that every pattern of size \(n\) is essential for \(F\). Let \(P\) be a pattern of size \(n+2\). Let \(a\in P\) be the member of \(P\) with \(n+1\in a\) and suppose that \(a=\{k,n+1\}\) where of course \(k\leq n\). Note that \((\psi_{k}^{n})^{-1}\left(P\setminus\{a\}\right)\) is a pattern of size \(n\) and so is essential for \(F\). Then, Lemma 2.3 implies that \[P=\psi_{k}^{n}\left((\psi_{k}^{n})^{-1}\left(P\setminus\{a\}\right)\right)\cup \{(k,n+1\}\] is essential for \(F\). ## 3. Dynamical consequences We show in this section how Theorem 2.2 implies Theorem 1.2 in dimension 2. Recall that we consider the diagonal action of \(\mathrm{Homeo}_{+}[0,1]\) on \(\Delta^{2}\): \[g\cdot(x,y)=(g(x),g(y))\text{ for }(x,y)\in\Delta^{2}\text{ and }g\in\mathrm{ Homeo}_{+}[0,1]\] For \(A\subset\mathbb{R}^{2}\) and \(\epsilon>0\), define \[(A)_{\epsilon}=\{(x,y)\in\mathbb{R}^{2}\ :\ d((x,y),a)<\epsilon\text{ for some }a \in A\}\] where \(d\) is the Euclidean metric on \(\mathbb{R}^{2}\). By \(\partial\Delta^{2}\), we mean the boundary of \(\Delta^{2}\)-the union of the one-dimensional faces of \(\Delta^{2}\). First, here are two definitions capturing the properties of subsets of \(\Delta^{2}\) that we will be interested in. We say \(F\subseteq\Delta^{2}\) is _pseudo-dense_ if for any \(\epsilon>0\), there exists \(g\in\mathrm{Homeo}_{+}[0,1]\) so that \(g(F)\) is \(\epsilon\)-dense in \(\Delta^{2}\). We say \(F\subseteq\Delta^{2}\) is _thin_ if for any \(\epsilon>0\) there exists some \(g\in\mathrm{Homeo}_{+}[0,1]\) so that \(g(F)\subseteq\big{(}\partial\Delta^{2}\big{)}_{\epsilon}\). Lemma 3.2 below proves equivalent conditions for each of these two properties that connects them to Theorem 2.2. Lemma 3.1 is easy to check from the definition of rank and the fact that elements of \(\mathrm{Homeo}+[0,1]\) preserve the relative orders of sets of points in \([0,1]\). **Lemma 3.1**.: _For any set \(S\subset\Delta^{2}\) and any \(f\in\mathrm{Homeo}_{+}[0,1]\), \(\text{rk}(f(S))=\text{rk}(S)\)._ **Lemma 3.2**.: _Let \(F\) be a subset of \(\Delta^{2}\). Then:_ 1. \(F\) _is pseudo-dense if and only if_ \(F\) _contains a copy of every finite pattern._ 2. \(F\) _is thin if and only if_ \(\text{rk}(F^{c})=\infty\)_._ It is convenient here to define a particular type of pattern. A pattern \(P\) is a _grid of width \(n\)_ if \[O(P)=\{(x_{i}^{j},y_{i}^{j})\ :\ 0\leq i\leq j\leq n-1\}\] satisfying the condition that \[\{x_{1}^{1},\ldots,x_{1}^{n},y_{1}^{1}\} <\{x_{2}^{2},\ldots,x_{2}^{n},y_{1}^{2},y_{2}^{2}\}<\cdots\] \[<\{x_{j}^{j},\ldots,x_{j}^{n},y_{1}^{j},y_{2}^{j},\ldots,y_{j}^{ j}\}<\cdots<\{x_{n}^{n},y_{n}^{n}\} \tag{3.1}\] where we write \(A<B\) for finite subsets \(A,B\) of \(\mathbb{N}\) if \(\max(A)<\min(B)\). Another formulation: a pattern \(P\) is a grid of width \(n\) if there exists reals \[0<x_{1}<y_{1}<x_{2}<y_{2}<\cdots<x_{n+1}<y_{n+1}<1\] so that \(|O(P)\cap(x_{i},x_{i+1})\times(y_{j},y_{j+1})|=1\) for each \(1\leq i\leq j\leq n\). It is not hard to see with a moment of thought that for any finite pattern \(P\), there exists some \(N\) so that any copy of a grid of width \(N\) contains a copy of \(P\). Proof of Lemma 3.2.: **(1):** First suppose that \(F\) is pseudo-dense. Fix \(n\in\mathbb{N}\). 
Let \(g\in\mathrm{Homeo}_{+}[0,1]\) be so that \(g(F)\) is \(\frac{1}{n}\)-dense in \(\Delta^{2}\). So for \(0\leq i\leq j\leq n-1\), \(g(F)\) intersects the set \(\big{(}\frac{i}{n},\frac{i+1}{n}\big{)}\times\big{(}\frac{i}{n},\frac{i+1}{n} \big{)}\) and we can choose a point \((a_{i}^{j},b_{i}^{j})\) in the intersection. Let \(P=\{(x_{i}^{j},y_{i}^{j})\ :\ 0\leq i\leq j\leq n-1\}\) be a grid of width \(n\) with the \(x_{i}^{j},y_{i}^{j}\) satisfying Condition 3.1. It is then not hard to check that \(\sigma:O(P)\to F\) given by \[\sigma\left((x_{i}^{j},y_{i}^{j})\right)=g^{-1}\cdot(a_{i}^{j},b_{i}^{j})\] is a copy of \(P\) in \(F\), since \(g^{-1}\) is order-preserving. Since \(F\) contains a copy of grids of arbitrary width, \(F\) contains a copy of every finite pattern. Conversely, suppose \(F\) contains a copy of every pattern. Then, given \(\epsilon>0\), let \(n\) be so that \(\frac{4}{2n+3}\sqrt{2}<\epsilon\). Let \(\sigma:O(P)\to F\) be a copy of \(P\) in \(F\), where \(P\) is a grid of width \(n\). Let \(x_{1},y_{1},\ldots,x_{n+1},y_{n+1}\) be as in the second formulation of what is means to be a grid. Choose \(g\in\mathrm{Homeo}_{+}[0,1]\) so that \(g(x_{i})=\frac{2i-1}{2n+3}\) and \(g(y_{i})=\frac{2i}{2n+3}\) for \(1\leq i\leq n+1\). Then, \(g(F)\) has the property that for any odd \(i\) and even \(j\) with \(2n+2\geq j>i\geq 1\), the square \(\left(\frac{i}{2n+3},\frac{i+2}{2n+3}\right)\times\left(\frac{j}{2n+3},\frac{j +2}{2n+3}\right)\) intersects \(g(F)\) this implies that \(g(F)\) is \(\epsilon\)-dense in \(\Delta^{2}\). **(2):** Suppose that \(F\) is thin and fix \(\epsilon>0\). Let \(g\) be in \(\mathrm{Homeo}_{+}[0,1]\) so that \(g(F)\subset\left(\partial\Delta^{2}\right)_{\epsilon}\). Let \(k\in\mathbb{N}\) satisfy \[(2k+3)\epsilon<1-\epsilon \tag{3.2}\] Then, observe that \[(\epsilon,2\epsilon)\times(3\epsilon,(2k+3)-\epsilon)\cup(\epsilon,4 \epsilon)\times(5\epsilon,(2k+3)-\epsilon)\cup\cdots\] \[\cup(\epsilon,2k\epsilon)\times((2k+1)\epsilon,(2k+3)\epsilon) \subseteq\left(g(F)\right)^{c}\] and so \(\mathrm{rk}(g(F)^{c})\geq k\). By Lemma 3.1, \(\mathrm{rk}(F^{c})\geq k\). By making \(\epsilon\) small, we can find choices of \(k\) satisfying 3.2 that as as large as we wish; so \(\mathrm{rk}(F^{c})=\infty\). Before we prove the converse, notice the following: if \((x,y)\in\Delta^{2}\) is such that one of the three conditions below holds: 1. \(x<\epsilon\) 2. \(y>1-\epsilon\) 3. \(y-x<\epsilon\) then, \((x,y)\in\left(\partial\Delta^{2}\right)_{\epsilon}\). Now, suppose \(\mathrm{rk}(F^{c})\geq n\). Let \(\beta=\frac{1}{2n+4}\). Let: \[x_{0}<x_{1}\leq y_{1}<\cdots<x_{n}\leq y_{n}<y_{n}^{\prime}\] be as in Definition 2.1, witnessing that \(\mathrm{rk}(F^{c})\geq n\). Let \(f\in\mathrm{Homeo}_{+}[0,1]\) so that \(f(x_{0})=\beta\); \(f(x_{i})=2i\beta\) for \(1\leq i\leq n\); \(f(y_{i})=(2i+1)\beta\) for \(1\leq i\leq n\); and \(f(y_{n}^{\prime})=\frac{2n+3}{2n+4}=1-\beta\). Then suppose that \((x,y)\notin f(F^{c})\). This implies \[(x,y)\notin(\beta,2\beta)\times(3\beta,1-\beta)\cup(\beta,4\beta)\times(5\beta,1-\beta)\cup\cdots\cup(\beta,2n\beta)\times((2n+1)\beta,1-\beta)\] Now, there are three options for \(x\): (1) \(x<\beta\), (2) \(x>2n\beta\), or (3) \(2k\beta\leq x\leq(2k+2)\beta\) for some \(0\leq k\leq n\). Of course, (2) implies that \(y>2n\beta=1-4\beta\). Case (3) implies that either \(y<(2k+3)\beta\) in which case \(y-x<(2k+3)\beta-2k\beta=3\beta\) or \(y>1-\beta\). 
So by our observation above, \((x,y)\in\left(\partial\Delta^{2}\right)_{4\beta}\) and thus that \(f(F^{c})^{c}=f(F)\subseteq\left(\partial\Delta^{2}\right)_{4\beta}\). It follows that \(\mathrm{rk}(F^{c})=\infty\) implies that \(F\) is thin. The following fact about the topology of \(\text{Exp}(Y)\) will be useful: for \(F\subseteq\text{Exp}(Y)\) and \(A\in\text{Exp}(Y)\), \(A\in\overline{F}\) if for every \(\epsilon>0\), there exists \(X\in F\) so that \(A\subseteq(X)_{\epsilon}\) and \(X\subseteq(A)_{\epsilon}\). **Proposition 3.3**.: _Every minimal subflow of \(\text{Exp}(\Delta^{2})\) is a singleton. Moreover, it is a union of faces of \(\Delta^{2}\)._ First a lemma which illustrates the method of proving Proposition 3.3 in the easy one-dimensional case and will be used in the proof: **Lemma 3.4**.: _Every minimal subflow of \(\text{Exp}([0,1])\) is a singleton. Actually, it is a union of faces of the one-simplex \([0,1]\), that is, one of:_ \[\{[0,1]\},\{0\},\{1\},\{0,1\}\] Proof.: Let \(X\) be a minimal subflow of \(\text{Exp}([0,1])\) and let \(A\in X\). There are a few cases to consider. Suppose first that there is an interval \((a,b)\) contained in \(A^{c}\) and that \(A\) contains points less than \(a\) and greater than \(b\). Then, for any \(\epsilon>0\), there is some \(g\in\text{Homeo}_{+}[0,1]\) so that \(g(a)=\frac{\epsilon}{2}\) and \(g(b)=1-\frac{\epsilon}{2}\). Notice that for such a \(g\), \(g(A)\subseteq\left(\{0,1\}\right)_{\epsilon}\) and \(\{0,1\}\subset(g(A))_{\epsilon}\). Since \(X\) is \(\text{Homeo}_{+}[0,1]\)-invariant and closed in \(\text{Exp}([0,1])\), it follows that \(\{0,1\}\in X\). But \(\{0,1\}\) is clearly a fixed point of the action of \(\text{Homeo}_{+}[0,1]\) on \(\text{Exp}([0,1])\) and \(X\) is minimal; so \(X=\{\{0,1\}\}\). Similar arguments in the cases that \(A\subset[b,1]\) (resp. \(A\subset[0,a]\)) give that \(X=\{\{1\}\}\) (resp. \(X=\{\{0\}\}\)). If there is no such interval \((a,b)\subset A^{c}\); then \(A\) is dense in \([0,1]\) and it follows that \(A=[0,1]\) and so \(X=\{\{[0,1]\}\}\) since \([0,1]\) is a fixed point of the action. Now we can prove the proposition: Proof of Proposition 3.3.: Let \(X\) be a minimal subflow of \(\text{Exp}(\Delta^{2})\). Let \(A\in X\). Lemma 3.2 and Theorem 2.2 imply that at least one of the following two things happens: 1. \(A\) is pseudo-dense. 2. \(A\) is thin. Since \(X\) is \(\text{Homeo}_{+}[0,1]\)-invariant, \(g(A)\in X\) for each \(g\in\text{Homeo}_{+}[0,1]\). In the case that (1) occurs, the fact that \(X\) is a closed subset of \(\text{Exp}(\Delta^{2})\) implies that \(\Delta^{2}\in X\). It is clear that \(\Delta^{2}\) is a fixed point of the action of \(\text{Homeo}_{+}[0,1]\) and since \(X\) is minimal it must be that \(X=\{\Delta^{2}\}\). In the case that (2) occurs, compactness of \(X\) implies that there exists some \(B\in X\) so that \(B\subseteq\partial\Delta^{2}\). The boundary of \(\Delta^{2}\) is composed of three line segments. Consider for example \[B_{0}:=B\cap\{(x,y)\ :\ x=0\text{ and }0\leq y\leq 1\}\] By Lemma 3.4, there exists some \(B^{\prime}\in X\) so that \(B^{\prime}\subseteq\partial\Delta^{2}\) (notice that each face of \(\Delta^{2}\) is preserved set-wise by the action) and so that \[B^{\prime}\cap\{(x,y)\ :\ x=0\text{ and }0\leq y\leq 1\}\] is some union of faces of \(\{(x,y)\ :\ x=0\text{ and }0\leq y\leq 1\}\). 
Repeating this argument two more times on the other two one-dimensional faces of \(\Delta^{2}\), applying in turn the fact that the action of \(\operatorname{Homeo}_{+}[0,1]\) preserves each face set-wise, we produce \(C\in X\) so that \(C\subseteq\partial\Delta^{2}\) is a union of faces of \(\Delta^{2}\). Since \(C\) is a fixed point of the action, \(X=\{C\}\). ## 4. A \(3\)-dimensional theorem We develop a natural analog of the notion of rank in two dimensions to higher dimensions, conjecture the natural generalization of Theorem 2.2 to higher dimensions, and then prove a theorem for the \(3\)-simplex that is partial progress towards resolving the conjecture. We take the usual representation of the \(n\)-simplex: \[\Delta^{n}=\{(x_{1},x_{2},\ldots,x_{n})\ :\ 0\leq x_{1}\leq x_{2}\leq\cdots\leq x _{n}\leq 1\}\subset\mathbb{R}^{n}\] **Definition 4.1**.: A set \(A\subset\Delta^{n}\) has _rank \(\geq m\)_ if there exists \[0 \leq x_{1}^{0}<x_{1}^{1}\leq x_{2}^{1}\leq x_{3}^{1}\leq\cdots \leq x_{n}^{1}<x_{1}^{2}\leq x_{2}^{2}\leq\cdots\] \[\leq x_{n}^{2}<\cdots<x_{1}^{m}\leq x_{2}^{m}\leq\cdots\leq x_{n} ^{m}<x_{n}^{m+1}\leq 1\] such that: \[\operatorname{int}\left(\bigcup_{0\leq i_{1}<i_{2}<\cdots<i_{n}\leq m}[x_{1} ^{i_{1}},x_{1}^{i_{1}+1}]\times[x_{2}^{i_{2}},x_{2}^{i_{2}+1}]\times\cdots \times[x_{n}^{i_{n}},x_{n}^{i_{n}+1}]\right)\subseteq A.\] Just as in the two dimensional case, we define patterns. For \(N\in\mathbb{N}\) where \(n|N\), an \(n\)_-pattern of size \(N\)_ is a subset \(P\) of \(([N])^{n}\) such that \(\bigcup_{a\in P}a=[N]\) and \(a\cap b=\emptyset\) for each \(a,b\in P\) with \(a\neq b\). Let \[O_{n}:(\mathbb{N})^{n}\to\underbrace{\mathbb{N}\times\cdots\times\mathbb{N}}_{ n\text{-times}}\] be given by \(O_{n}(\{a_{1},\ldots,a_{n}\})=(a_{\tau(1)},a_{\tau(2)},\ldots,a_{\tau(n)})\) where \(\tau\) is the unique permutation of \(\{1,2,\ldots,n\}\) such that \(a_{\tau(1)}<a_{\tau(2)}<\ldots<a_{\tau(n)}\). As in dimension \(2\) we use \(\operatorname{proj}_{i}:\underbrace{\mathbb{N}\times\cdots\times\mathbb{N}} _{n\text{-times}}\to\mathbb{N}\) to be the projection onto the \(i\)th coordinate and also use \(\operatorname{proj}_{i}:\Delta^{n}\to[0,1]\) to be projection onto the \(i\)th coordinate, \(x_{i}\). When \(P\) is an \(n\)-pattern of size \(N\) and \(F\subseteq\Delta^{n}\), a _copy of \(P\) in \(F\)_ is a function \[\sigma:O_{n}(P)\to F\] so that for any \(a,b\in O_{n}(P)\) and any \(p,q\in\{\operatorname{proj}_{1},\operatorname{proj}_{2},\ldots,\operatorname{ proj}_{n}\}\) \[p(a)<q(b)\implies p(\sigma(a))<q(\sigma(b)).\] Notice that a copy \(\sigma\) of \(P\) induces an order preserving function \(f_{\sigma}:[N]\to[0,1]\) \[f_{\sigma}(m)=\operatorname{proj}_{i}(\sigma(a))\] where \(i\leq n\) and \(a\) is such that \(m=\operatorname{proj}_{i}(a)\). Given \(0\leq a<b\leq 1\), we let \[D_{a,b}^{n}=\{x\in\Delta^{n}\ :\ \operatorname{proj}_{i}(x)\in(a,b)\text{ for all }i\leq n\}\] and call each such set \(D_{a,b}^{n}\) an _essential \(n\)-simplex_. An \(n\)-pattern \(P\) is _essential for \(F\)_ if there is a copy of \(P\) in \(F\cap D_{a,b}^{n}\) for each \(a<b\). **Conjecture 4.2**.: _Let \(F\subseteq\Delta^{n}\). Then at least one of the following holds:_ 1. \(\text{rk}(F^{c})=\infty\)__ 2. 
_any finite_ \(n\)_-pattern is essential for_ \(F\)__ An argument very similar to that presented in Section 2 gives that Conjecture 1 implies that for any \(n\in\mathbb{N}\), every minimal \(\text{Homeo}_{+}[0,1]\)-subflow of \(\text{Exp}(\Delta^{n})\) is a singleton that consists of a union of faces of \(\Delta^{n}\). Therefore a proof of Conjecture 4.2 (which is independent of Pestov's theorem) would produce a new proof of the extreme amenability of \(\text{Homeo}_{+}[0,1]\). Here is the theorem in dimension 3 that represents partial progress towards Conjecture 1. Theorem 4.3 considers the first "non-trivial" pattern in 3-dimensions. **Theorem 4.3**.: _Let \(F\subseteq\Delta^{3}\). Then at least one of the following holds:_ 1. \(\text{rk}(F^{c})=\infty\)__ 2. _the pattern_ \(\{0,2,4\},\{1,3,5\}\) _is essential for_ \(F\)__ From now on, we will only be working with \(\Delta^{3}\) and with 3-patterns; when we say _pattern_ we always mean a 3-pattern. First, a bit of notation. For \(j_{1},j_{2},l_{1},l_{2}\in\mathbb{N}\) with \(j_{1}<j_{2}\) define: \[\Phi_{(j_{1},l_{1}),(j_{2},l_{2})}:\mathbb{N}\to\mathbb{N}\] by \[\Phi_{(j_{1},l_{1}),(j_{2},l_{2})}(x)=\begin{cases}x&\text{ if }x<j_{1}\\ x+l_{1}&\text{ if }j_{1}\leq x<j_{2}\\ x+l_{1}+l_{2}&\text{ if }x\geq j_{2}\end{cases}\] We also define \[\Phi_{(j_{1},l_{1})}:\mathbb{N}\to\mathbb{N}\] by \[\Phi_{(j_{1},l_{1})}(x)=\begin{cases}x&\text{ if }x<j_{1}\\ x+l_{1}&\text{ if }x\geq j_{1}\end{cases}\] We use also the convention that \[\Phi_{(j_{1},l_{1}),(j_{1},l_{2})}=\Phi_{(j_{1},l_{1}+l_{2})}.\] Every map defined above is injective and order-preserving on its domain. Abusing notation, we will use \(\Phi_{(j_{1},l_{i}),(j_{2},l_{2})}\) and \(\Phi_{(j_{1},l_{1})}\) to denote the obvious corresponding map on \(2^{\mathbb{N}}\), the collection of subsets of \(\mathbb{N}\), and the obvious coordinate-wise map on \(\mathbb{N}\times\mathbb{N}\times\mathbb{N}\) as well. For \(k\in\mathbb{N}\) we let \[T_{k}:\mathbb{N}\to\mathbb{N}\] be the translation given by \[T_{k}(i)=i+k\] and we use \(T_{k}\) also to denote the obvious extensions to \(2^{\mathbb{N}}\) and \(\mathbb{N}\times\mathbb{N}\times\mathbb{N}\). Now suppose that \(P\) is a pattern of size \(n\), \(j_{1}\leq j_{2}\leq n\), \(Q\) is a pattern of size \(m\), and \(i\leq m\). Then when \(j_{1}<j_{2}\), by \(P_{j_{1}}^{j_{2}}\oplus Q_{i}\) we mean the pattern: \[\Phi_{(j_{1},i),(j_{2},m-i)}(P)\cup\Phi_{(0,j_{1}),(i,j_{2}-j_{1})}(Q)\] A consequence of pattern \(Q\) of size \(n\) being essential for a set \(F\) is that for any pattern \(P\) which is essential for \(F\) and any \(l_{1}<l_{2}\leq m\) where \(m\) is the size of \(P\), the pattern \[P_{l_{1}}^{l_{2}}\oplus Q_{n}\] is essential for \(F\). We will use this easy observation implicitly many times in the proofs below. The key construction is that of a _chain_. Fix a pattern \(P\) of size \(n\) and \(j<k\leq n\). We inductively define a _\(l\)-chain of \(P\) at \(j,k\)_, which we denote by \(\operatorname{Ch}_{j,k}^{l}(P)\) as follows. A 1-chain of \(P\) at \(j,k\) is just \(P\) \[\operatorname{Ch}_{j,k}^{1}=P\] Having defined \(\operatorname{Ch}_{j,k}^{l}(P)\), define \[\operatorname{Ch}_{j,k}^{l+1}(P)=\left(\operatorname{Ch}_{j,k}^{l}(P)\right) _{lj}^{(l-1)j+k}\oplus P_{k}\] It is helpful to be able to keep track of the individual copies of \(P\) that form a chain of \(P\). Suppose that \(\sigma:O(\operatorname{Ch}_{j,k}^{l}(P))\to\Delta^{2}\) is a copy of the \(l\)-chain of \(P\) at \(j,k\) and that \(P\) is of size \(n\). 
Then for \(i=1,\ldots,l\) we denote by \(\sigma^{(i)}\) the copy of \(P\) added in stage \(i\) of the inductive construction of the chain. To be precise, define \(\sigma^{(i)}:O(P)\to\Delta^{2}\) by: \[\sigma^{(1)}=\sigma\circ\Phi_{(j,(l-2)(n)+k),(k,n-k)}\] and for \(i=2,\ldots,l\): \[\sigma^{(i)}=\sigma\circ(T_{j})^{i-1}\circ\Phi_{(j,(l-i-1)(n)+k),(k,n-j)} \tag{4.1}\] One downside to the notation is that it tends to obscure the geometric intuition of patterns "glued" together in specific formations that underlies nearly all of the arguments to come. The reader is encouraged to draw diagrams while reading the proofs; this will make the arguments much easier to follow. To demonstrate, suppose that \(P\) and \(Q\) are arbitrary patterns of size \(m\) and \(n\) respectively, and \(j_{1}<j_{2}\leq n\) and \(k\leq m\), Figure 1 is a diagram of \(P_{j_{1}}^{j_{2}}\oplus Q_{k}\). **Lemma 4.4**.: _Let \(P\) be a pattern of size \(n\) with \(j<k<n\). Assume \(F\subseteq\Delta^{2}\) is so that \(\text{rk}(F^{c})<\infty\). If \(\text{Ch}_{j,k}^{m}(P)\) is essential for \(F\) for every \(m\in\mathbb{N}\), then_ \[P^{\prime}=\Phi_{(j,1),(k,1)}(P)\cup\{(j,k+1,n+2)\}\] Figure 1. A diagram of \(P_{j_{1}}^{j_{2}}\oplus Q_{k}\); the pattern \(P\) is in red and \(Q\) is in black. _is essential for \(F\)._ Proof.: Let \(m\) be such that every set \(S\) with \(\operatorname{rk}(S)\geq\lfloor\frac{m}{2}\rfloor\) intersects \(F\). Let \(a<b\) and let \(\sigma:O\left(\operatorname{Ch}_{j,k}^{m}(P)\right)\to F\cap D_{a,b}\) be a copy of \(\operatorname{Ch}_{j,k}^{m}(P)\). Let \(f_{\sigma}\) be the order-preserving function \([mn]\to[0,1]\) induced by \(\sigma\). Let \(f_{\sigma^{(i)}}:[n]\to[0,1]\) be the order-preserving functions induced by \(\sigma^{(i)}\), the \(i\)th copy of \(P\) in the chain as defined above. Now consider the set: \[S=\bigcup_{i=1}^{m}(f_{\sigma^{(i)}}(j-1),f_{\sigma^{(i)}}(j))\times(f_{\sigma^ {(i)}}(k-1),f_{\sigma^{(i)}}(k))\times(f_{\sigma^{(i)}}(n-1),b)\] **Claim 4.5**.: \(\operatorname{rk}(S)\geq\lfloor\frac{m}{2}\rfloor\)_._ Once we have Claim 4.5, we are done; our assumption on \(F\) implies that \(F\cap S\neq\emptyset\) and in particular, there exists \[(x,y,z)\in F\cap(f_{\sigma^{(i)}}(j-1),f_{\sigma^{(i)}}(j))\times(f_{\sigma^ {(i)}}(k-1),f_{\sigma^{(i)}}(k))\times(f_{\sigma^{(i)}}(n-1),b)\] Then, we can extend copy \(\sigma^{(i)}\) of \(P\) to a copy \((\sigma^{(i)})^{\prime}\) of \(P^{\prime}\) in \(F\cap D_{a,b}\) by: \[(\sigma^{(i)})^{\prime}(u)=\begin{cases}\sigma^{(i)}(v)&\text{ if }u=\Phi_{(j,1),(k,1)}(v)\\ (x,y,z)&\text{ otherwise}\end{cases}\] So it remains to show the Claim. Proof of Claim 4.5.: Let \(x_{0}=f_{\sigma^{(m)}}(j-1)\), \(x_{1}=f_{\sigma^{(m)}}(j)\), \(y_{1}=f_{\sigma^{(m)}}(k-1)\), and \(z_{1}=f_{\sigma^{(m)}}(n-1)\). For \(2\leq i\leq\lfloor\frac{m}{2}\rfloor\), let \(x_{i}=f_{\sigma^{(m-2(i-1))}}(j)\), \(y_{i}=f_{\sigma^{(m-2(i-1))}}(k-1)\), and \(z_{i}=f_{\sigma^{(m-2(i-1))}}(n-1)\). Since each \(f_{\sigma^{(i)}}\) is order-preserving, it is clear that \(x_{i}\leq y_{i}\leq z_{i}\) for each \(i\). We check that for each \(i\), \(z_{i}<x_{i+1}\). First we have: \[z_{i} =f_{\sigma^{(m-2i+2)}}(n-1)\] \[=f_{\sigma}\left((T_{j})^{m-2i+1}\circ\Phi_{(j,(m-(m-2i+2)-1)(n)+ k),(k,n-j)}(n-1)\right)\] \[=f_{\sigma}\left((T_{j})^{m-2i+1}(n-1+(2i-3)(n)+k+n-j)\right)\] \[=f_{\sigma}\left((2i-1)(n)+k-j-1+j(m-2i+1)\right)\] The second equality above is using Equation 4.1. 
A similar computation shows that \[x_{i+1}=f_{\sigma}((2i-1)(n)+k+j+j(m-2i-1))\] Since \[(2i-1)(n)+k+j+j(m-2i-1)-((2i-1)(n)+k-j-1+j(m-2i+1))=1>0\] and \(f_{\sigma}\) is order-preserving, we have that \(x_{i+1}<z_{i}\). Clearly also \(a<x_{0}<x_{1}\) and \(z_{\lfloor\frac{m}{2}\rfloor}<b\). We now must check that \[\operatorname{int}\left(\bigcup_{0\leq p<q<r\leq m}[x_{p},x_{p+1}]\times[y_{q },y_{q+1}]\times[z_{r},z_{r+1}]\right)\subseteq S \tag{4.2}\] For \(i=1,2,\ldots,m\), let \[C_{i}=(f_{\sigma^{(i)}}(j-1),f_{\sigma^{(i)}}(j))\times(f_{\sigma^{(i)}}(k-1),f_{ \sigma^{(i)}}(k))\times(f_{\sigma^{(i)}}(n-1),b).\] To check 4.2, it suffices to check that \[[x_{p},x_{p+1}]\times[y_{q},y_{q+1}]\times[z_{r},z_{r+1}]\subseteq C_{m-2q-2} \cup C_{m-2q-1}\] when \(0<p\), \(p+1<q\), \(q+1<r\), and \(r<m\). We leave this computation to the reader to check using Equation 4.1 repeatedly. Analogous containments hold: \[[x_{p},x_{p+1})\times(y_{q},y_{q+1}]\times[z_{r},z_{r+1}]\subseteq C_{m-2q-2} \cup C_{m-2q-1}\] when \(0<p\), \(p+1=q\), \(q+1<r\), and \(r<m\) and \[(x_{p},x_{p+1}]\times[y_{q},y_{q+1})\times(z_{r},z_{r+1}]\subseteq C_{m-2q-2} \cup C_{m-2q-1}\] when \(p=0\), \(p+1<q\), \(q+1=r\), \(r<m\) and so on. It follows that \(\text{rk}(S)\geq\lfloor\frac{m}{2}\rfloor\). The lemma below captures a specific situation in which one has a high rank set: **Lemma 4.6**.: _Let \(a_{1},b_{1},c_{1},a_{2},b_{2},c_{2},\ldots,a_{m},b_{m},c_{m}\) be such that for each \(i\), \(a_{1}<a_{i+1}<b_{i+1}<c_{i+1}<b_{i}\) and \(b_{1}<d\) and define_ \[S=\bigcup_{i=1}^{m}(a_{i},b_{i})\times(a_{i},b_{i})\times(c_{i},d)\] _Then, \(\text{rk}(S)\geq m\)._ Proof.: We will choose inductively numbers \[x_{0},x_{1},y_{1},z_{1},\ldots,x_{m},y_{m},z_{m},z_{m+1}\] that witness that rank of \(S\) is at least \(m\) as in Definition 4.1. Let \(x_{0}=a_{m}\), and choose \(x_{1}=y_{1}\in(a_{m},b_{m})\). Then, choose \(z_{1}\in(c_{m},b_{m-1})\). Having chosen \(x_{i-1}\), \(y_{i-1}\), \(z_{i-1}\), choose \(x_{i}=y_{i}\in(z_{i-1},b_{m-i+1})\) and then choose \(z_{i}\in(c_{m-i+1},b_{m-i})\). Set \(z_{m+1}=d\). Notice that \[x_{0}<x_{1}=y_{1}<z_{1}<x_{2}=y_{2}<z_{2}<\cdots<x_{m}=y_{m}<z_{m}<z_{m+1}\] Checking that these values witness that \(\text{rk}(S)\geq m\) is similar to checking the containments at the end of the proof of Lemma 4.4, so we omit the proof. Lemma 4.6 has the following corollary that allows one to augment patterns in a certain way. **Corollary 4.7**.: _Assume \(F\) is so that \(\text{rk}(F^{c})<\infty\). Let \(P\) be an essential pattern for \(F\) of size \(n\) and let \(k<n\). Then the pattern_ \[\Phi_{(k,2)}(P)\cup\{(k,k+1,n+2)\}\] _is essential for \(F\)._ Proof.: Let \(M\) be so that every set of rank at least \(M\) intersects \(F\). Let \(P\) be an essential for \(F\) pattern of size \(n\). Fix \(0\leq e<f\leq 1\). Set \(P^{\prime}=\Phi_{(k,2)}(P)\cup\{(k,k+1,n+2)\}\); we want to show that there is a copy of \(P^{\prime}\) in \(F\cap D_{e,f}\). We will inductively define patterns \(K_{1},\dots,K_{M}\). Let \(K_{1}=P\). Given \(K_{i}\), define \[K_{i+1}=(K_{i})_{(ik,n)}\oplus P_{n}\] Clearly \(K_{1}\) is essential and \[K_{i}\text{ essential }\implies K_{i+1}\text{ essential}\] because \(P\) is essential. So \(K_{M}\) is essential. Now let \(\sigma:O(K_{M})\to F\cap D_{e,f}\) be a copy of \(K_{M}\) and let \(f_{\sigma}:[Mn]\to[0,1]\) be the associated order-preserving function. We let \(\sigma^{(i)}:O(P)\to F\cap D_{e,f}\) be the copy of \(P\) in \(\sigma\) added in the construction of \(K_{i}\) from \(K_{i-1}\). 
For \(i=1,\dots,M\) set: \[a_{i} =f_{\sigma}(ik-1)=f_{\sigma^{(i)}}(k-1)\] \[b_{i} =f_{\sigma}(ik+n(M-i)-1)=f_{\sigma^{(i)}}(k)\] \[c_{i} =f_{\sigma}((M-i+1)(n)+(i-1)(k)-1)=f_{\sigma^{(i)}}(n-1)\] and set \(d=f\). Now Lemma 4.6 implies that the set \[S=\bigcup_{i=1}^{M}(a_{i},b_{i})\times(a_{i},b_{i})\times(c_{i},d)\] has rank at least \(M\) and so intersects \(F\). This means there exists \(j\leq m\) and \((x,y,z)\in F\) so that \(x,y\in(a_{j},b_{j})\) and \(z\in(c_{j},d)\). Then, one checks that \[\sigma^{\prime}:O(P^{\prime})\to F\] given by \[\sigma^{\prime}(u)=\begin{cases}f_{\sigma_{j}}(v)&\text{ if }u=\Phi_{(k,2)}(v) \\ (x,y,z)&\text{ if }u=(k,k+1,n+2)\end{cases}\] is a copy of \(P^{\prime}\) in \(F\cap D_{e,f}\). We can now prove Theorem 4.3. Proof of Theorem 4.3.: Assume that \(F\subset\Delta^{2}\) is such that \(\operatorname{rk}(F^{c})<\infty\). Throughout the proof of the Theorem, whenever we say that a pattern is "essential," we mean that it is "essential for \(F\)". Let \(M\in\mathbb{N}\) so that each set of rank at least \(M\) intersects \(F\). We want to show that \(\{0,2,4\},\{1,3,5\}\) is essential. Let \(P\) be the singleton pattern \(\{0,1,2\}\). We prove the following statement by induction on \(m\): **Claim 4.8**.: _For each \(m\in\mathbb{N}\), if \(Q\) is any essential pattern (of size \(n\)) and \(k\leq n\), then the pattern_ \[\big{(}\text{Ch}_{1,2}^{m}(P)\big{)}_{m-1}^{m+1}\oplus Q_{k}\] _is essential._ Proof of Claim 4.8.: The base case of \(m=1\) is Corollary 4.7. Suppose that the Claim holds for some \(m\in\mathbb{N}\). We want to prove the Claim for \(m+1\). So fix an essential pattern \(Q\) of size \(n\) and \(k\leq n\). Fix also \(0\leq e<f\leq 1\); we want to show that there is a copy of \[\left(\mathrm{Ch}_{1,2}^{m+1}(P)\right)_{m}^{m+2}\oplus Q_{k}\] in \(F\cap D_{e,f}\). First we will construct by induction patterns \(K_{i}\) for \(i=1,2,\ldots,M\) as follows. Set \[K_{1}=\left(\mathrm{Ch}_{1,2}^{m}(P)\right)_{m}^{3m}\oplus Q_{n}\] and set \[K_{2}=\left(\mathrm{Ch}_{1,2}^{m}(P)_{m-1}^{m+1}\oplus\left(K_{1}\right)_{m+k} \right)_{2m+k}\oplus Q_{n}\] and continue on in this way, so that, precisely: \[K_{i}=\left(\mathrm{Ch}_{1,2}^{m}(P)_{m-1}^{m+1}\oplus\left(K_{i-1}\right)_{(i -1)(m+k)}\right)_{(i)(m)+(i-1)(k)}\oplus Q_{n}\] We have that \(K_{1}\) is essential because \(\mathrm{Ch}_{1,2}^{m}(P)\) and \(Q\) are essential. Given that \(K_{i-1}\) is essential, \[\mathrm{Ch}_{1,2}^{m}(P)_{m-1}^{m+1}\oplus\left(K_{i-1}\right)_{(i-1)(m+k)}\] is essential by an application of the induction hypothesis (that the Claim holds at \(m\)) and the fact that \(K_{i-1}\) is essential. Then: \[K_{i}=\left(\mathrm{Ch}_{1,2}^{m}(P)_{m-1}^{m+1}\oplus\left(K_{i-1}\right)_{(i -1)(m+k)}\right)_{(i)(m)+(i-1)(k)}\oplus Q_{n}\] is essential by the fact that \(Q\) is essential. So, in fact \(K_{M}\) is essential. Take a copy \(\sigma:O(K_{M})\to F\cap D_{e,f}\). For \(i=1,\ldots,M\), let \(\sigma^{(i)}:O(P^{\prime})\to F\cap D_{e,f}\) where \[P^{\prime}=\left(\mathrm{Ch}_{1,2}^{m}(P)\right)_{m}^{3m}\oplus Q_{n}\] and \(\sigma^{(i)}\) is the copy of \(P^{\prime}\) added to \(K_{i-1}\) to form \(K_{i}\) within \(\sigma\). To be precise, \[\sigma^{(1)}=\sigma\circ\Phi_{(0,(m-1)(M-1)),(m+k,(n+2)(M-1))}\] and for \(i=2,\ldots,M\) \[\sigma^{(i)}=\sigma\circ T_{(M-i)(m-1)}\circ\Phi_{(m-1,(i-1)(m+k)),(m+n+1,(i-1 )(2m+n-k))}\] For \(i=1,\ldots,M\), set \[a_{i} =f_{\sigma^{(i)}}(m+k-1)\] \[b_{i} =f_{\sigma^{(i)}}(m+k)\] \[c_{i} =f_{\sigma^{(i)}}(m+n)\] and set \(d=f_{\sigma^{(1)}}(m+n+1)\). 
One can check that the conditions of Lemma 4.6 are satisfied and so there exists \(1\leq i\leq M\) and \((x,y,z)\in F\) so that \(x,y\in(a_{i},b_{i})\) and \(z\in(c_{i},d)\). That is: \[x,y\in(f_{\sigma^{(i)}}(m+k-1),f_{\sigma^{(i)}}(m+k))\] and \[z\in(f_{\sigma^{(i)}}(m+n),d)\subseteq(f_{\sigma^{(i)}}(m+n),f_{\sigma^{(i)}}(m+ n+1))\] Now there is a copy \(\tau\) of the pattern \(\big{(}\mathrm{Ch}_{1,2}^{m+1}(P)\big{)}_{m}^{m+2}\oplus Q_{k}\) in \(F\cap D_{e,f}\) given by \[\tau(u)=\begin{cases}\sigma_{i}(v)&\text{ if }u=\Phi_{(m+k,2),(m+n+1,1)}(v)\\ (x,y,z)&\text{ if }u=(m+k,m+k+1,m+n+3)\end{cases}\] Claim 4.8 with \(Q\) taken to be the empty pattern obviously implies that for any \(m\), \(\mathrm{Ch}_{1,2}^{m}(P)\) is essential; so by Lemma 4.4, the pattern \(\{0,2,4\},\{1,3,5\}\) is essential.
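The bookkeeping in Section 4 (the maps \(\Phi\), the \(\oplus\) operation, and the chain construction) is mechanical enough to be worth writing down in code. The sketch below represents a pattern as a set of frozensets and rebuilds \(\mathrm{Ch}^{l}_{j,k}(P)\) for the singleton \(3\)-pattern \(\{0,1,2\}\) used in Theorem 4.3; the data representation and the example parameters are choices made here, not part of the paper.

```python
from itertools import chain as _chain

def phi(j1, l1, j2=None, l2=None):
    """The maps Phi_{(j1,l1)} and Phi_{(j1,l1),(j2,l2)} acting on natural numbers."""
    def f(x):
        if j2 is not None and x >= j2:
            return x + l1 + l2
        return x + l1 if x >= j1 else x
    return f

def apply_to_pattern(f, P):
    """Apply an injective map coordinate-wise to a pattern (a set of frozensets)."""
    return {frozenset(map(f, block)) for block in P}

def size(P):
    """A pattern of size N partitions [N]; recover N as the number of covered points."""
    return len(set(_chain.from_iterable(P)))

def oplus(P, j1, j2, Q, i):
    """The pattern P_{j1}^{j2} (+) Q_i from Section 4 (requires j1 < j2 <= size(P), i <= size(Q))."""
    m = size(Q)
    return (apply_to_pattern(phi(j1, i, j2, m - i), P)
            | apply_to_pattern(phi(0, j1, i, j2 - j1), Q))

def chain_pattern(P, j, k, l):
    """The l-chain Ch^l_{j,k}(P): Ch^1 = P and Ch^{l+1} = (Ch^l)_{lj}^{(l-1)j+k} (+) P_k."""
    C = set(P)
    for step in range(1, l):
        C = oplus(C, step * j, (step - 1) * j + k, P, k)
    return C

def is_pattern(P):
    """Blocks must be pairwise disjoint and cover [N] exactly."""
    covered = sorted(_chain.from_iterable(P))
    return covered == list(range(len(covered)))

P = {frozenset({0, 1, 2})}                 # the singleton 3-pattern used in Theorem 4.3
C3 = chain_pattern(P, j=1, k=2, l=3)       # Ch^3_{1,2}(P)
print(sorted(tuple(sorted(b)) for b in C3))
print(is_pattern(C3), size(C3) == 3 * 3)
```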
We develop a Ramsey-like theorem for subsets of the two- and three-dimensional simplex. A generalization of the combinatorial theorem presented here to all dimensions would produce a new proof that $\textrm{Homeo}_+[0,1]$ is extremely amenable (a theorem due to Pestov).
2309.11605
Thermodynamic driving forces in contact electrification between polymeric materials
Contact electrification, or contact charging, refers to the process of static charge accumulation after rubbing, or even simple touching, of two materials. Despite its relevance in static electricity, various natural phenomena, and numerous technologies, contact charging remains poorly understood. For insulating materials, even the species of charge carrier may be unknown, and the direction of charge-transfer lacks firm molecular-level explanation. We use all-atom molecular dynamics simulations to investigate whether thermodynamics can explain contact charging between insulating polymers. Building on prior work implicating water-ions (e.g., hydronium and hydroxide) as potential charge carriers, we predict preferred directions of charge-transfer between polymer surfaces according to the free energy of water-ions within water droplets on such surfaces. Broad agreement between our predictions and experimental triboelectric series indicate that thermodynamically driven ion-transfer likely influences contact charging of polymers. Importantly, simulation analyses reveal how specific interactions of water and water-ions proximate to the polymer-water interface explains observed trends. This study establishes relevance of thermodynamic driving forces in contact charging of insulators with new evidence informed by molecular-level interactions. These insights have direct implications for future mechanistic studies and applications of contact charging involving polymeric materials.
Hang Zhang, Sankaran Sundaresan, Michael A. Webb
2023-09-20T19:40:48
http://arxiv.org/abs/2309.11605v2
# Evidence of thermodynamic driving forces in the contact charging of insulating polymers ###### Abstract Contact electrification, or contact charging, refers to the process of static charge accumulation after rubbing, or even simple touching, of two materials. Despite its relevance in static electricity, various natural phenomena, and numerous technologies, contact charging remains poorly understood. For insulating materials, even the species of charge carrier may be unknown, and the direction of charge-transfer lacks firm molecular-level explanation. We use all-atom molecular dynamics simulations to investigate whether thermodynamics can explain contact charging between insulating polymers. Building on prior work implicating water-ions (e.g., hydronium and hydroxide) as potential charge carriers, we predict preferred directions of charge-transfer between polymer surfaces according to the free energy of water-ions within water droplets on such surfaces. Broad agreement between our predictions and experimental triboelectric series indicate that thermodynamically driven ion-transfer likely influences contact charging of polymers. Importantly, simulation analyses reveal how specific interactions of water and water-ions proximate to the polymer-water interface explains observed trends. This study establishes relevance of thermodynamic driving forces in contact charging of insulators with new evidence informed by molecular-level interactions. These insights have direct implications for future mechanistic studies and applications of contact charging involving polymeric materials. ## Introduction Contact electrification, or contact charging, is a widely observed phenomenon that results in static charges present on materials based on their touching [1, 2, 3, 4, 5, 6, 7]. In nature, such charging manifests in dust storms, which generate substantial charge via collisions of sand particles [8, 9], and in ash plumes of volcanic eruptions, which accumulate and release charge in the form of volcanic lightning [10]. In modern technology, contact charging enables xerographic printing [11, 12] and energy generation in wearable devices [13, 14]. Undesirable charging also underlies issues in several industrial applications [15, 16], such as wall-sheeting in reactors [17], disruption of particle mixing [18] and hazardous electrostatic discharge [19]. Despite this prevalence, precisely how and why contact charging occurs in many scenarios remains ambiguous. Therefore, understanding contact charging is of interest to advance fundamental science and to enhance technological processes [20, 21, 22]. The mechanism of contact charging strongly depends on the nature of the charge carriers, the materials, and the environment. Three modes of charging include electron transfer [23, 24, 6, 25] wherein surface work functions direct charge transfer, ion transfer [3, 26] wherein intrinsic or acquired mobile ions transfer between materials, and material transfer [27] wherein charged pieces of material physically move between surfaces. While electron transfer dominates charging of metals [2] and semicondutors with small band-gaps, the presence of insulating layers atop materials can obfuscate understanding predicated solely on work functions [7]. Moreover, contact charging of insulating materials themselves, such as polymers [28, 29], likely requires other charge-carrier species. 
One compelling hypothesis is that unequal transfer of cations and anions between materials results in sustained, asymmetric charge accumulation on surfaces [3]. This mode requires that materials must either natively possess or otherwise acquire mobile ions, raising questions as to what ions are present. Water-ions-hydronium (H\({}_{3}\)O\({}^{+}\)) and hydroxide (OH\({}^{-}\))-are viewed as potential charge-carriers underlying contact charging of insulating materials [3, 30]. Water is almost ubiquitously present, in real-world and experimental systems alike, having been detected across diverse chemical surfaces and a broad range of conditions [31, 32, 33, 34, 35, 36, 37]. Moosaic patterns of charge on polymer surfaces following contact have been attributed to the presence of water patches [38]. Effects of relative humidity on electrostatic charging also highlight a potential role of water and its ions [30, 37, 28, 39]. Furthermore, there are existing correlations between water-related properties and contact charging of polymers, such as acid/base dissociation constants [40], Lewis acidity or basicity of polymers [41], and zeta potentials of non-ionic polymers [42, 3]. While such work establishes a potential role of water and associated ions in many circumstances, why water-ions should concentrate on a certain material after contact with another is unclear. Various theoretical and conceptual frameworks have been constructed to explain water-ion transfer as a mechanism for contact charging of polymers. For example, a lattice Figure 1: Overview of hypothesis and systems. (A) Schematic depicting how the free energy of water-ions (H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\)) may vary between two polymer surfaces. Differences in free energy result in a thermodynamic driving force for preferential partitioning of ions between surfaces. (B) A thermodynamic framework to predict the direction of contact charging. The free-energy difference \(\Delta F_{AB}^{+-}\) determines whether a charge-separated pair is more stable in State II with free-energy \(F_{AB}^{+-}\) (H\({}_{3}\)O\({}^{+}\) near surface \(A\) and OH\({}^{-}\) near surface \(B\)) or State III with free energy \(F_{AB}^{-+}\) (OH\({}^{-}\) near surface \(A\) and H\({}_{3}\)O\({}^{+}\) near surface \(B\)). The free energies of each state can be formulated as \(\Delta F_{AB}^{+-}=F_{A}^{+}+F_{B}^{-}\) and \(\Delta F_{AB}^{-+}=F_{A}^{-}+F_{B}^{+}\) where each of \(F_{A}^{+}\), \(F_{A}^{-}\), \(F_{B}^{+}\), and \(F_{B}^{-}\) can be computed from molecular simulation of water droplets containing a water-ion atop isolated polymer slabs. (C) Summary of specific systems studied. The chemical structure of the constitutional repeat unit, internal reference name, and BigSMILES string of the six polymers studied are shown at the top. In addition to three amorphous slabs per polymer, additional crystalline slabs of N66, PE, and PVC are studied as well as (3) amorphous PVA slabs comprising isotactic chains; these are respectively denoted as N66\({}^{*}\), PE\({}^{*}\), PVC\({}^{*}\), and PVA\({}^{\dagger}\) (middle). For each polymer, simulations are run using water droplets comprised of \(N_{\rm w}=2000\), 1000, 500, 250, or 125 water molecules (bottom). This results in \((6\times 3+3+1\times 3)\times 5=120\) systems studied. (D) Triboelectric matrices generated from triboelectric series from the literature. From top to bottom, these are referenced as M1 (_3_), M2 (_43_), and M3 (_44_). 
By convention, for a given material pairing, surface \(A\) is indicated by the row-label and surface \(B\) by the column-label. Results are color-coded such that ‘blue’ indicates positive charge on \(A\) and negative on \(B\), while ‘red’ indicates negative charge on \(A\) and positive on \(B\). Pairings not found in the reference triboelectric series are indicated by a cross ‘\(\times\)’. Molecular renderings in panel B are produced using OVITO [45]. The elements are colored such that carbon is gray, fluorine is blue, chlorine is green, oxygen is red, and hydrogen is white. The color-coding associated with polymer names in panels C and D is used throughout the text. For example, a lattice model introduced by Grosjean et al. [46] quantitatively accounts for mesoscale spatial correlations that might explain contact charging between polymer surfaces of the same chemistry. Jaeger and coworkers examined the role of surface hydrophilicity on charging, finding consistency with models premised on OH\({}^{-}\) diffusion between adsorbed water patches with asymmetric coverage on the contacting surfaces [47, 33]. Nevertheless, these models generally lack nanoscopic attributions to specific molecular-level underpinnings. Although molecular simulation techniques, such as density-functional theory and _ab initio_ molecular dynamics, have been deployed to unravel complex nanoscale phenomena of contact charging in systems comprised of crystalline minerals, MXenes, oligomers, and water [48, 49, 26, 50, 51], studies involving polymers are nascent. In this study, we employ molecular dynamics (MD) simulations to investigate whether thermodynamic driving forces for water-ion transfer can feasibly impact contact charging of insulating polymers. We hypothesize that polymer surfaces present distinct nanoenvironments for water molecules and water-ions that result in chemical-potential differences, which govern asymmetric transfer of ions between surfaces upon contact. To test this hypothesis, we utilize thermodynamic integration [52] to extract relative free energies of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) on polymers of varying hydrophilicity [53]. These free energies, which are sensitive to polymer chemistry and underlying molecular interactions, provide a basis to predict the direction of ion-transfer between polymer surfaces. Such predictions enable construction of a triboelectric series based entirely on thermodynamic driving forces, which intriguingly shows good agreement with experimental triboelectric series. Further simulations that directly probe ion partitioning between two surfaces illustrate similar trends. This consistency establishes the viability of thermodynamically driven water-ion transfer in contact charging of polymers. Furthermore, the methodology highlights molecular-level nuances that may hold other implications for contact charging and general understanding of water-polymer interfacial interactions. ## Results and Discussion ### Hypothesis of thermodynamically driven water-ion transfer The possibility of contact charging as a process driven by the relative ion-surface affinities has been considered since at least the 1950s [54], although molecular evidence is scarce. Here, we consider whether the free energies of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) within droplets on different polymer surfaces (Fig. 1a) are predictive of contact charging (Fig. 1b). 
The posited mechanism of charging is that _(i)_ water droplets on surfaces contain H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) with chemical potentials that depend principally on surface chemistry but also other factors (e.g., preexisting ion concentration, humidity, electric fields, etc.), _(ii)_ water-ions can diffuse between surfaces when they are sufficiently close, and _(iii)_ the relative abundance of water-ions on two surfaces following diffusion events is biased by the relative chemical potentials. Fig. 1b illustrates contrasting scenarios of water droplets present on surfaces, \(A\) (blue) and \(B\) (red), that guide our calculations. In reference State I, droplets are neutral on both surfaces. In State II, contact yields a charge-separated pair where H\({}_{3}\)O\({}^{+}\) resides on \(A\) and OH\({}^{-}\) resides on \(B\); the free energy of State II relative to State I is \(F_{AB}^{+-}\). In State III, contact yields a charge-separated pair, which is the reverse of State II; the free energy of State III relative to State I is \(F_{AB}^{-+}\). These free energies are obtained as \(F_{AB}^{+-}=F_{A}^{+}+F_{B}^{-}\) and \(F_{AB}^{-+}=F_{A}^{-}+F_{B}^{+}\) where \(F_{S}^{\alpha}\) indicates the free energy of adding an ion of type \(\alpha\in\{+,-\}\) to surface \(S\in\{A,B\}\) (Fig. 1b, bottom). The difference \(\Delta F_{AB}^{+-}\equiv F_{AB}^{+-}-F_{AB}^{-+}\) reflects a thermodynamic driving force for contact charging. In particular, \(\Delta F_{AB}^{+-}<0\) indicates greater likelihood for surface \(A\) to become positively charged and surface \(B\) negative compared to the opposite, while \(\Delta F_{AB}^{+-}>0\) indicates greater likelihood for surface \(A\) to become negatively charged and surface \(B\) positive. Consequently, we suppose \(\Delta F_{AB}^{+-}\) predicts the direction of charge-transfer between contacting surfaces if the charge-carrier species are H\({}_{3}\)O\({}^{+}\) and/or OH\({}^{-}\) and populations are thermodynamically controlled. Note that \(\Delta F_{AB}^{+-}=(F_{A}^{+}+F_{B}^{-})-(F_{A}^{-}+F_{B}^{+})\) relates to the exchange \(A^{-}+B^{+}\to A^{+}+B^{-}\), but also, \(\Delta F_{AB}^{+-}=(F_{A}^{+}-F_{B}^{+})-(F_{A}^{-}-F_{B}^{-})\) reflects a difference in relative partitioning between surfaces of the ions. As such, contact charging can arise even if both ions favor the same surface given disparity in transfer free energies. To test this hypothesis, we consider six commodity polymers (Fig. 1c): polytetrafluoroethylene (PTFE), polyethylene (PE), polyvinyl chloride (PVC), poly(methyl methacrylate) (PMMA), Nylon 66 (N66), and polyvinyl alcohol (PVA). These polymers are relevant to prior contact charging experiments [55, 28, 44, 56, 43, 33, 57], and our recent work illustrates distinct wetting behavior arising from chemically and morphologically specific water-polymer surface interactions [53]. As in Ref. [53], we consider amorphous surfaces (for all six polymers), crystalline surfaces (denoted N66\({}^{*}\), PE\({}^{*}\), and PVC\({}^{*}\)), and surfaces featuring different tacticity (PVA\({}^{\dagger}\) denoting isotactic PVA); calculations are performed for various droplet sizes. The combination of surface chemistry, morphology, and droplet sizes is expected to yield many distinct nanoenvironments that influence water-ion free energies. Ultimately, \(\Delta F_{AB}^{+-}\) is computed for all pairwise combinations to predict thermodynamic preference for water-ion transfer (see Materials and Methods). 
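To make the bookkeeping of this framework explicit, the following is a minimal sketch (in Python, with purely illustrative free-energy values rather than the simulation results) of how \(\Delta F_{AB}^{+-}\) is assembled from the per-surface quantities \(F_{S}^{+}\) and \(F_{S}^{-}\) and converted into a predicted direction of charging for each pair; all variable and function names are our own.

```python
import itertools

# Hypothetical per-surface ion free energies F_S^+ and F_S^- (kcal/mol); in the
# study these come from thermodynamic integration, not from this dictionary.
F_plus = {"PTFE": -80.0, "PE": -80.5, "PVC": -81.0, "PMMA": -85.0, "N66": -86.0, "PVA": -83.0}
F_minus = {"PTFE": -95.0, "PE": -95.2, "PVC": -97.0, "PMMA": -94.0, "N66": -98.0, "PVA": -99.0}

def delta_F(A, B):
    """Delta F_AB^{+-} = (F_A^+ + F_B^-) - (F_A^- + F_B^+), in kcal/mol."""
    return (F_plus[A] + F_minus[B]) - (F_minus[A] + F_plus[B])

for A, B in itertools.combinations(F_plus, 2):
    dF = delta_F(A, B)
    if dF < 0:
        prediction = f"{A} charges (+), {B} charges (-)"
    elif dF > 0:
        prediction = f"{A} charges (-), {B} charges (+)"
    else:
        prediction = "no thermodynamic preference"
    print(f"{A}-{B}: dF = {dF:+6.1f} kcal/mol -> {prediction}")
```

Collecting the signs of \(\Delta F_{AB}^{+-}\) over all pairs is what produces the matrix representation discussed below.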
The hypothesis is evaluated by comparison to triboelectric series, which organize materials according to their relative propensity to acquire charges during contact charging [3]. Conventionally, triboelectric series are represented in a one-dimensional progression based on relative propensity to acquire positive/negative charge, although results do not always neatly and consistently organize in this manner. Here, we represent the outcomes of contact-charging experiments between pairwise combinations as a "triboelectric matrix." Fig. 1d illustrates three triboelectric matrices converted from previously reported triboelectric series that feature the polymers in this study; these are labeled 'M1' [3], 'M2' [43], and 'M3' [44]. The matrix elements are color-coded to indicate the result of a contact-charging experiment between two materials; as convention, we assign \(A\) as row and \(B\) as column. In particular, 'blue' indicates that surface \(A\) becomes relatively more positive than surface \(B\), and 'red' indicates the opposite. For example, all three series indicate that contacting N66 with PVC results in N66 accumulating positive charge and PVC negative. These three matrices provide relatively consistent expectations, although there are select pairs that differ (i.e., N66-PMMA, PVC-PTFE). Less complete triboelectric matrices can be formulated from other triboelectric series and display overall similar trends (see SI Appendix Fig. S1). ### Consistency of free-energy trends and contact charging Fig. 2a depicts a triboelectric matrix derived from \(\Delta F_{AB}^{+-}\) values obtained from MD simulations. To first order, the matrix is organized by material (\(6\times 6\) matrix), and results are further resolved for each \(A\)-\(B\) pair into a \(5\times 5\) sub-matrix based on water-droplet size; color intensity reflects the magnitude of thermodynamic driving force. Compared to Fig. 1d, the simulation results broadly align with the direction of charging observed in M1, M2, and M3. In comparison to M1, simulation predictions agree with nine of fifteen material combinations, while three pairs yield inconclusive results or depend on droplet size, and three pairs exhibit opposite trends. However, when compared to M2 and M3 (which lack data for PVA), the agreement improves, as simulations predict PVC acquires negative charge over PTFE (as in M2) and N66 acquires negative charge over PMMA (as in M3). Thus, the thermodynamically informed predictions capture general trends in contact charging between polymers of different chemistry. The few disparities between simulation predictions and empirical charging results arise in material pairings that also demonstrate experimental variability. For PVC-PTFE, M1 and M3 (and other series, see SI Appendix Fig. S1) suggest that PTFE exhibits a strong tendency to acquire negative charge. However, our previous study on polymer hydrophobicity [53] indicates that water structuring and dynamics are relatively more similar between PTFE and PE than with PVC. These prior observations align with our current free-energy results, which show a vanishing \(\Delta F_{AB}^{+-}\) for PE-PTFE and consistent behavior between PE-PVC and PTFE-PVC, and with the experimental outcome reported in M2. Therefore, results involving PTFE may be sensitive to experimental conditions, potentially related to mechanisms not captured by simulations, such as bond breaking [26], or minor inaccuracies in molecular models. 
For N66-PMMA, M1 and M3 differ, with the latter aligning with the thermodynamic predictions. Lastly, several inconsistent or inconclusive combinations involve PVA; the aqueous solubility of PVA poses an experimental challenge and is also a notable factor in our previous study [53]. Considering the substantial agreement for many material pairings and the technical challenges encountered with others, we conclude that thermodynamically driven water-ion transfer can plausibly influence polymer-polymer contact charging. ### Role of water-surface interactions Analysis of the polymer-water interface provides nanoscale insights into the trends of water-ion free energies. Fig. 3a compares how water, H\({}_{3}\)O\({}^{+}\), and OH\({}^{-}\) distribute in the vicinity of chemically distinct, amorphous polymer surfaces. Figure 2: Results of free-energy calculations for amorphous polymers. (A) The thermodynamic driving force for water-ion transfer between surfaces \(A\) and \(B\) presented as a triboelectric matrix. The matrix is resolved \(6\times 6\) by material; each pair of materials is further resolved \(5\times 5\) accounting for differing droplet sizes. Droplet sizes (\(N_{\rm w}=\) 125, 250, 500, 1000, and 2000) increase left-to-right and top-to-bottom. An approximate linear triboelectric series generated from the matrix simulation is shown for reference below the matrix results. (B) Results of thermodynamic integration calculations to extract \(F_{S}^{\alpha}\), the free energy of adding an ion of species \(\alpha\) to surface \(S\). Results for adding H\({}_{3}\)O\({}^{+}\) are shown at the left and those for OH\({}^{-}\) at the right. Error bars reflect statistical uncertainties reported as the standard error of the mean calculated from independent thermodynamic integration trajectories. Relative to OH\({}^{-}\), H\({}_{3}\)O\({}^{+}\) tends to reside closer to the polymer-water interface, orienting its oxygen atom to maximize hydrogen-bond donation to water (SI Appendix, Fig. S2). Surfaces lacking hydrogen bonds, such as PE, PTFE, and PVC, allow easy access for H\({}_{3}\)O\({}^{+}\) to the interfacial layers, explaining the similar free energy values (\(F_{S}^{+}\)) observed in Fig. 2b. However, H\({}_{3}\)O\({}^{+}\) is relatively more stable (lower \(F_{S}^{+}\)) in proximity to hydrogen-bonding polymers (PMMA, N66, and PVA). The stronger interfacial interactions with PMMA, N66, and PVA also explain the apparent insensitivity of \(F_{S}^{+}\) to droplet size (Fig. 2b), as the preferred nanoenvironment of H\({}_{3}\)O\({}^{+}\) remains relatively consistent as droplet size increases. Notably, H\({}_{3}\)O\({}^{+}\) is predominantly excluded from the interfacial layer of PVA, the most hydrophilic polymer, aligning with its higher \(F_{S}^{+}\) compared to PMMA and N66. Figure 3: Structural analysis of free-energy trends for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) across polymers. (A) Comparison of spatial distribution of water molecules, H\({}_{3}\)O\({}^{+}\), and OH\({}^{-}\) in proximity to the polymer-water interface. (B) Simulation snapshots comparing OH\({}^{-}\) interactions in proximity to PMMA and PVC surfaces. Hydrogen-bonding interactions are indicated by light-blue lines. Some surrounding molecules are omitted for clarity. (C) Comparison of free energies for ion addition based on morphological changes to polymer slabs. 
Comparisons are made between amorphous-to-crystalline (denoted ‘\({}^{*}\)’) PE, PVC, and N66 and amorphous atactic-to-isotactic (denoted ‘\({}^{\dagger}\)’) PVA. Results on the left are for surfaces with \(N_{\mathrm{w}}\) = 2000 water molecules with bars grouped by material, with data for H\({}_{3}\)O\({}^{+}\) to the left of that for OH\({}^{-}\). Results on the right are for all droplet sizes; results between PE and PE\({}^{*}\) are statistically indistinguishable and not shown for clarity. Error bars reflect statistical uncertainties reported as the standard error of the mean calculated from independent thermodynamic integration trajectories. This highlights an intriguing interplay between ion-polymer interactions and competing water interactions, such that ion chemical potential is not a monotonic function of hydrophilicity. Although OH\({}^{-}\) predominantly situates in secondary interfacial layers or the bulk of water droplets, its trends also correlate with hydrophobicity and hydrogen-bonding behavior. The nearly equivalent \(F_{S}^{-}\) between PE and PTFE reflects consistency in OH\({}^{-}\) distribution, which derives from their similarity in hydrophobicity and contact angles [53]. Free-energy trends among N66, PVA, and PMMA align with hydrogen-bonding behavior. While N66 and PVA offer stabilizing interactions that lower \(F_{S}^{-}\), PMMA only functions as a hydrogen-bond acceptor, effectively excluding OH\({}^{-}\) from the interfacial layer of water, resulting in higher \(F_{S}^{-}\) [53]. In contrast to PMMA, water in proximity to PVC orients its oxygen atoms towards the surface because of the strong attraction of chlorine atoms [53], which allows water molecules to readily form hydrogen bonds with OH\({}^{-}\) in the second water layer (Fig. 3b). Thus, distinct nanoenvironments for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) arise from the hydrophobicity and hydrogen-bonding behavior of the polymer surfaces, largely explaining trends in \(F_{S}^{+}\) and \(F_{S}^{-}\). To further explore the sensitivity of \(F_{S}^{+}\) and \(F_{S}^{-}\) to interfacial interactions, we assess the role of nanoscale polymer surface morphology, which can influence hydrophobicity and hydrogen-bonding behaviors. Fig. 3c shows the difference in \(F_{S}^{+}\) and \(F_{S}^{-}\) between amorphous and crystalline surfaces (for PE, PVC, and N66) and between atactic and isotactic amorphous surfaces (for PVA). Overall, the simulations capture some sensitivity of \(F_{S}^{+}\) and \(F_{S}^{-}\) to surface morphology, but the extent depends on polymer chemistry. The transition from PE to PE\({}^{*}\) has no notable effect, as water structuring near PE\({}^{*}\) remains similar to that of PE, resulting in nearly equivalent nanoenvironments for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) and correspondingly indistinguishable free energies. However, for PVA, PVC, and N66, \(F_{S}^{+}\) or \(F_{S}^{-}\) can shift on scales relevant for charging predictions in Fig. 2a. Increased intra-chain hydrogen bonding and reduced hydrogen bonding with water for PVA\({}^{\dagger}\) [53] permit more favorable water-structuring around OH\({}^{-}\), thereby increasing its stability. In N66\({}^{*}\), the crystalline structure similarly reduces hydrogen bonding with water and results in a more hydrophobic surface, creating a less favorable nanoenvironment for H\({}_{3}\)O\({}^{+}\) within the interfacial layer. In PVC\({}^{*}\), enhanced chain interactions diminish interfacial water structuring, subsequently weakening interactions with OH\({}^{-}\) in secondary water layers. 
These findings underscore the importance of polymer-water interactions in water-ion free energies and indicate how surface heterogeneities and semicrystallinity may subtly influence water-ion transfer and contact charging. ### Connections to other charging phenomena Although thermodynamic driving forces for ion transfer are most significant when considering different surfaces, Fig. 2b shows that the free energy of water-ions is also influenced by droplet size, and Fig. 3c illustrates sensitivity to surface heterogeneities. The former effect is evident in the internal color variation within the diagonal material squares in Fig. 2a. Notably, for more hydrophilic polymers (PMMA, N66, and PVA), the thermodynamic driving forces are comparable to those for chemically distinct surfaces (off-diagonal squares of Fig. 2a); Fig. 3c also conveys non-trivial differences that exceed 5 kcal/mol. These findings may have implications for contact charging of chemically identical materials [58]. If water exists on polymer surfaces as droplets of varying sizes [37] or the surfaces vary in crystallinity/patterning, these results suggest that those variabilities could create additional thermodynamic driving forces for ion redistribution and subsequent contact charging. Considering that relative humidity likely influences the distribution of droplet sizes on a surface, resulting differences in water-ion chemical potentials might account for certain humidity effects on contact charging. It is notable that the free energy of H\({}_{3}\)O\({}^{+}\) appears less sensitive to droplet size compared to OH\({}^{-}\), particularly for hydrophilic polymers. Although the present work does not thoroughly analyze the implications of droplet or surface heterogeneities in the context of the aforementioned phenomena, such factors could be considered in future work. ### Validation by two-surface simulations In the preceding analysis, calculating \(\Delta F_{AB}^{+-}\) involved simulating a water droplet containing a single ion above isolated polymer surfaces. As a more stringent test of these predictions, we conduct simulations with both H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) present between distinct polymer surfaces and assess preferential partitioning. Fig. 4a illustrates the simulation setup wherein a water bridge (\(N_{\rm w}=4000\)) containing a H\({}_{3}\)O\({}^{+}\)/OH\({}^{-}\) pair forms between surfaces \(A\) (top) and \(B\) (bottom) separated by distance \(d\). The propensity for surfaces to acquire specific charges is measured via the free energy \(F_{AB}(p_{z})\), where the collective variable \(p_{z}=z_{\text{H}_{3}\text{O}^{+}}-z_{\text{OH}^{-}}\) is the dipole of the ionic pair in the \(z\)-direction. As a collective variable, \(p_{z}\) reports the relative positioning of water ions with respect to the two surfaces: more positive \(p_{z}\) indicates H\({}_{3}\)O\({}^{+}\) is closer to surface \(A\) and OH\({}^{-}\) is closer to \(B\), more negative \(p_{z}\) indicates the opposite, and small \(p_{z}\) suggests little to no asymmetric affinity. Similar to \(\Delta F_{AB}^{+-}\), we examine the change in free energy when the dipole is flipped: \(\Delta F_{AB}(p_{z})\equiv F_{AB}(p_{z})-F_{AB}(-p_{z})=-\frac{1}{\beta}\ln K_{AB}^{+-}(p_{z})\), where \(K_{AB}^{+-}\) represents a pseudo-equilibrium constant for the exchange process \(A^{-}+B^{+}\rightleftharpoons A^{+}+B^{-}\). Expected scenarios for \(K_{AB}^{+-}(p_{z})\) are depicted in Fig. 4b. 
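As a small illustration of this relation, the sketch below (our own code, operating on a synthetic free-energy profile rather than the umbrella-sampling results) converts a profile \(F_{AB}(p_{z})\) into the exchange constant via \(K_{AB}^{+-}(p_{z})=\exp[-\beta\,\Delta F_{AB}(p_{z})]\) by antisymmetrizing about \(p_{z}=0\).

```python
import numpy as np

kB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)
T = 300.0          # K
beta = 1.0 / (kB * T)

# Synthetic grid of the ionic dipole p_z (Angstrom) and a made-up free-energy
# profile F_AB(p_z) in kcal/mol; in practice the profile comes from umbrella
# sampling + WHAM as described in Materials and Methods.
p_z = np.linspace(-35.0, 35.0, 71)
F = 0.05 * p_z + 0.001 * p_z**2   # illustrative: the linear tilt favors negative p_z

def exchange_constant(p_z, F):
    """K_AB^{+-}(p_z) = exp(-beta * [F_AB(p_z) - F_AB(-p_z)]) on a symmetric grid."""
    F_reversed = np.interp(-p_z, p_z, F)  # F_AB(-p_z)
    dF = F - F_reversed
    return np.exp(-beta * dF)

K = exchange_constant(p_z, F)
# For this profile K < 1 at positive p_z, i.e., the flipped arrangement
# (OH- near surface A, H3O+ near surface B) is thermodynamically favored.
print(K[p_z > 0][:3])
```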
For example, if \(K_{AB}(p_{z})>1\), H\({}_{3}\)O\({}^{+}\) should preferentially partition towards \(A\), with the expectation that \(A\) becomes relatively positive and \(B\) negative. The free energy \(F_{AB}(p_{z})\) is computed using umbrella sampling and the weighted histogram analysis method [59]; further details regarding the calculation and formulation of \(K_{AB}(p_{z})\) are in 'Materials and Methods.' Results of the two-surface simulations align well with the expectations from \(\Delta F_{AB}^{+-}\) (Fig. 2a) and the structural analysis (Fig. 3). Fig. 4b displays \(K_{AB}(p_{z})\) for different pairs of materials, with row labels corresponding to surface \(A\) and column labels corresponding to surface \(B\). For PE-PTFE, \(K_{AB}(p_{z})\sim 1\), which is consistent with prior discussion on the similarity of water/ion nanoenvironments. In PVA-PTFE and PVA-PE, for which results from single-surface calculations (Fig. 2b) were mixed and dependent on droplet size, \(K_{AB}(p_{z})<1\), indicating that OH\({}^{-}\) prefers PVA over the more hydrophobic PTFE and PE. This preference arises mainly from the recruitment of water towards the more hydrophilic surface (SI Appendix, Fig. S4) rather than surface-specific interactions. The remaining pairs yield \(K_{AB}(p_{z})>1\), indicating enhanced thermodynamic stability of H\({}_{3}\)O\({}^{+}\) closer to surface \(A\) (row) and for OH\({}^{-}\) to be closer to \(B\) (column) than the reverse situation. Thus, the two-surface simulations provide valuable validation for the overall thermodynamic framework and offer more direct support of thermodynamically driven water-ion transfer as a mechanism of contact charging. ## Conclusions Molecular dynamics simulations were used to investigate thermodynamically driven water-ion transfer as a mechanism of contact charging between insulating polymers. The ubiquity of water, correlations with hydrophobicity, and importance of humidity inform a specific hypothesis: distinct nanoenvironments for water proximate to polymer surfaces generate chemical-potential gradients that govern asymmetric transfer of water-ions upon contact (Fig. 1a,b). To investigate this hypothesis, we calculated free energies of water-ions in water droplets on chemically and structurally distinct polymer surfaces; these were subsequently used to predict the thermodynamically preferred direction of contact charging between various commodity polymers (Figs. 2a and 3c). Remarkably, these predictions align well with many results of experimental triboelectric series (Fig. 1d), and subsequent simulations that directly examine partitioning of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) between two surfaces offer further support (Fig. 4). The molecular resolution afforded by the simulations importantly reveals key interactions and properties, such as surface hydrophobicity and hydrogen-bonding capabilities, that underlie relative affinities of ions to specific surfaces (Figs. 3a,b). While other contact-charging mechanisms should not be disregarded, these results emphasize the plausibility of thermodynamic driving forces with well-defined molecular underpinnings in contact charging between insulating materials, such as polymers. The findings offer valuable insights into the complex phenomenon of contact electrification and highlight opportunities to explore further implications across scientific and technological domains. 
Coupling molecular simulation with free-energy calculations can be extended to explore other aspects of contact charging, including the role of humidity [36, 39, 60, 26], temperature [33], electric field [36], and local geometry [57, 39]. Additionally, there are potential implications for contact charging between chemically identical materials, particularly regarding variations in free-energy due to differences in droplet sizes and surface morphology, though further investigation is required to ascertain their precise relevance. Moreover, future study could explore kinetic factors like asymmetric ion diffusion [47] and their interplay with thermodynamic considerations, such as ion distribution within a droplet or free-energy barriers formed during material contact. Lastly, molecular simulations of the kind used here can provide chemically specific parameters for macroscopic models of contact charging, enabling quantitative comparisons with experiments and enhanced understanding. Figure 4: Explicit partitioning of a H\({}_{3}\)O\({}^{+}\)/OH\({}^{-}\) pair between two polymer surfaces. (A) Simulation snapshot illustrating the general system setup for calculations. A water bridge of \(N_{\rm w}=4000\) water molecules forms between two polymer slabs positioned a distance \(d\) apart, allowing for diffusion of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) between two surfaces, \(A\) and \(B\). The relative positioning of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) with respect to the polymer surfaces can be monitored using \(p_{z}\), the ionic dipole in the \(z\)-direction. The specific system shown corresponds to PMMA (top) and PVC (bottom) positioned at \(d=25\) Å. (B) Interpretation of the exchange constant \(K_{AB}^{+-}(p_{z})\). If \(K_{AB}^{+-}(p_{z})>1\), H\({}_{3}\)O\({}^{+}\) exhibits more preference for \(A\) than OH\({}^{-}\) (\(A^{+}B^{-}\)); if \(K_{AB}^{+-}(p_{z})<1\), H\({}_{3}\)O\({}^{+}\) exhibits more preference for \(B\) than OH\({}^{-}\) (\(A^{-}B^{+}\)); and if \(K_{AB}^{+-}(p_{z})\sim 1\), there is no clear preference (\(A^{\circ}B^{\circ}\)). (C) Results of \(K_{AB}^{+-}(p_{z})\) for different pairs of materials. Results are for \(d=25\) Å except for pairs annotated with ‘**,’ which use \(d=40\) Å to better characterize thermodynamic preference (see SI Appendix, Fig. S3). The shaded regions reflect statistical uncertainties reported as standard error of the mean calculated from bootstrap resampling. ## Materials and Methods ### Molecular Dynamics Simulations All MD simulations were conducted using the LAMMPS simulation package (version 3, Mar 2020) [61]. Polymers were described by parameters from the all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) force field [62, 63], while water was described using the extended simple point charge model (SPC/E) [64, 65]. The water ions were modeled using a non-polarizable force field designed to be used in conjunction with the SPC/E water model and parameterized to reproduce experimental solvation free energies and activities of H\({}_{3}\)O\({}^{+}\)-Cl\({}^{-}\) and Na\({}^{+}\)-OH\({}^{-}\) salt solutions [66]. Preparation of polymer-water systems followed the methodology of our previous work [53], with the addition of either H\({}_{3}\)O\({}^{+}\) or OH\({}^{-}\) at the center-of-mass of the water droplet as required. 
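As a minimal illustration of the ion-placement step mentioned above (not the authors' actual preparation scripts), the following computes the center of mass of a water droplet from its coordinates and reports where a single H\({}_{3}\)O\({}^{+}\) or OH\({}^{-}\) would be inserted; the coordinates here are randomly generated stand-ins for an equilibrated droplet.

```python
import numpy as np

def center_of_mass(coords, masses):
    """Mass-weighted center of mass of an (N, 3) coordinate array."""
    coords = np.asarray(coords, dtype=float)
    masses = np.asarray(masses, dtype=float)
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

# Toy droplet: random positions (Angstrom) standing in for water molecules of an
# equilibrated droplet; a real system would be read from the simulation data file.
rng = np.random.default_rng(0)
water_positions = rng.normal(loc=[0.0, 0.0, 20.0], scale=5.0, size=(500, 3))
masses = np.full(len(water_positions), 18.015)  # treat each water as a point mass

ion_position = center_of_mass(water_positions, masses)
print("Place H3O+ or OH- at:", np.round(ion_position, 2))
```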
Simulation cells were periodic in the \(x\) and \(y\) directions but non-periodic in \(z\); Ewald summation was accomplished via the approach of Yeh and Berkowitz [67] with extension to non-neutral systems by Ballenegger et al. [68]. After initial preparation, systems were simulated for 20 ns to generate initial configurations. Subsequently, trajectories of 40 ns were run to analyze the distribution of ions and water near polymer interfaces. More detailed information regarding simulation procedures and calculations is provided in the SI. ### Single-surface Free-energy Calculations The free energy associated with adding an ion of type \(\alpha\) to a water droplet on surface \(S\), \(F_{S}^{\alpha}\), was calculated using thermodynamic integration (TI). TI was practically implemented using 12-point Gauss-Legendre quadrature for each ion, following the approach of Ref. [53], which calculates the excess chemical potential of water. Simulations at each quadrature node started from the final configuration of the 20-ns equilibration trajectory. Each simulation was run for 6 ns, of which the last 5 ns were used to estimate ensemble averages. ### Two-surface Free-energy Calculations The free energy as a function of ionic dipole within a water bridge between surfaces \(A\) and \(B\), \(F_{AB}(p_{z})\), was calculated using umbrella sampling with statistical re-weighting via the weighted histogram analysis method [59]. Two-surface systems were prepared by combining two previously equilibrated polymer-water systems, mirroring the coordinates of one system across the \(xy\)-plane and shifting it vertically by a specified distance \(d\), which was set as the average distance between polymer interfaces. Data were collected across 36 windows that each employed a harmonic biasing potential on \(p_{z}\). The biasing potentials utilized spring constants of 47.8011 kcal/mol and equilibrium positions at \(-35\) to 35 Å in 2 Å increments. To prevent pairing of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) at small \(p_{z}\), the force-field interaction between oxygen atoms on H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) was adjusted to ensure that the two ions would not bind (SI Appendix, Fig. S5). This modification focused analysis on ion affinity for surfaces without conflation from ionic attraction, which was not the primary focus here, and is also outside the realm of application of the force field, which does not describe recombination into neutral water species. Consequently, \(F_{AB}(p_{z})\) is conditional on the ions remaining separate species. For all calculations, simulations were first run for 10 ns to equilibrate the surface-water-surface geometry. Biasing potentials were subsequently imposed for each window, and trajectories were run for 15 ns. Trajectories for windows with \(|p_{z}|<10\) Å were extended for an additional 7.5 ns to enhance convergence. Initially, calculations were performed at \(d=25\) Å for all surfaces. However, for some pairings (N66-PE, N66-PTFE, N66-PVC, PE-PTFE, PMMA-PTFE, and PVC-PTFE), the resulting \(F_{AB}(p_{z})\) was relatively flat because ions could readily interact with both surfaces. For these surfaces, additional calculations were conducted at \(d=40\) Å to better distinguish surface affinities between H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\); calculations at greater separations yielded similar results (see SI Appendix, Fig. S3). ## References * [1] P. Shaw, _Proceedings of the Royal Society of London. 
Series A, Containing Papers of a Mathematical and Physical Character_**94**, 16 (1917). * [2] J. Lowell, A. Rose-Innes, _Advances in Physics_**29**, 947 (1980). * [3] L. McCarty, G. Whitesides, _Angewandte Chemie International Edition_**47**, 2188 (2008). * [4] P. Iversen, D. J. Lacks, _Journal of Electrostatics_**70**, 309 (2012). * [5] F. Galembeck, _et al._, _RSC Adv._**4**, 64280 (2014). * [6] Z. L. Wang, A. C. Wang, _Materials Today_**30**, 34 (2019). * [7] D. J. Lacks, T. Shinbrot, _Nature Reviews Chemistry_**3**, 465 (2019). * [8] A. K. Kamra, _Nature_**240**, 143 (1972). * [9] N. O. Renno, J. F. Kok, _Space Science Reviews_**137**, 419 (2008). * [10] C. Cimarelli, K. Genareau, _Journal of Volcanology and Geothermal Research_**422**, 107449 (2022). * [11] D. M. Pai, B. E. Springett, _Reviews of Modern Physics_**65**, 163 (1993). * [12] J. H. Anderson, _Journal of Imaging Science and Technology_**44**, 534 (2000). * [13] Y. Lu, _et al._, _Nature Communications_**13**, 1401 (2022). * [14] X. Cao, _et al._, _Nano-Micro Letters_**15**, 14 (2022). * [15] S. Matsusaka, H. Maruyama, T. Matsuyama, M. Ghadiri, _Chemical Engineering Science_**65**, 5781 (2010). * [16] X. Liu, S. Sundaresan, _Powder Technology_**401**, 117272 (2022). * [17] D. Song, P. Mehrani, _Powder Technology_**316**, 166 (2017). * [18] Y. Cheng, L. Lee, W. Zhang, C.-H. Wang, _Industrial & Engineering Chemistry Research_**53**, 14166 (2014). * [19] M. Nifuku, H. Katoh, _Powder Technology_**135-136**, 234 (2003). * [20] S.-H. Shin, _et al._, _ACS Nano_**9**, 4621 (2015). * [21] Z. Liu, _et al._, _Nano Letters_**22**, 4074 (2022). * [22] X. Tao, _et al._, _Small Methods_**7**, 2201593 (2023). * [23] J. H. Clint, T. S. Dunstan, _Europhysics Letters (EPL)_**54**, 320 (2001). * [24] C. Liu, A. J. Bard, _Nature Materials_**7**, 505 (2008). * [25] Y. Nan, J. Shao, M. Willatzen, Z. L. Wang, _Research_**2022**, 9861463 (2022). * [26] P. S. Gil, D. J. Lacks, _Physical Chemistry Chemical Physics_**21**, 13821 (2019). * [27] R. K. Pandey, H. Kakehashi, H. Nakanishi, S. Soh, _The Journal of Physical Chemistry C_**122**, 16154 (2018). * [28] S. Hersh, D. Montgomery, _Textile Research Journal_**25**, 279 (1955). * [29] R. Elsdon, F. R. G. Mitchell, _Journal of Physics D: Applied Physics_**9**, 1445 (1976). * [30] R. F. Gouveia, F. Galembeck, _Journal of the American Chemical Society_**131**, 11381 (2009). * [31] A. L. Sumner, _et al._, _Physical Chemistry Chemical Physics_**6**, 604 (2004). * [32] Y. Awakuni, J. H. Calderwood, _Journal of Physics D: Applied Physics_**5**, 1038 (1972). * [33] I. A. Harris, M. X. Lim, H. M. Jaeger, _Physical Review Materials_**3**, 085603 (2019). * [34] M. L. Gee, T. W. Healy, L. R. White, _Journal of Colloid And Interface Science_**140**, 450 (1990). * [35] W. Camacho, A. Valles-Lluch, A. Ribes-Greus, S. Karlsson, _Journal of Applied Polymer Science_**87**, 2165 (2003). * [36] Y. Zhang, _et al._, _Physical Review X_**5**, 1 (2015). * [37] I. O. Ucar, H. Y. Erbil, _Applied Surface Science_**259**, 515 (2012). * [38] H. T. Baytekin, _et al._, _Science_**333**, 308 (2011). * [39] R. D. Cruise, K. Hadler, S. O. Starr, J. J. Cilliers, _Journal of Physics D: Applied Physics_**55**, 185306 (2022). * [40] A. Diaz, R. Felix-Navarro, _Journal of Electrostatics_**62**, 277 (2004). * [41] X. Zhang, L. Chen, Y. Jiang, W. Lim, S. Soh, _Chemistry of Materials_**31**, 1473 (2019). * [42] L. S. McCarty, A. Winkleman, G. M. Whitesides, _Journal of the American Chemical Society_**129**, 4075 (2007). * [43] H. 
Zou, _et al._, _Nature Communications_**10**, 1427 (2019). * [44] J. Henniker, _Nature_**196**, 474 (1962). * [45] A. Stukowski, _Modelling and Simulation in Materials Science and Engineering_**18**, 015012 (2009). * [46] G. Grosjean, S. Wald, J. C. Sobarzo, S. Waitukaitis, _Physical Review Materials_**4**, 082602 (2020). * [47] V. Lee, N. M. James, S. R. Waitukaitis, H. M. Jaeger, _Physical Review Materials_**2**, 035602 (2018). * [48] X. Shen, A. E. Wang, R. M. Sankaran, D. J. Lacks, _Journal of Electrostatics_**82**, 11 (2016). * [49] R. Fu, X. Shen, D. J. Lacks, _The Journal of Physical Chemistry C_**121**, 12345 (2017). * [50] J. Wu, X. Wang, H. Li, F. Wang, Y. Hu, _Nano Energy_**63**, 103864 (2019). * [51] H. Gao, _et al._, _Advanced Functional Materials_**33**, 2213410 (2023). * [52] D. Frenkel, B. Smit, M. A. Ratner, _Understanding Molecular Simulation: From Algorithms to Applications_, vol. 50 (AIP Publishing, 1997). * [53] H. Zhang, S. Sundaresan, M. A. Webb, _The Journal of Physical Chemistry B_**127**, 5115 (2023). * [54] P. S. H. Henry, _British Journal of Applied Physics_**4**, S6 (1953). * [55] A. Coehn, _Annalen der Physik_**300**, 217 (1898). * [56] A. E. Wang, _et al._, _Physical Review Materials_**1**, 035605 (2017). * [57] X. Liu, J. Kolehmainen, I. Nwogbaga, A. Ozel, S. Sundaresan, _Powder Technology_**375**, 199 (2020). * [58] G. Grosjean, S. Waitukaitis, _Physical Review Letters_**130**, 098202 (2023). * [59] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, P. A. Kollman, _Journal of Computational Chemistry_**13**, 1011 (1992). * [60] O. Tilmatine, T. Zeghloul, K. Medles, L. Dascalescu, A. Fatu, _Journal of Electrostatics_**115**, 103651 (2022). * [61] A. P. Thompson, _et al._, _Computer Physics Communications_**271**, 108171 (2022). * [62] W. L. Jorgensen, D. S. Maxwell, J. Tirado-Rives, _Journal of the American Chemical Society_**118**, 11225 (1996). * [63] S. W. Siu, K. Pluhackova, R. A. Bockmann, _Journal of Chemical Theory and Computation_**8**, 1459 (2012). * [64] H. Berendsen, J. Grigera, T. Straatsma, _Journal of Physical Chemistry_**91**, 6269 (1987). * [65] S. Chatterjee, P. G. Debenedetti, F. H. Stillinger, R. M. Lynden-Bell, _The Journal of Chemical Physics_**128**, 124511 (2008). * [66] D. J. Bonthuis, S. I. Mamatkulov, R. R. Netz, _The Journal of Chemical Physics_**144**, 104503 (2016). * [67] I. C. Yeh, M. L. Berkowitz, _Journal of Chemical Physics_**111**, 3155 (1999). * [68] V. Ballenegger, A. Arnold, J. J. Cerda, _The Journal of Chemical Physics_**131**, 094107 (2009). ### Acknowledgments The authors declare no competing interests. H.Z. and M.A.W. acknowledge support from a Princeton Innovation Grant via the Project X Fund. We also acknowledge support from the "Chemistry in Solution and at Interfaces" (CSI) Center funded by the U.S. Department of Energy through Award No. DE-SC0019394. Simulations and analyses were performed using resources from Princeton Research Computing at Princeton University, which is a consortium led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing. M.A.W., H.Z., and S.S. designed research; H.Z. performed research; H.Z. and M.A.W. analyzed data; all authors discussed results; H.Z. and M.A.W. wrote the paper; M.A.W., H.Z., and S.S. edited the paper. 
### Description of SI Appendix Comparison to Triboelectric Matrices Sourced from Literature Data; Distance Analysis of Two-surface Free-energy Calculations; Comparison of Ion Distributions in Proximity to PVA; Additional Simulation Details; Additional Description of Single-surface Free-energy Calculations; Additional Description of Two-surface Free-energy Calculations. #### SI Datasets free_energy_one_surface.csv; free_energy_two_surface.csv # Supporting Information (SI) Appendix for Evidence of thermodynamic driving forces in the contact charging of insulating polymers Hang Zhang, Sankaran Sundaresan, Michael A. Webb Corresponding author: Michael A. Webb, mawebb@princeton.edu **This PDF file includes:** Supporting Information Text: * Comparison to Triboelectric Matrices Sourced from Literature Data * Comparison of Ion Distributions in Proximity to PVA * Distance Analysis of Two-surface Free-energy Calculations * Additional Simulation Details * Additional Description of Single-surface Free-energy Calculations * Additional Description of Two-surface Free-energy Calculations Supporting Information Figures: * Figs. S1 to S5 Supporting Information Dataset Descriptions: * SI Dataset S1 * SI Dataset S2 ### Comparison to Triboelectric Matrices Sourced from Literature Data Triboelectric series were collected from ten publications from 1898 to 2022 that featured at least a subset of the six polymers investigated in this work. The reported triboelectric series, which are a common representation of contact charging experiments, were then formulated as a matrix as introduced in the main text. Fig. S1 summarizes all the data. The figure overall conveys some level of inconsistency in experimental settings even for the same reported materials; however, there are also some broadly conserved trends. For example, most of the top-right and bottom-left corners appear consistently colored for the experimental data, and these trends are further reflected in the predictions made by the free-energy calculations. ### Comparison of Ion Distributions in Proximity to PVA In Fig. 3A, the distributions of H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) appear similar along the single dimension relative to the polymer interface. However, Fig. S2 illustrates that the two ions do exhibit differences when resolving positioning relative to both the polymer interfaces and water interfaces. In Fig. S2, positioning along the diagonal but away from the origin indicates that the ion resides toward the center of the droplet, as it is simultaneously distanced from both interfaces. Meanwhile, positioning in a lower horizontal band towards \(y=0\) suggests the ion is close to a water interface; however, it need not be close to a polymer interface (moving right), implying that the ion is closer to a water-vapor interface. Therefore, the simulations indicate that H\({}_{3}\)O\({}^{+}\) displays some preference to reside near the water-vapor interface, while OH\({}^{-}\) is always found in the interior of the water droplet. This behavior is generally preserved across all surfaces. ### Distance Analysis of Two-surface Free-energy Calculations To guide selection of distances for the two-surface free-energy calculations, a set of preliminary simulations was performed at varying distances (\(d=15,25,40,55\) Å) for PMMA-PVC pairings. 
PMMA and PVC were selected because the single-surface thermodynamic integration calculations predicted them to acquire the most positive/negative charge; the span of distances allowed for the formation of a water bridge between the two surfaces without any direct contact of PMMA and PVC atoms. Fig. S3A provides the corresponding free-energy profiles as a function of ionic dipole. At the small separation of \(d=15\) Å, preference for either ion to specific surfaces is not clearly evident. At larger separations (\(d=25,40,55\) Å), relative affinities become statistically discernible, with all separations yielding qualitatively similar interpretations. Consequently, a first group of simulations amongst all surface pairs was performed with \(d=25\) Å (Fig. S3). For a subset of pairs (N66-PE, N66-PTFE, N66-PVC, PE-PTFE, PMMA-PTFE, and PVC-PTFE), relative surface affinities were not obvious at \(d=25\) Å, and additional simulations were run at \(d=40\) Å. For PVA-PE and PVA-PTFE, simulations at larger \(d\) were not feasible due to the hydrophobicities of PE and PTFE relative to PVA (Fig. S4), which resulted in all water residing on PVA and no stable water bridge. For all polymer pairs, a 10 ns equilibrium simulation was performed. The preliminary simulations used 7.5 ns of simulation in each biasing window. The first group of simulations was run for 15 ns for each biasing window. An additional 7.5 ns of simulation was performed for \(|p_{z}|<10\) Å to get better sampling of the small \(|p_{z}|\) region. For pairs that exhibited flatter free-energy profiles, an additional 7.5 ns of simulation was run to assess convergence. The second group of simulations was performed for 15 ns in each biasing window. ### Additional Simulation Details All MD simulations were conducted using version 3 Mar 2020 of the LAMMPS simulation package [12]. The polymer-water systems were prepared using a methodology similar to our previous work [13], except that water-ions were also embedded at the center-of-mass of the water droplets as needed. Periodic boundary conditions were applied in the \(x\) and \(y\) dimensions, while the \(z\) dimension was extended with fixed boundaries featuring repulsive walls that generated forces in a direction perpendicular to the wall. Polymers were described with parameters obtained from the all-atom Optimized Potentials for Liquid Simulations (OPLS-AA) force field [14, 15], while water was described using the extended simple point charge model (SPC/E) [16, 17]. The water ions were represented using a nonpolarizable force field that was parameterized to reproduce thermodynamic properties, such as solvation free energies in water [18]. Real-space non-bonded interactions were truncated at 10 Å. Long-range electrostatics were handled using the particle-particle-particle-mesh Ewald summation method [19] with a convergence accuracy of \(10^{-5}\); this method was modified to accommodate the slab geometry with a non-periodic \(z\) dimension [20]. Equations of motion were evolved using a velocity-Verlet integration scheme with a 1 fs timestep. A rigid geometry was maintained for all water and ion molecules using the SHAKE algorithm [21]. Unless otherwise specified, temperature was controlled at 300 K using a Nosé-Hoover thermostat [22] with a damping constant of 100 fs. Following system preparation, 20 ns equilibrium simulations were conducted. 
Subsequently, 20 ns production simulations were performed for all systems, and an additional 20 ns of simulations were conducted for \(N_{\text{w}}=2000\) for structural analysis. Interfaces were identified according to the approach of Willard and Chandler [23]; these calculations were facilitated by the Pyth package [24] using the same settings as in our previous work [13]. Figure S3: Dependence of relative ion-surface affinities on distance between surfaces. (A) Comparison of relative ion partitioning between surfaces as a function of ionic dipole, \(K_{AB}^{+-}(p_{z})\), as surfaces are separated at distances of \(d=15,25,40,\) and \(55\) Å. (B) Summary of all \(K_{AB}^{+-}(p_{z})\) for all pairs of surfaces separated by \(d=25\) Å. The surface pairs denoted with “**” are further simulated at \(d=40\) Å due to their relatively weak dependence on \(p_{z}\); these results are shown in the main text. The shaded regions indicate statistical uncertainties that reflect the standard error of the mean as estimated from bootstrap resampling. Figure S4: Effect of surface hydrophobicity on two-surface free-energy calculations. (A) Configurational snapshot of water bridge forming between PTFE and PVA separated by \(d=25\) Å. (B) Configurational snapshot of water bridge forming between PE and PVA. Both PTFE and PE are substantially more hydrophobic than PVA. Consequently, the majority of water molecules in the system are recruited towards the PVA surface, which affects the accessible volume of ions. Both snapshots are rendered using OVITO [11]. The atoms are colored such that carbon is gray, fluorine is green, hydrogen is white, oxygen in water and ion is red, and oxygen in PVA is pink. ### Additional Description of Single-surface Free-energy Calculations Thermodynamic Integration (TI) was used to compute the free energy of adding one ion to the water droplet. Previously equilibrated configurations were used as the initial configurations for the TI simulations, and the free energy was evaluated as: \[F_{+/-}(N_{\rm w})=\int_{0}^{1}\left\langle\frac{dU(\lambda,\vec{q})}{d\lambda}\right\rangle_{\lambda}d\lambda=\sum_{i=1}^{12}w_{i}\left\langle\frac{dU(\lambda,\vec{q})}{d\lambda}\right\rangle_{\lambda_{i}}. \tag{1}\] In Eq. (1), \(\langle\cdot\rangle_{\lambda}\) denotes an ensemble-average obtained using \(\lambda\), which is the thermodynamic path variable such that \(\lambda=0\) corresponds to a state with only the water droplet and the polymer surface and \(\lambda=1\) corresponds to the state with a water droplet, a water ion, and a polymer surface. As shown, the integral is numerically approximated using 12-point Gauss-Legendre quadrature with \(\lambda\in\{\)0.00922, 0.04794, 0.11505, 0.20634, 0.31608, 0.43738, 0.56262, 0.68392, 0.79366, 0.88495, 0.95206, 0.99078\(\}\). For the configurational potential energy with \(\lambda\), \(U(\lambda,\vec{q})\), we utilized a soft-core potential [25] for pairwise Coulombic and Lennard-Jones potential energy contributions involving the ion molecule: \[U_{\rm LJ}(r_{ij},\sigma_{ij},\varepsilon_{ij},\lambda)=4\lambda\varepsilon_{ij}\Bigg\{\frac{1}{\left[0.5(1-\lambda)^{2}+\left(\frac{r_{ij}}{\sigma_{ij}}\right)^{6}\right]^{2}}-\frac{1}{0.5(1-\lambda)^{2}+\left(\frac{r_{ij}}{\sigma_{ij}}\right)^{6}}\Bigg\} \tag{2}\] and \[U_{\rm coul}(r_{ij},q_{i},q_{j},\lambda)=\lambda\frac{q_{i}q_{j}}{\left[10(1-\lambda)^{2}+r_{ij}^{2}\right]^{1/2}}. \tag{3}\] The utilization of soft-core potentials allows for Eq. (2) and Eq. 
(3) to possess the same \(\lambda\), as opposed to performing TI in two stages (e.g., first handling \(U_{\rm LJ}\) terms and then \(U_{\rm coul}\) terms). Because the simulation box is heterogeneous, we find this preferable for sampling efficiency, as utilizing only Lennard-Jones interactions in the absence of electrostatics causes the ion to predominantly explore the vast "vapor" phase. We note that the free energy depends explicitly on \(N_{\rm w}\) and that there also exists a subtle finite-size effect based on our heterogeneous system construction. Detailed discussion of this finite-size effect can be found in our previous work [13]; however, the effect is inconsequential in the construction of our free-energy differences. ### Additional Description of Two-surface Free-energy Calculations Two-surface systems were prepared by flipping and adding one equilibrated polymer-water system with one ion type to another with the opposite ion. For each pair of polymers, one of three amorphous systems for one polymer was randomly chosen and paired with a randomly chosen system for the other polymer. The distance between the two surfaces, \(d\), was then set based on the average surface interface position. \(F_{AB}(p_{z})\) was calculated using umbrella sampling with statistical re-weighting via the weighted histogram analysis method [26]. Data were collected across 36 windows that each employed a harmonic biasing potential on \(p_{z}\). The biasing potentials utilized spring constants of 47.8011 kcal/mol and equilibrium positions at \(-35\) to 35 Å in 2 Å increments. Sampling was facilitated using version 2.8.1 of PLUMED [27]. We note that the classical force field for H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) was not parameterized [18] to handle possible recombination of ionic species into neutral water molecules. To focus sampling on configurations for which the ions are separate charged species and within the realm of applicability of the force field, we modified the non-bonded interaction between oxygen atoms on H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) to be repulsive at distances less than approximately 4 Å. This is practically achieved by increasing \(\varepsilon\) in the Lennard-Jones potential to \(1.0\) kcal/mol (Fig. S5). The net effect of this modification is that the ions do not form unphysical hydrogen bonds, which would otherwise arise using the original parameters. Consequently, \(F_{AB}(p_{z})\) conditionally depends on the ions being separate charged species that are separated by approximately 3 Å. Formally, \(F_{AB}(p_{z})\) is biased by this modified potential, but Fig. S5 shows that this is effectively negligible beyond 4 Å, and so re-weighting was not performed with respect to this bias. ### Dataset S1 free_energy_one_surface.xlsx Summary of results from thermodynamic integration of ion addition to water droplets atop polymer surfaces. ### Dataset S2 free_energy_two_surface.xlsx Summary of results from umbrella sampling on the ionic dipole within water channels formed between two polymer surfaces.
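Referring back to Eqs. (1)-(3) above, the following is a minimal, self-contained sketch (our own illustrative Python, not the production LAMMPS implementation) of the soft-core pair interactions and of how 12-point Gauss-Legendre quadrature combines per-node averages of \(dU/d\lambda\) into a free energy; the \(\lambda\) nodes reproduce the values quoted after Eq. (1), while the \(\langle dU/d\lambda\rangle\) values are placeholders.

```python
import numpy as np

def softcore_lj(r, sigma, eps, lam):
    """Soft-core Lennard-Jones interaction of Eq. (2)."""
    s6 = 0.5 * (1.0 - lam) ** 2 + (r / sigma) ** 6
    return 4.0 * lam * eps * (1.0 / s6**2 - 1.0 / s6)

def softcore_coul(r, qi, qj, lam):
    """Soft-core Coulomb interaction of Eq. (3), in the force field's native units."""
    return lam * qi * qj / np.sqrt(10.0 * (1.0 - lam) ** 2 + r**2)

# 12-point Gauss-Legendre nodes/weights mapped from [-1, 1] onto [0, 1].
x, w = np.polynomial.legendre.leggauss(12)
lam_nodes = 0.5 * (x + 1.0)
weights = 0.5 * w
print(np.round(np.sort(lam_nodes), 5))  # matches the lambda values after Eq. (1)

# Placeholder per-node ensemble averages <dU/dlambda> (kcal/mol); in practice each
# value is estimated from a 6 ns simulation run at the corresponding lambda node.
dUdlam = np.linspace(-120.0, -20.0, 12)
F_ion = np.sum(weights * dUdlam)  # Eq. (1): F = sum_i w_i <dU/dlambda>_{lambda_i}
print(f"F = {F_ion:.2f} kcal/mol (illustrative)")
```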
2301.13803
Fairness-aware Vision Transformer via Debiased Self-Attention
Vision Transformer (ViT) has recently gained significant attention in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the attention mechanism. Whereas recent works have explored the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We establish that the existing fairness-aware algorithms designed for CNNs do not perform well on ViT, which highlights the need to develop our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that enforces ViT to eliminate spurious features correlated with the sensitive label for bias mitigation and simultaneously retain real features for target prediction. Notably, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches with an additional attention weights alignment regularizer in the training objective to encourage learning real features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance. Code is available at \href{https://github.com/qiangyao1988/DSA}{https://github.com/qiangyao1988/DSA}.
Yao Qiang, Chengyin Li, Prashant Khanduri, Dongxiao Zhu
2023-01-31T17:44:59
http://arxiv.org/abs/2301.13803v3
# Fairness-aware Vision Transformer via Debiased Self-Attention ###### Abstract Vision Transformer (ViT) has recently gained significant interest in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the self-attention mechanism. To fully realize the advantages of ViT in real-world applications, recent works have explored the trustworthiness of ViT, including its robustness and explainability. However, another desideratum, fairness, has not yet been adequately addressed in the literature. We establish that the existing fairness-aware algorithms (primarily designed for CNNs) do not perform well on ViT. This necessitates developing our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that enforces ViT to eliminate spurious features correlated with the sensitive attributes for bias mitigation. Notably, adversarial examples are leveraged to locate and mask the spurious features in the input image patches. In addition, DSA utilizes an attention weights alignment regularizer in the training objective to encourage learning informative features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance. ## 1 Introduction Recently, Vision Transformer (ViT) [11, 30] has emerged as an architectural paradigm and a viable alternative to the standard Convolutional Neural Network (CNN) [19, 27, 42] for computer vision (CV) tasks. Unlike CNN, ViT is capable of extracting global relationships via the self-attention mechanism as well as informative features from the input images, leading to impressive feature representation capabilities. Consequently, ViT has demonstrated improved performance in a variety of CV tasks, including image classification [11, 30], object detection [3, 9], semantic segmentation [55, 67], and image generation [21], to name a few. Due to its promising performance, it is anticipated that ViT will form the architectural backbone of CV algorithms in the near future for real-world applications. This has led researchers to analyze the trustworthiness of ViT for solving CV tasks. Studying the robustness of ViT has recently attracted a growing interest [2, 13, 50, 58, 51]. It is critical to improve ViT's robustness in order to deploy it safely in the real world. On the other hand, investigating ViT's vulnerability to attacks can give us a deeper understanding of its underlying working mechanism. In the past, researchers have dissected the self-attention mechanism [1, 47] and the gradient-based attribution [4] to offer a faithful explanation of the inner workings of ViT or Transformer at large. Besides robustness and explainability, fairness also stands as a core trustworthiness desideratum for both industry [20] and academia [7]. Several studies show that many deep-learning-based CV models simply make predictions by exploiting spurious correlations with the input features [23, 58]. These spurious features are statistically informative features that work for a majority of training examples but do not capture the underlying relationship between the input features and the target labels. For illustration, let us consider the example in Figure 1 (taken from the CelebA dataset). 
Since the target label, hair color, is spuriously correlated with the gender-related sensitive attributes, e.g., 'eye shadow' or 'red lips' in Figure 1(b), a vanilla ViT model would simply learn these spurious features as a shortcut to predict the hair color, whereas our fairness-aware ViT model learns the informative features, e.g., 'hair' in Figure 1(c), to make predictions. Figure 1: An illustration example. The prediction target is hair color and the sensitive attribute is gender. The heatmaps of attention weights show that the vanilla ViT (b) uses gender-sensitive features, e.g., ‘red lip’ and ‘eye shadow’, whereas our fairness-aware ViT DSA (c) uses informative features, e.g., ‘hair’, for predictions. Such spurious correlations can cause ViT to behave in a biased manner, e.g., a lower performance on some population subgroups [54, 61]. Although an array of debiasing algorithms has been proposed for image classification tasks [60, 65, 66, 23, 46], most are designed for learning with CNN models. Whether these algorithms are compatible or even transferable to the ViT architecture is unclear. Regardless of the neural network architecture, limiting the spurious correlation between the input features and the target labels for bias mitigation is still a challenging problem. The difficulty arises from the fact that automatically locating the spurious features in the input images is computationally intractable. For example, one simple solution is to have domain experts and/or crowd workers curate the entire training set, which neither works well with unknown bias [29] nor is scalable to large-scale datasets [39]. Moreover, even if one can identify the spurious features, the major challenge is how to make the classifier blind to such features. Image in-painting [59, 35] appears to be a promising approach to remove the undesired features; nevertheless, significant challenges remain regarding what old features to cut out for _debiasing_ and what new features to fill in to _repair_ the corrupted images. To address the above challenges, we propose a novel framework for ensuring bias mitigation training of ViT via Debiased Self-Attention (DSA) to decouple the target prediction from the spurious features. DSA takes a hierarchical approach, where, in the first stage, we first localize the spurious features from the input image patches. This is achieved by training a bias-only model which exploits the spurious features to explicitly predict the sensitive attributes (e.g., gender and race). We then use adversarial attacks against the bias-only model to identify and perturb (or mask) the top patches that are responsible for the decreased accuracy in predicting the sensitive attributes. Notably, our approach for fair ViTs is a novel addition to the growing body of work on "adversarial examples for fairness" [66, 23]. DSA relies on the intuitive hypothesis that the adversarial attacks initially designed to evaluate and understand the robustness of ViT can also be a viable approach for identifying and removing the spurious features towards training fair ViT models. While whether the approaches that generate adversarial examples for CNNs are transferable to ViT remains a matter of contention [44, 45, 41, 17, 53], the work in [13] proposes Patch-Fool as one of the first approaches to fool the self-attention mechanism by attacking image patches (as opposed to pixels) during ViT's self-attention computations. 
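As a rough, hypothetical illustration of this first stage (this is not the authors' Patch-Fool attack or the exact DSA procedure), the sketch below ranks image patches by the attention they receive from the [CLS] token of a bias-only ViT and masks the top-\(k\) patches; all module names, shapes, and the selection heuristic are our own assumptions.

```python
import torch

def mask_top_attended_patches(patches, attn, k=10):
    """patches: (B, N, D) patch embeddings (or flattened pixel patches).
    attn: (B, H, N+1, N+1) attention weights from a bias-only ViT whose token 0
    is [CLS].  Zeroes out the k patches that receive the most [CLS] attention,
    a crude stand-in for attacking/masking sensitive patches."""
    cls_to_patch = attn.mean(dim=1)[:, 0, 1:]          # average over heads -> (B, N)
    topk = cls_to_patch.topk(k, dim=-1).indices        # (B, k) indices of suspect patches
    masked = patches.clone()
    batch_idx = torch.arange(patches.size(0)).unsqueeze(-1)
    masked[batch_idx, topk] = 0.0                       # zero out the selected patches
    return masked, topk

# Toy example: 2 images, 196 patches of dimension 768, 12 attention heads.
patches = torch.randn(2, 196, 768)
attn = torch.softmax(torch.randn(2, 12, 197, 197), dim=-1)
masked, idx = mask_top_attended_patches(patches, attn, k=10)
print(idx.shape, masked.shape)  # torch.Size([2, 10]) torch.Size([2, 196, 768])
```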
In this work, we apply Patch-Fool to attack the bias-only model with the goal of capturing the most important patches for learning the sensitive attributes. As a result, the effect of sensitive features can be mitigated with adversarial examples, which are constructed by directly perturbing (attacking) the sensitive patches. In the second stage, in addition to augmenting the original training set with these adversarial examples as the debiased training set, we also align the biased examples and their corresponding (unbiased) adversarial examples via an attention weights aligning regularizer tailor-made for self-attention mechanism in ViT. This leads to a novel training objective that encourages learning informative features while ensuring fairness of the trained ViT models. **Major contributions:** We summarize our major contributions as follows: (1) We tackle the under-addressed fairness problem in ViT from a novel perspective of leveraging adversarial examples to eliminate spurious features while utilizing attention weights alignment to retain informative features. (2) We design a novel DSA framework for ViT to mitigate bias in both training set and learning algorithm via identifying and decorrelating the sensitive features from the target label. (3) DSA presents a flexible and modular debiasing approach that can be used either standalone or with other fairness-aware training algorithms to ensure ViT fairness. (4) Experimental results show that DSA improves group fairness with respect to demographic parity (DP) and equality of odds (EO) metrics while achieving a competitive or even better prediction accuracy compared to the baselines. The qualitative analysis further indicates that DSA has reduced attention on sensitive features. ## 2 Related Work ViT based Classification.The earlier exploration of ViT either used a hybrid architecture combining convolution and self-attention [3] or a pure self-attention architecture without convolution [48]. The work in [11] proposed a ViT that achieves impressive results on image classification using ImageNet data set. This success has motivated a series of subsequent works to further exploit ViT's expressive power from various perspectives, such as incorporating locality into ViT [28, 30, 63], and finding well-performing ViT using neural architecture search (NAS) [6]. Fairness and Debiased Learning.The field of fairness in deep learning has grown significantly over the past several years as a result of bias in training data and algorithms [36, 46]. The existing techniques for debiased learning can be roughly categorized into pre-, in-, and post-processing. **- Pre-processing** methods attempt to debias and increase the quality of the training set with the assumption that fair training sets would result in fair models [8, 25, 66]. The work in [66] proposed to balance the data distribution over different protected attributes by generating adversarial examples to supplement the training dataset. Similarly, [25] generated the bias-swapped image augmentations to balance protected attributes, which would remove spurious correlation between the target label and protected attributes. In [8], the authors presented fair mixup as a new data augmentation method to generate interpolated samples to find middle-ground representation for different sensitive groups. 
The work in [46] described a novel generative data augmentation approach to create counterfactual samples that d-separate the sensitive attributes from the targets, ensuring fairness and attribution-based explainability. **- In-processing** approaches aim to mitigate bias during the training process by directly modifying the learning algorithm and model weights with specifically designed fairness penalties/constraints or adversarial mechanisms [24, 36, 40, 49, 65]. To enforce the fairness constraints, one line of works either disentangles the association between model predictions and the sensitive attributes via an auxiliary regularization term [40] or minimizes the performance difference between protected groups with a novel objective function [49]. However, the issue is that the trained models may behave differently at the inference stage even though such fairness constraints are satisfied during training. Another line of works [24, 36, 60, 65] forces the model to generate fair outputs with adversarial training techniques through a min-max objective: maximizing accuracy while minimizing the ability of a discriminator to predict the protected (sensitive) attribute. Nevertheless, this process can compromise the model performance on the main prediction task. A further line of works imposes either orthogonality [51], disentanglement [32], or feature alignment [23] constraints on the feature representation and forces the representation to be agnostic to the sensitive attributes. We note that most of these approaches are exclusively designed for CNN architectures, and whether these approaches are transferable to ViT has not yet been demonstrated. **- Post-processing** techniques directly calibrate or modify the classifier's decisions to satisfy certain fairness criteria at inference time [26, 33]. These methods require access to the sensitive attribute for fair inference, which may not be feasible in real-world applications due to salient security and privacy concerns. Fairness in ViT. Recently, [16] explored how spurious correlations are manifested in ViT and performed extensive experiments to understand the role of the self-attention mechanism in debiased learning of ViT. Despite the new insights, the authors did not provide any debiasing techniques for ViT. The authors in [56] proposed a new method, named TADeT, for debiasing ViT that aims to discover and remove bias primarily from the query matrix features. To our knowledge, this is the only published work along the line of fairness in ViT. Nevertheless, this pioneering work TADeT has two weaknesses: first, it requires parameter sharing across the key and value weights in the self-attention mechanism, which may conflict with most ViT architectures; second, the complex alignment strategy on the query matrix is neither straightforward nor well investigated. Thus, TADeT does not even outperform the compared baselines that are primarily designed for CNNs. In contrast to the above works, this work tackles the debiasing problem from a novel perspective of fairness-through-adversarial-attack. The proposed DSA framework combines the strengths of both pre- and in-processing approaches by leveraging data augmentation (for ensuring fairness in the training set) and feature alignment for bias mitigation. The adversarial examples are used both to disentangle spurious features from informative features and to align attention weights, in a manner specifically tailor-made for the self-attention mechanism in ViT. 
## 3 Preliminaries ### Overview of Vision Transformer Similar to the Transformer architecture [57], the ViT model expects the input to be a linear sequence of token/patch embeddings. An input image is first partitioned into non-overlapping fixed-size square patches with resolution \(p\times p\), resulting in a sequence of flattened 2D patches. For example, given an image of size \(384\times 384\) and patch size \(p=16\), the image is divided into patches of resolution \(16\times 16\), resulting in \(576\) image patches. These patches are then mapped to constant-size embeddings using a trainable linear projection. In this example, the projection layer will produce 576 embedding vectors of fixed dimensions. Next, position embeddings are added to the patch embeddings to encode relative positional information of the patches. Finally, the ViT model prepends a learnable embedding (class token) to the sequence of embedded patches following [10], which is used as the image representation at the model's output. The core architecture of ViT mainly consists of multiple stacked encoder blocks, where each block primarily consists of a Multi-head Self Attention (MSA) layer and a Feed-Forward Network (FFN) layer. Within the MSA layer, multiple self-attention heads learn relationships between each pair of input patches. Using three different linear transformations, the input patch \(\mathbf{x}_{i}\) is first projected to a query \(\mathbf{q}_{i}\), a key \(\mathbf{k}_{i}\), and a value \(\mathbf{v}_{i}\) in each self-attention head, where \(i\) is the index of the patch. The query \(\mathbf{q}_{i}\) is then used to compute dot products with all the keys \(\mathbf{k}\), which are further scaled and normalized by the softmax layer to obtain the attention weights. After this, the head outputs \(\mathbf{h}_{i}\) as the weighted sum of all the values \(\mathbf{v}\) using the obtained attention weights. Finally, the outputs from all heads are concatenated and re-projected by a linear layer into an output patch. The FFN consists of two linear layers, which are connected by the GeLU activation function and process each \(\mathbf{h}_{i}\in\mathbb{R}^{d}\) from the preceding MSA layer individually. Both the MSA and FFN layers are wrapped with residual connections. ### Fairness Metrics Many different notions of fairness have been proposed in the literature [12, 18]. In this work, we mainly focus on the two most widely used definitions: demographic parity [12] and equalized odds [18] as the metrics to assess group fairness of the model. Demographic Parity (DP) measures whether the true positive rates across all groups (defined by a sensitive attribute \(s\), e.g., gender) are equal, particularly between the vulnerable minority group (\(s=0\)) and others (\(s=1\)), formally: \(DP=TPR_{s=1}-TPR_{s=0}\). Equalized Odds (EO) is used to understand the disparities in both the true positive rates and the false positive rates in the vulnerable group compared to others: \(EO=\frac{1}{2}[TPR_{s=1}-TPR_{s=0}]+\frac{1}{2}[FPR_{s=1}-FPR_{s=0}]\). In addition, we also use Accuracy (ACC) and Balanced Accuracy (BA) [43], where \(BA=\frac{1}{4}[TPR_{s=0}+TNR_{s=0}+TPR_{s=1}+TNR_{s=1}]\), to evaluate the utility of the model. However, when a dataset is class imbalanced, BA will have an implicit bias against the minority class. 
Therefore, we introduce Difference of Balanced Accuracy (DBA) as a way to measure the difference in a model's performance across groups defined by a sensitive attribute while accounting for class imbalance, formally: \(DBA=\frac{1}{2}[TPR_{s=1}+TNR_{s=1}]-\frac{1}{2}[TPR_{s=0}+TNR_{s=0}]\). ## 4 The Proposed Framework ### Problem Formulation We consider a supervised classification task with training samples \(\{x,y,s\}\sim p_{data}\), where \(x\in\mathcal{X}\) is the input feature, \(y\in\mathcal{Y}\) is the target label, and \(s\in\mathcal{S}\) is an annotated sensitive categorical attribute that we wish to protect. Some examples of \(s\) include gender, race, age or other attributes that can identify a certain protected group. We assume that the sensitive attributes \(\mathcal{S}\) can only be used during training phase, and are not accessible during the inference (post-training phase). Moreover, we suppose that each input feature \(x\) can be split into two parts, one with sensitive features \(x_{s}\) that are highly relevant to the sensitive attribute \(s\), and the rest \(x_{t}\) that are relevant to the prediction of the target label \(y\), i.e., we have \(x=(x_{s},x_{t})\in\mathcal{X}\). We develop a two-step hierarchical approach for bias mitigation, wherein, in the first stage, we localize and mask the sensitive attributes \(x_{s}\) from the input \(x\) in order to disentangle \(x_{s}\) from \(x_{t}\). This is accomplished by transforming the model prediction from \(p(x)=p(y|x_{s},x_{t})\) to \(p(x)\propto p(x^{\prime})=p(y|x^{\prime}_{t})\), where \(x^{\prime}\) is the sample constructed after masking the sensitive attributes \(x_{s}\) from \(x\) via adversarial attacks. In the second stage, we utilize the original \(x\) and the augmented data \(x^{\prime}\) to train a ViT model \(f(\cdot)\) for generating the prediction, as \(\hat{y}=f(x)\), while at the same time satisfying certain fairness requirements (i.e., DP, EO, and DBA) with respect to the sensitive attributes \(s\). ### Bias in Training Set and ViT Model The tendency of neural networks (including ViT) to learn spurious correlations makes them particularly vulnerable to utilizing sensitive features to make predictions, thereby, propagating biases towards a particular group [15]. This issue is particularly salient with the current deep learning models that follow the data-driven learning paradigm and are trained with imbalanced data set where some sensitive features could have a high correlation with certain class labels. Our work is motivated by the empirical observation that the bias in learning is mainly caused by the model's reliance on sensitive features for prediction. Note that the sensitive features \(x_{s}\) are parts of the input features \(x\), that are highly predictive of the sensitive attribute \(s\). In Figure 1, we visualize the attention weights from the ViT model to analyze the importance of different features. In this example, gender is the sensitive attribute that is highly correlated with the prediction task of hair color. The Vanilla model may pay more attention on the gender related features, indicating that it has associated gender with the hair color. This association might lead the ViT model to discriminate against the female group. We have thus established that, for the image classification task using CelebA dataset, the ViT model is heavily biased as it relies on the sensitive features for prediction. This observation naturally leads to our DSA framework for bias mitigation discussed next. 
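The biased behavior described above is quantified in the experiments with the group metrics of Section 3.2. For concreteness, the sketch below shows one way to compute DP, EO, BA, and DBA from per-group predictions of a binary classifier with a binary sensitive attribute; the function and variable names are illustrative and are not taken from the paper's evaluation code.

```python
import numpy as np

def group_rates(y_true, y_pred, mask):
    """True positive, false positive, and true negative rates restricted to one sensitive group."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = float(np.mean(yp[yt == 1] == 1)) if np.any(yt == 1) else 0.0
    fpr = float(np.mean(yp[yt == 0] == 1)) if np.any(yt == 0) else 0.0
    return tpr, fpr, 1.0 - fpr

def fairness_metrics(y_true, y_pred, s):
    """DP, EO, BA, and DBA as defined in Section 3.2 for a binary sensitive attribute s."""
    tpr1, fpr1, tnr1 = group_rates(y_true, y_pred, s == 1)
    tpr0, fpr0, tnr0 = group_rates(y_true, y_pred, s == 0)
    return {
        "DP": tpr1 - tpr0,
        "EO": 0.5 * (tpr1 - tpr0) + 0.5 * (fpr1 - fpr0),
        "BA": 0.25 * (tpr0 + tnr0 + tpr1 + tnr1),
        "DBA": 0.5 * (tpr1 + tnr1) - 0.5 * (tpr0 + tnr0),
    }
```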
### Debiased Self-Attention (DSA) Framework The discussion in Section 4.2 demonstrates that the reliance of ViT on the sensitive features for prediction can lead to bias. Therefore, to mitigate the bias originating from the sensitive features, we propose to achieve fairness by mitigating the influence of sensitive features on the prediction task. However, note that it is a challenging task to locate the sensitive features in the input. To address this challenge, we propose a hierarchical framework as discussed in Section 4.1. Specifically, our DSA framework follows a two-step procedure (Figure 2): Step 1: Firstly, we train a _bias-only_ model that deliberately maximizes the usage of sensitive features for prediction, followed by adversarial attack on the _bias-only_ model to localize and mask the sensitive attributes. Step 2: Second, we train a _debiased_ model with augmented adversarial examples and attention weights alignment. #### 4.3.1 Training the Bias-only Model Recall that the input feature \(x=(x_{s},x_{t})\in\mathcal{X}\) where \(x_{s}\) are the sensitive features while \(x_{t}\) are the target related features. The goal of Step 1 (see Section 4.2) is to learn only the sensitive features \(x_{s}\), during training the bias-only model. To achieve this, we first build a bias-only ViT model which maximally utilizes the sensitive features for prediction. We denote the bias-only model by \(f_{B}(x,s)=c(h(x),s)\), where \(h(x)\) is the intermediate representation of the input \(x\), and \(c(\cdot)\) maps the intermediate representation to the final prediction. Note that \(h(x)\) contains only \(m\) elements from the categories in \(\mathcal{S}\), e.g., \(m=2\) in most of our experimental settings. The key motivation of using the \(m\) elements for input representation \(h(x)\) is to force the bias-only model to only utilize sensitive attributes to obtain the prediction \(f_{B}(x,s)\). Given \(N\) samples of the input, \(x_{i}\), and the sensitive attribute, \(s_{i}\), pairs \(\{x_{i},s_{i}\}_{i=1}^{N}\), the bias-only model minimizes the following loss. \[\mathcal{L}_{B}(x)= -\frac{1}{N}\sum_{i=1}^{N}s_{i}\log(f_{B}(x_{i},s_{i}))\] \[+(1-s_{i})\log(1-f_{B}(x_{i},s_{i})). \tag{1}\] We illustrate the idea using the example in Figure 2. We consider the hair color classification tasks with gender bias. Input representation \(h(x)\) is denoted using two elements, indicating the sensitive attributes male and female, respectively. The bias-only model \(f_{B}(x,s)\) mainly relies on the sensitive features, like 'eye shadow' and/or'red lips', to predict the label as female, while at the same time paying nearly no attention to the hair color related features like 'hair' themselves. #### 4.3.2 Adversarial Attack Against the Bias-only Model After obtaining the bias-only model, the following procedure in Step 2 of the DSA framework localizes and masks the spurious (sensitive) features via adversarial attacks that are generated using the Patch-Fool construction proposed in [13]. Specifically, Patch-Fool is designed to fool the self-attention mechanism in ViTs by attacking their basic component (i.e., a single patch) with a series of attention-aware optimization techniques, demonstrating that the ViTs are more vulnerable to adversarial attacks than the CNNs. However, in contrast to [13], instead of applying Patch-Fool as an adversarial attack method to evaluate the robustness of ViT, we utilize it to efficiently localize and mask the sensitive features in the inputs. 
To this end, we adapt the objective function of Patch-Fool in order to attack the bias-only model on the sensitive labels instead of the target labels. Specifically, given the objective function \(\mathcal{L}_{B}(x)\) and a series of input image patches \(\mathbf{X}=[\mathbf{x}_{1},\cdots,\mathbf{x}_{\mathbf{p}},\cdots,\mathbf{x}_{\mathbf{n}}]^{\mathrm{T}}\in\mathbb{R}^{n\times d}\) with its associated sensitive label \(s\), the objective of the adversarial algorithm is \[\operatorname*{arg\,max}_{1\leq p\leq n,\mathbf{E}\in\mathbb{R}^{n\times d}} \mathcal{L}_{B}(\mathbf{X}+\mathds{1}\odot\mathbf{E},s), \tag{2}\] where \(\mathbf{E}\) denotes the adversarial perturbation; \(\mathds{1}\in\mathbb{R}^{n}\) is the one-hot vector indicating whether the current \(p\)-th patch is selected; and \(\odot\) represents the penetrating face product [13]. Thus, Patch-Fool needs to (1) select the adversarial patch \(p\), and (2) optimize the corresponding adversarial perturbation \(\mathbf{E}\). Figure 2: The DSA framework. The bias-only model is first trained to learn the _spurious features_ (the green patches) for predicting the sensitive attribute (\(s\in\mathcal{S}\)) (see Section 4.3.1). An adversarial attack is then applied against the bias-only model to generate the adversarial examples \(x^{\prime}\) by perturbing the sensitive patches (the grid shadow patches) of the original inputs (\(x\in\mathcal{X}\)) (see Section 4.3.2). Finally, both \(x\) and \(x^{\prime}\) are used to train a fairness-aware ViT with an attention weights alignment objective (see Eq. (10)) and learn the target (\(y\))-related _informative features_ (the red patches) (see Sections 4.3.3 and 4.3.4). Best viewed in color. **Selection of \(p\):** For the encoder blocks in the ViT, we define \(t_{j}^{(l)}=\sum_{h,i}a_{j}^{(l,h,i)}\) to measure the importance of the \(j\)-th patch in the \(l\)-th block based on its contributions to other patches in the self-attention calculation, where \(\mathbf{a}^{(l,h,i)}=[a_{1}^{(l,h,i)},\cdots,a_{n}^{(l,h,i)}]\) denotes the attention distribution for the \(i^{\text{th}}\) patch of the \(h^{\text{th}}\) head in the \(l^{\text{th}}\) block. The motivation behind applying Patch-Fool is to localize the most influential patch \(p\) with respect to the predicted sensitive attribute \(s\). Here, we derive the top \(k\) (a tunable hyper-parameter) most important patches from \(\arg\max_{j}t_{j}^{(l)}\). **Optimize \(\mathbf{E}\):** Given the selected adversarial patch index \(p\) from the previous step, an attention-aware loss is applied for the \(l^{\text{th}}\) block as \(\mathcal{L}_{\mathrm{Attn}}=\sum_{h,i}a_{p}^{(l,h,i)}\). This loss is expected to be maximized so that the adversarial patch \(p\), serving as the target adversarial patch, can attract more attention from other patches for effectively fooling ViTs. The perturbation \(\mathbf{E}\) is then updated based on both the final sensitive classification loss and a layer-wise attention-aware loss: \[\mathcal{L}(\mathbf{X}^{\prime},s,p)=\mathcal{L}_{B}(\mathbf{X}^{\prime},s)+ \alpha\underset{l}{\sum}\mathcal{L}_{\mathrm{Attn}}(\mathbf{X}^{\prime},p), \tag{3}\] where \(\mathbf{X}^{\prime}\triangleq\mathbf{X}+\mathds{1}\odot\mathbf{E}\) and \(\alpha\) is a weight hyper-parameter set to \(0.5\) in the experiments. 
Moreover, PCGrad [64] is adopted to avoid the gradient conflict of the two losses and \(\mathbf{E}\) is updated using: \[\delta_{\mathbf{E}}=\nabla_{\mathbf{E}}\mathcal{L}(\mathbf{X}^{\prime},s,p)- \alpha\underset{l}{\sum}\beta_{l}\nabla_{\mathbf{E}}\mathcal{L}_{B}(\mathbf{ X}^{\prime},s), \tag{4}\] where \[\beta_{l}=\begin{cases}0,&\langle\nabla_{\mathbf{E}}\mathcal{L}_{B}(\mathbf{ X}^{\prime},s),\nabla_{\mathbf{E}}\mathcal{L}_{\mathrm{Attn}}(\mathbf{X}^{ \prime},p)\rangle>0\\ \frac{\langle\nabla_{\mathbf{E}}\mathcal{L}_{B}(\mathbf{X}^{\prime},s),\nabla_ {\mathbf{E}}\mathcal{L}_{\mathrm{Attn}}(\mathbf{X}^{\prime},p)\rangle}{\| \nabla_{\mathbf{E}}\mathcal{L}_{B}(\mathbf{X}^{\prime},s)\|^{2}},&\mathrm{ otherwise}.\end{cases} \tag{5}\] Following PGD [37], we iteratively update \(\mathbf{E}\) using an Adam optimizer: \(\mathbf{E}^{t+1}=\mathbf{E}^{t}+\eta\cdot Adam(\delta_{\mathbf{E}^{t}})\), where \(\eta\) is the step-size for each update. #### 4.3.3 Attention Weights Alignment After Step 1, the DSA framework generates the adversarial example \(x^{\prime}\), whose top \(k\) patches containing sensitive attributes are perturbed through the adversarial attack. Here, besides using these adversarial examples as augmentation during training of the debiased ViT models, we also leverage them via attention weights alignment to further guide the model to pay more attention to the target-related features. This also allows more sensitive features to be discovered and ignored by self-attention mechanism in the ViT models as shown in Figure 2. In particular, we apply three different feature discrepancy metrics \(D(\cdot,\cdot)\), i.e., Mean Squared Error (MSE), Kullback-Leibler Divergence (KL-Div), and Attention Transfer (AT), to evaluate the discrepancy between the attention weights \(\mathbf{A}^{x}\) and \(\mathbf{A}^{x^{\prime}}\) from the original sample \(x\) and the adversarial example \(x^{\prime}\), respectively. We define the three metrics as: \[\mathcal{L}_{A}=D_{\star}(\mathbf{A}^{x},\mathbf{A}^{x^{\prime}}), \tag{6}\] where \(D_{\star}\) is either \[D_{\mathrm{MSE}}(\mathbf{A}^{x},\mathbf{A}^{x^{\prime}})=\frac{1}{2} \underset{j\in\mathcal{I}}{\sum}\|\mathbf{A}_{j}^{x}-\mathbf{A}_{j}^{x^{\prime }}\|_{2} \tag{7}\] \[D_{\mathrm{KL-Div}}(\mathbf{A}^{x}\|\mathbf{A}^{x^{\prime}})=\underset{j\in \mathcal{I}}{\sum}\mathbf{A}_{j}^{x}\log\frac{\mathbf{A}_{j}^{x}}{\mathbf{A}_{ j}^{x^{\prime}}} \tag{8}\] \[D_{\mathrm{AT}}(\mathbf{A}^{x},\mathbf{A}^{x^{\prime}})=\frac{1}{2}\underset{j \in\mathcal{I}}{\sum}\bigg{\|}\frac{\mathbf{A}_{j}^{x}}{\|\mathbf{A}_{j}^{x}\|_ {2}}-\frac{\mathbf{A}_{j}^{x^{\prime}}}{\|\mathbf{A}^{x^{\prime}}\|_{2}}\bigg{\|} _{2}, \tag{9}\] where \(\mathcal{I}\) denotes the indices of all the adversarial examples and the original example attention weights pairs for which we perform alignment. Finally, to incorporate the attention distributions of \(\mathbf{A}^{x}\) and \(\mathbf{A}^{x^{\prime}}\) in the objective, we add \(\mathcal{L}_{A}\) as a regularizer in the overall objective. #### 4.3.4 Overall Loss Function Putting the above Steps 1 and 2 together, the overall objective for training the proposed debiased model is: \[\mathcal{L}=\lambda_{1}\mathcal{L}_{CE}(x,y)+\lambda_{2}\mathcal{L}_{CE}(x^{ \prime},y)+\lambda_{3}\mathcal{L}_{A}, \tag{10}\] where \(\mathcal{L}_{CE}\) denotes the standard cross entropy (CE) loss; \(\lambda_{1},\lambda_{2}\), and \(\lambda_{3}\) are three weighted coefficients for controlling the three losses. 
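As a concrete illustration, the following PyTorch-style sketch combines the two cross-entropy terms of Eq. (10) with the AT variant of the alignment regularizer from Eq. (9). The way the model returns its attention maps, the helper names, and the default \(\lambda\) values are illustrative assumptions rather than the authors' implementation.

```python
import torch.nn.functional as F

def attention_transfer_loss(attn_x, attn_x_adv, eps=1e-8):
    """AT discrepancy (Eq. (9)): distance between L2-normalized attention maps."""
    loss = 0.0
    for a, a_adv in zip(attn_x, attn_x_adv):          # pairs of attention maps to align
        a = a.flatten(1)
        a_adv = a_adv.flatten(1)
        a = a / (a.norm(dim=1, keepdim=True) + eps)
        a_adv = a_adv / (a_adv.norm(dim=1, keepdim=True) + eps)
        loss = loss + 0.5 * (a - a_adv).norm(dim=1).mean()
    return loss

def dsa_loss(model, x, x_adv, y, lambdas=(1.0, 1.0, 0.1)):
    """Overall DSA objective (Eq. (10)); `model` is assumed to return (logits, attention maps)."""
    logits_x, attn_x = model(x, return_attention=True)
    logits_adv, attn_adv = model(x_adv, return_attention=True)
    l1, l2, l3 = lambdas
    return (l1 * F.cross_entropy(logits_x, y)
            + l2 * F.cross_entropy(logits_adv, y)
            + l3 * attention_transfer_loss(attn_x, attn_adv))
```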
These parameters are designed for controlling the fairness-utility trade-off. We provide further ablation study on these terms in the experiments. ## 5 Experimental Settings ### Datasets We evaluate the DSA framework on two popular CV datasets, namely, Waterbirds [49] and CelebA [31]. Waterbirds dataset contains spurious correlation between the background features \(\mathcal{S}=\{\text{water, land}\}\) and target label \(\mathcal{Y}=\{\text{waterbird, landbird}\}\). The spurious correlation is injected by pairing waterbirds with the water background and landbirds with the land background more frequently, as compared to other combinations. CelebA dataset, which has been widely used in the fairness literature, contains 200k celebrity face images with annotations for 40 binary attributes. We present the results on three settings following [16, 56], each with a corresponding binary task (\(\mathcal{Y}\)) that the model is trained to predict, and a binary sensitive attribute (\(\mathcal{S}\)) over which we wish the model to be unbiased. The three settings described as a tuple \((\mathcal{Y},\mathcal{S})\) are as follows: (Gray Hair, Gender), (Wavy Hair, Gender), and (Smiling, High Cheekbones). We provide more details of these settings in the Supplementary Materials. ### Implementation Details We train the ViT-S/16 models from scratch for each prediction task. The ViT-S/16 model consists of 196 patches (each representing a 16x16 sub-image), 1 class token patch, 12 transformer encoder layers, and 8 attention heads. We flatten and project each patch into a 64-dimensional vector and add positional embeddings. The embedded patches are fed into the ViT encoder. After the ViT encoder processes the patch embeddings, the class token patch is fed into 2 fully-connected layers (with hidden state size as 256) and a sigmoid layer to produce a single normalized output score (since we deal with binary classification). We train the ViT models using momentum Stochastic Gradient Descent (SGD) with a momentum parameter of 0.9 and an initial learning rate of 3e-2 for 20 epochs. We use a fixed batch size of 32, gradient clipping at global norm \(1\), and a cosine decay learning rate schedule with a linear warmup following [16]. We select the model with the best accuracy on the validation sets. ### Baselines We select the following debiasing algorithms as baselines for performance evaluation, for which we select the best model that yields the highest validation performance. To our knowledge, besides the proposed DSA and AM as a **home run** method, TADeT is the only third-party fairness-aware algorithm tailor-made for ViT while all the others are designed for CNN. We consider the following baselines: **Vanilla [11]:** The ViT models are only trained with CE loss for target prediction. Attention Masking **(AM)**: The self-attention mechanism is critical in ViT as it provides important weights for extracting visual features. We propose the AM method as a _home run_ that directly masks the top-\(k\) patches with highest attention scores for the bias-only model. Mitigating Bias in ViT via Target Alignment **(TADeT)**[56] uses a targeted alignment strategy for debiasing ViT that aims to identify and remove bias primarily from query matrix features. Maximum Mean Discrepancy **(MMD)**[34] calculates the mean of penultimate layer feature activation values for each sensitive attribute setting and then minimizes their \(L_{2}\) distance. 
MMD-based Fair Distillation **(MFD)**[23] adds an MMD-based regularizer that utilizes the group-indistinguishable predictive features from the teacher model while discouraging the student model from discriminating against any protected group. Domain Adversarial Neural Network **(DANN)**[14] employs a sensitive attribute adversary learned on top of the penultimate layer activation. The adversarial head consists of two linear layers in the same dimension as the class token, followed by a sigmoid function. Learning Adversarially Fair and Transferable Representation **(LAFTR)**[36] trains a model with a modified adversarial objective that attempts to meet the \(EO\) fairness criterion. This objective is implemented by minimizing the average absolute difference on each task. ## 6 Main Results and Discussion In this section, we report the results of fairness and accuracy evaluations, the ablation study, and the effects of model size and patch size. In the Supplementary Materials, many more experimental results are reported, including the impact of several tunable hyper-parameters, results with different \(D_{\star}\) in the regularizer \(\mathcal{L}_{A}\), and some qualitative evaluations. ### Fairness and Accuracy Evaluations We report the fairness and accuracy performance on the three aforementioned settings (see Section 5.1) on the CelebA dataset in Figure 3. Figure 3: Fairness and accuracy evaluation for all methods over different combinations of target (\(y\)) and sensitive (\(s\)) on CelebA dataset. For DSA, we use \(\mathcal{L}_{A}=D_{AT}\). The test accuracies of the bias-only model used in AM and DSA for predicting gender and high cheekbones are 82.62% and 80.71%, respectively. The success rates of adversarial attacks are reported in Supplementary Material. The \(\swarrow\) signs indicate the lower value of the corresponding metric is better, while \(\nearrow\) denotes the higher value is better. Best viewed in color. We make the following observations. First, DSA outperforms all the baselines, demonstrated with the largest area (enclosed by the red lines) in the radar charts, significantly improving the ViT fairness with lower EO, DP, and DBA while maintaining higher accuracy in terms of BA and ACC. Second, several baseline methods (e.g., MMD, MFD, and DANN) that have shown strong performance with CNN models do not even outperform the vanilla model on some fairness metrics (e.g., EO), particularly under the (Wavy Hair, Gender) setting. This may happen because ViT primarily learns global image features by modeling long-range dependencies using the self-attention mechanism, which is fundamentally different from convolution-based local feature learning with CNNs. As such, these baseline methods (designed for CNNs) are not transferable for bias mitigation with the ViT models. Third, we note that the _home run_ method AM also blinds the model to the sensitive attributes in the input based only on the attention weights of the bias-only model. However, several works [1, 22, 52] have questioned whether highly attentive inputs would significantly impact the model outputs. Since the self-attention mechanism involves the computation of queries, keys, and values, reducing it only to the derived attention weights (inner products of queries and keys) can be insufficient to capture the importance of the features. Hence, the _home run_ AM method fails to achieve performance comparable to the proposed DSA method. 
Similarly, we observe the same patterns on the results of Waterbirds dataset as shown in Figure 4. DSA outperforms all other baselines in terms of fairness evaluations, i.e., DP and EO, as well as accuracy performance. ### Ablating DSA The training objective of DSA contains three essential components for bias mitigation. We conduct ablation study using the (Gray Hair, Gender) setting to analyze their individual contributions and report the results in Table 1 (the other two settings are reported in the Supplementary Materials). We summarize our major findings. First, all of the components contribute towards the improved fairness performance across all three fairness metrics (i.e., EO, DP and DBA). Second, both the target (task) related CE losses in Eq.(10) are critical in preventing DSA from compromising the prediction performance (otherwise, the accuracy drops from 90.95 to 88.32 and 88.54, respectively). Third, the training objective \(\mathcal{L}_{A}\) in Eq.(10) contributes the most to the higher fairness measures, as is clearly shown by: EO (0.2934\(\rightarrow\)0.2558), DP (0.2865\(\rightarrow\)0.2337), and DBA (0.0206\(\rightarrow\)0.0031). ### Effect of ViT Model Size and Patch Size We examine the effect of ViT model size and patch size on DSA in Table 2. The ViT-B model is much larger than the ViT-S model, which has 12 self-attention heads in each block and 256 hidden state size in the two fully-connected layers. Each patch is flattened and projected into a vector of 768 dimensions. We draw two conclusions from Table 2. First, the larger ViT-B models outperform the smaller ViT-S on most of the fairness and accuracy metrics, demonstrating better feature learning capabilities with higher feature dimensions and more self-attention heads. Second, smaller patch size (16) achieves a better performance on both fairness and accuracy measurements because small patches enables extracting more fine-grained features [5]. ## 7 Conclusion In this work, we proposed a novel hierarchical fairness-aware ViT training framework named DSA for bias mitigation in both the training set and the learning algorithm. The DSA framework eliminates the spurious features through adversarial attacks on the bias-only model while retaining the informative features through an attention weights alignment regularizer. The experimental results demonstrate the effectiveness of DSA for bias mitigation without compromising prediction performance. \begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{5}{c}{\(Y\): Gray Hair \(S\): Gender} \\ \cline{2-6} & EO\(\downarrow\) & DP\(\downarrow\) & DBA\(\downarrow\) & BA\(\uparrow\) & ACC\(\uparrow\) \\ \hline \(\mathcal{L}\)(all) & **0.2558** & **0.2337** & **0.0031** & **82.92** & **90.95** \\ w/o \(\mathcal{L}_{CE}(x,y)\) & 0.2754 & 0.2541 & 0.0175 & 81.21 & 88.32 \\ w/o \(\mathcal{L}_{CE}(x^{\prime},y)\) & 0.2641 & 0.2503 & 0.0129 & 80.65 & 88.54 \\ w/o \(\mathcal{L}_{A}\) & 0.2934 & 0.2865 & 0.0206 & 81.54 & 89.91 \\ \hline \hline \end{tabular} \end{table} Table 1: Ablation study of the three training objectives. Best results are bold faced. ‘w/o’ represents without. 
\begin{table} \begin{tabular}{l l|c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{5}{c}{\(Y\): Gray Hair \(S\): Gender} \\ \cline{2-7} & EO\(\downarrow\) & DP\(\downarrow\) & DBA\(\downarrow\) & BA\(\uparrow\) & ACC\(\uparrow\) \\ \hline \multirow{2}{*}{B/16} & VA & 0.2984 & 0.2841 & 0.0142 & 81.95 & 91.05 \\ & DSA & **0.2424** & **0.2205** & 0.0081 & **83.42** & **91.24** \\ \multirow{2}{*}{S/16} & VA & 0.2763 & 0.3185 & 0.0422 & 81.84 & 90.25 \\ & DSA & 0.2558 & 0.2337 & **0.0031** & 82.92 & 90.95 \\ \hline \multirow{2}{*}{B/32} & VA & 0.2982 & 0.2976 & 0.0205 & 81.11 & 90.16 \\ & DSA & **0.2629** & **0.2520** & 0.0109 & **82.73** & **91.03** \\ \multirow{2}{*}{S/32} & VA & 0.3014 & 0.3213 & 0.0198 & 80.64 & 89.18 \\ \cline{2-7} & DSA & 0.2935 & 0.3165 & **0.0086** & 80.86 & 89.45 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluations with different ViT models (i.e., ViT-B (B) and ViT-S (S)) and patch sizes (i.e., 16 and 32). All tunable hyperparameters are set same as Figure 3. VA is short for Vanilla. Figure 4: Fairness and accuracy evaluation on Waterbirds dataset. The ACCs are shown in the legends. All tunable hyper- parameters and other settings are same as in Figure 3.
Vision Transformer (ViT) has recently attracted significant interest for solving computer vision (CV) problems, owing to its ability to extract informative features and to model long-range dependencies through the self-attention mechanism. While recent work has examined the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We find that existing fairness-aware algorithms, designed primarily for CNNs, do not perform well on ViT, which highlights the need to develop a new framework based on Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that mitigates bias by having ViT eliminate spurious features correlated with the sensitive attributes, while retaining informative features for target prediction. In particular, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches.
2309.16181
MSF-Model: Modeling Metastable Failures in Replicated Storage Systems
Metastable failure is a recent abstraction of a pattern of failures that occurs frequently in real-world distributed storage systems. In this paper, we propose a formal analysis and modeling of metastable failures in replicated storage systems. We focus on a foundational problem in distributed systems -- the problem of consensus -- to have an impact on a large class of systems. Our main contribution is the development of a queuing-based analytical model, MSF-Model, that can be used to characterize and predict metastable failures. MSF-Model integrates novel modeling concepts that allow modeling metastable failures that were intractable to model prior to our work. We also perform real experiments to reproduce and validate our model. Our real experiments show that MSF-Model predicts metastable failures with high accuracy by comparing the real experiments with the predictions from the queuing-based model.
Farzad Habibi, Tania Lorido-Botran, Ahmad Showail, Daniel C. Sturman, Faisal Nawab
2023-09-28T05:49:28
http://arxiv.org/abs/2309.16181v1
# MSF-Model: Modeling Metastable Failures in Replicated Storage Systems ###### Abstract Metastable failure is a recent abstraction of a pattern of failures that occurs frequently in real-world distributed storage systems. In this paper, we propose a formal analysis and modeling of metastable failures in replicated storage systems. We focus on a foundational problem in distributed systems--the problem of consensus--to have an impact on a large class of systems. Our main contribution is the development of a queuing-based analytical model, _MSF-Model_, that can be used to characterize and predict metastable failures. _MSF-Model_ integrates novel modeling concepts that allow modeling metastable failures that were intractable to model prior to our work. We also perform real experiments to reproduce and validate our model. Our real experiments show that _MSF-Model_ predicts metastable failures with high accuracy by comparing the real experiments with the predictions from the queuing-based model. 1UC Irvine, 2Roblox, 3Taibah University, 4University of Prince Mugrin ## 1 Introduction Previous works studied the reliability of distributed storage systems under conditions of hardware failures, software bugs, network outages, and human errors [7, 23, 24, 33, 39]. The rise of cloud services has given rise to new failure types, including security failures [25, 41], stragglers [9, 10], configuration failures [34, 43], fail-slow failures [2, 15, 26], and most recently metastable failures introduced by Bronson et al. [6] and explored further by Huang et al. [17]. **Metastable Failures.** A metastable failure [6, 17] describes a state of the system that--although functioning--has extremely low performance due to _sustained artificial overload_. We emphasize the word _artificial_ as the overload is not solely due to incoming traffic (though it might be _triggered_ initially by it); rather, the overload is caused by an artifact of the system. An example of a metastable failure is a _never-ending_ feedback loop of retrial requests (called a _retry storm_) that is triggered initially by a _temporary_ sudden sharp increase in incoming traffic. The temporary incoming traffic overload stresses the system, leading to delays in processing requests and triggering retrial requests. What makes it a metastable failure is that the retrial requests themselves cause a further increase in the load on the system; this leads to a feedback loop of retrial requests increasing the load on the system, leading to more retrial requests, and so on. At this stage, even if the incoming traffic is reduced to normal levels, the overload from retrial requests is continuous and high enough to (artificially) sustain the overload on the system. Metastable failures have been recurring in real industry scenarios as collected and reported by prior work [16, 17, 38]. The study of previous metastable failures occurring in real-world scenarios reveals that over 50% of these incidents involved retry storms--similar to the example above--as the sustaining artificial overload [17]. Due to their significance, we focus on retry storms in our discussion and analysis. However, our findings are applicable to a wider range of metastable failures, as we discuss in the rest of the paper. Another example of a metastable failure is an overload that accumulates background tasks such as garbage collection tasks or compaction tasks. 
The initial accumulation of garbage collection tasks sustains the overload because the overhead of garbage collection creates a feedback loop in which the high load triggers even more garbage collection tasks. Many existing solutions include anti-patterns and adaptive policies (e.g., exponential back-off [13], circuit breaker [32], LIFO scheduling [29]) that attempt to limit the impact of work amplification and prevent metastable failures during the monitoring phase. However, these solutions are designed for specific instances of metastable failure. There is a need for a general solution to metastable failures and a systematic approach to address them [6, 16]. **Metastability in Replicated Storage Systems.** In this work, we focus on understanding metastable failures in replicated storage systems. Given their prevalence in production environments, replicated storage systems are frequently exposed to retry storms. Past studies have identified this type of failure and analyzed related system outages [6, 17] but have fallen short of providing a formal analytical model for these failures. We build upon prior efforts by offering an analytical model of metastable failures and enhancing our understanding of them. We propose _MSF-Model_, an analytical framework to model metastable failures that integrates and extends modeling tools, including queuing theory [21], Markov chains [31], and Monte Carlo analysis [37], as well as new analytical methods that we develop such as _distance divergence_. Metastable failures are challenging to model analytically and require us to build a multi-faceted model to describe a range of behaviors and states of the system before, during, and after triggering events. For each stage, the queuing model component captures the behavior of the system in terms of the load on the system and the ensuing overload in terms of the accumulation of requests (in the queue) and the accumulation of retry requests. To model retry requests, we introduce a component in queuing models called an _orbit space_. The orbit space models the requests that are being retried. A limitation of queuing models that we overcome is that a queuing model can only analyze a single instance of the model with a specific configuration and workload. However, for metastable failures, we need to model the behavior of the system across stages with different workload characteristics (i.e., before, during, and after the triggering event). This is important to capture whether the sustained artificial overload continues after the incoming traffic and workload return to normal. To this end, we propose a new analytical method called _distance divergence_ that can model the overload of the system given a certain configuration as well as a prior state of the configuration. Distance divergence helps us understand whether a certain set of configurations and triggering event characteristics leads to a metastable failure. We perform real experiments that show and validate the accuracy of _MSF-Model_. The experimental validation is performed on a real replicated storage system that utilizes PostgreSQL and Paxos. For each validation run, we measure the expected behavior of the system using _MSF-Model_, and then perform an experiment with the same parameters. We then compare the predictions of _MSF-Model_ with the real outcomes from the experiments. We show that _MSF-Model_ closely matches the corresponding real experiments. 
The main contribution of our work is proposing a queuing-based analytical model to study and understand metastable failures. Our model proposes new concepts--such as orbit space and distance divergence--that allow us to model metastable failures that were intractable to model prior to our work. We also validate the accuracy of _MSF-Model_ with real experiments. Due to space constraints, we focus our presentation and experimental validation on replicated storage systems. However, we also provide a description of how _MSF-Model_ can be utilized and extended for various other use cases, including more complex systems with caches and heterogeneous components. The remainder of this paper is organized as follows: Section 2 provides background information on metastable failures and the analytical tools we use. Section 3 explores and reproduces various metastable failures in a consensus-based replicated storage system. In Section 4, we introduce the design of MSF-Model, which includes the queuing model, Markov chain, Monte Carlo analysis, and distance divergence. In Section 5, we present real experiments to validate MSF-Model's accuracy. Section 6 reviews relevant literature, and finally, we conclude the paper in Section 7. ## 2 Background ### Metastable Failures Metastable failures in distributed systems, formalized recently by [6], are a unique pattern of failures that occur frequently in real industry systems [17, 38]. They often emerge from optimizations and policies implemented to enhance system performance. However, under certain conditions, these changes can inadvertently trigger a negative impact. For instance, request retries, though aimed at improving reliability, have been identified as a key contributor to metastable failures [17]. **Life Cycle.** Figure 1 shows the life cycle of a metastable failure, consisting of three stages: stable, vulnerable, and metastable. Initially, the system functions normally without significant load. A change in this load can transition the system into a vulnerable state, where it remains operational and not overloaded but at risk of entering a challenging-to-recover metastable state due to a _triggering event_. The triggering event is an event that pushes the system past a certain load threshold or diminishes the system's capacity, e.g., a sudden sharp increase in incoming traffic or a failure of a cache. Even after eliminating the triggering event, the system may persist in a metastable state. This is due to a _sustaining effect_ that creates a feedback loop that sustains the overload on the system, e.g., retry requests. The metastable state is considered a failure because the system experiences an extremely low goodput even after removing the triggering event. **Metastable Failure Scenario.** Consider a database with a cache that has a 90% cache hit ratio (Figure 2). Suppose that, without the cache, the database can process a throughput of 300 requests/second. Introducing the cache allows the whole system--which comprises the database and the cache--to process 3000 requests/second. A typical deployment decision would be to maximize the throughput of the whole system, which may lead to provisioning an incoming load of around 3000 requests/second. In such a case, a crash-failure of the cache may lead to the following metastable failure. The cache fails, reducing the capacity of the system to 300 requests/second. The system--without a cache--can only handle 300 requests/second and is not able to process the continuing flow of 3000 requests/second. 
This leads to delaying and dropping requests. These delayed and dropped requests lead their application clients to send retry requests. These retry requests increase the load on the system to 6000 requests/second, which is higher than the client workload of 3000 requests/second (6000 is the sum of the incoming load of 3000 and the additional load from retry requests). Eventually, when the cache crash-failure is fixed, the capacity of the system is back to 3000 requests/s. However, at that point, the load on the system is 6000 requests/s and thus cannot be handled even with the cache that has a capacity of 3000 requests/s. This sustains the impact of the crash-failure on performance, as requests are still delayed/dropped and retry requests are still being generated at high rates--a _retry storm_. These retry requests represent the sustaining artificial workload. Metastable failures can lead to widespread outages that can disrupt services, potentially lasting from minutes to hours. Unlike logic bugs, metastable failures represent emergent behaviors, meaning they are often not detectable through standard unit or integration tests [6]. Figure 1: Life cycle of a metastable failure. Figure 2: Timeline of a metastable failure scenario. ### Queuing Theory Queuing theory is a mathematical approach to the study of waiting lines or queues [21]. It is used to predict variables like wait times and queue lengths [40]. In this section, we provide preliminaries about queuing theory that we utilize in the design of _MSF-Model_. Figure 3: An \(M/M/1/\infty\) queuing node. \(\mathbf{M}/\mathbf{M}/\mathbf{1}/\infty\)**Model.** In a queuing model, requests arrive at the queue, wait for their turn while previous requests are being processed, undergo processing for a certain duration, and then depart from the queue. There are various types of queue models depending on the distribution of workload, processing, number and size of queues, and other characteristics. A foundational model in queuing theory is the \(M/M/1/\infty\) queue (Figure 3). This model represents a system where arrivals, i.e., the incoming requests, follow a Markovian (Poisson) process denoted by the first "\(M\)". Service times, or the time taken to process requests once at the front of the queue, also follow a Markovian (exponential) distribution represented by the second "\(M\)". The model assumes a single server, represented by "1", to process these requests. Finally, it assumes an infinite buffer space for requests waiting in the queue, denoted by "\(\infty\)". This model is widely used due to its mathematical tractability and because it can effectively approximate many real-world systems. A unique feature of the Markovian process is its _memorylessness_, meaning that the system's transition from its current state to the next depends only on the current state and is not influenced by any past events. Thus, in a queuing system, each request arrives independently from others, and the time spent processing a request is unrelated to previous events. An additional characteristic of a Poisson process, known as the superposition principle [20], allows the combination of two or more independent Poisson processes to form a single, unified Poisson process. These properties make the analysis of the queuing system tractable. We refer to a queuing system as "Stationary" when its key characteristics, like the average number of waiting requests or the arrival rate, remain constant over time. 
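When such a stationary state exists (i.e., the arrival rate is below the service rate), the \(M/M/1/\infty\) queue has simple closed-form steady-state quantities. The snippet below is a generic textbook sketch of these quantities (utilization, mean number of requests in the system, and mean time in the system); it is shown only for illustration and is not part of MSF-Model.

```python
def mm1_steady_state(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1/inf queue (valid only when arrival_rate < service_rate)."""
    rho = arrival_rate / service_rate          # server utilization
    if rho >= 1.0:
        raise ValueError("The queue is unstable: the arrival rate must be below the service rate.")
    mean_in_system = rho / (1.0 - rho)                         # expected number of requests (L)
    mean_response_time = 1.0 / (service_rate - arrival_rate)   # expected time in the system (W)
    mean_waiting_time = rho / (service_rate - arrival_rate)    # expected time waiting in queue (Wq)
    return {"utilization": rho,
            "mean_in_system": mean_in_system,
            "mean_response_time": mean_response_time,
            "mean_waiting_time": mean_waiting_time}

# Example: 2500 requests/second arriving at a server that can process 3000 requests/second.
print(mm1_steady_state(2500.0, 3000.0))
```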
In simpler terms, a system reaches a stationary state when its inflow and outflow rates are in equilibrium, leading to a time-independent probability distribution of the state of the system [5]. **Markov Chain.** A Markov chain [27] is a mathematical process that is represented as a state-transition diagram, with the probability of each transition depending only on the current state and not on the path that led to it (i.e., it is memoryless). It is particularly useful in the analysis of queuing systems, as many queuing models can be described as Markov processes. A Continuous-Time Markov Chain (CTMC) [1] is a stochastic model used to describe systems transitioning between states over continuous time, governed by the Markov property. Each state transition in a CTMC is associated with a certain rate, indicating how rapidly the system is likely to transition. The time spent in one state before transitioning to the next typically follows an exponential distribution. In a queuing system, each state represents a different processing stage, i.e., how many requests are being processed/queued. For instance, one state might represent two requests being serviced and three requests waiting. A CTMC can model a queuing system's behavior, where transition rates symbolize the rates at which requests arrive (arrival rate) or get serviced (service rate). A CTMC is termed "Stationary" if transition probabilities depend only on the time span between transitions rather than absolute time. This means the process behaves the same at all times, and each state's probability distribution is measurable. **Monte Carlo Analysis.** Monte Carlo analysis offers a powerful tool to calculate the approximate probability of being in each state of a CTMC. This analysis involves generating random samples and numerically evaluating them repeatedly. It is especially useful when deriving exact state probabilities is intractable. **Retrial Queues.** Retrial requests frequently arise in many queuing systems, which has led to the development of retrial queues [36]. In these systems, if requests are not processed immediately or if they are blocked, they are placed in a virtual "waiting room", often referred to as the _orbit queue_. Requests residing in the orbit queue are retried according to the system's retrial policy. Under a classic retrial policy, the time a request spends in the orbit is randomly determined, following an exponential distribution, and each request is retried independently of others. Traditional retrial queues are limiting as they only place requests into the orbit when blocked, which does not accurately reflect the retrials of many real systems. In this paper, we extend this notion of a retrial queue to our proposed _retrial orbit_ that can more accurately model retrials. ## 3 Metastable Failures in Replication Systems In this section, we discuss various instances of metastable failures that may occur in a cluster of replicated storage systems. We study and reproduce these failures within a consensus-based replication system that runs a PostgreSQL database. The aim of this section is to provide intuition about how metastable failures occur to inform the design of _MSF-Model_. ### System Model The system consists of a replication group with \(n=3\) nodes and \(m=n=3\) data partitions. (Call the three nodes \(A\), \(B\), and \(C\) and the three partitions \(p_{a}\), \(p_{b}\), and \(p_{c}\).) 
Each partition's state is maintained via a State-Machine Replication (SMR) log with a consensus-based replication protocol [22, 28, 11]. To balance the load across the three nodes, each node is assigned to be the leader of one of the partitions. **Data Model.** Each data partition is a key-value store with a mutually exclusive set of keys. Clients issue database transactions that consist of read and write operations to be processed according to ACID and serializability guarantees [14]. We assume that workload generation is independent of the status of the system. Specifically, clients of each partition generate \(W\) transactions/second, where a transaction consists of a group of read and write operations. **System Failures.** To distinguish the different types of failures, we will use the terms _crash-failure_ and _metastable failure_ throughout the document. A crash-failure refers to a scenario where one or more machines in the system cease to function. In contrast, a metastable failure influences the entire system, as elaborated in Section 2. Up to \(f=1\) machines can crash at any point in time. Data is persisted on disk so that after a failure, the state of the node can be recovered. Communication messages can be reordered and/or indefinitely delayed--i.e., we assume an asynchronous communication model [8]. ### Metastable Failure Scenarios In this section, we outline scenarios in which a metastable failure can occur due to retry requests within a replicated storage system. **Scenario 1 (Node Failure).** Consider a crash-failure of node \(A\) during normal-case operation. In this case, transactions to partition \(p_{a}\) can no longer be processed until another node takes over as the leader of \(p_{a}\). In such a case, another node--for example \(B\)--takes over and becomes the leader of both \(p_{a}\) and \(p_{b}\). This increases the load on \(B\) as it is managing the state of both partitions. This may lead to overloading \(B\), delaying responses to transactions, and even dropping requests. There are two outcomes of the delaying and dropping of requests: (1) a buildup in the pool of requests occurs. The amount of build-up is proportional to the duration of the crash-failure and return to the normal-case operation. Also, it is proportional to the difference between the capacity of the replication system before and after the crash-failure. (2) Application clients will start issuing _retry_ requests due to the delayed responses. We define a _retry_ request as the following: if no response is received to a request within \(\tau\) time, then a _retry_ request is sent. Then, if no response is received after sending the _retry_ request within \(\tau\) time, then another _retry_ request is sent. This continues until a response is received for the corresponding transaction or the number of retrials reaches a predefined limit. After node \(A\) recovers, it reclaims the leadership of partition \(p_{a}\). After restoring the normal-case state, there is an impact from the failure in the buildup of the pool of requests as well as the _retry_ requests which place the system into a metastable state. This is because retry requests are continuously being generated even after recovery from the crash failure due to the buildup and feedback loop of retry requests that delays both current and future requests. **Scenario 2 (Load Surge).** A simpler metastable failure operation in a replicated storage system is due to a load surge. 
Consider a case of load surge due to a triggering event--for example, increased traffic during a holiday [38]. If the system is operating in a vulnerable state, this event may lead to overloading the system. This results in the issuance of _retry_ requests due to delayed responses. Even after the trigger is removed and the incoming load returns to normal, the system remains in an overloaded state due to the backlog of _retry_ requests. This feedback loop of continuous _retry_ requests persists the overload, which makes recovery difficult. The previous two examples demonstrate that retry requests play a critical role in perpetuating metastable failures (previous studies indicate that more than 50% of metastable failures in practice are due to retry storms [17].) This is because retry storms can be triggered in various unpredictable ways and combinations, making remedies and anti-patterns unable to solve the problem of retry storms in general. By focusing on retry storms, we can address and mitigate one of the common issues associated with metastable failures. In the following sections, we will focus on the "Load Surge" scenario to regenerate and investigate metastable failures in a replicated storage system. ### Formalizing Metastable Failure In this section, we provide a formalization of metastable failures that build on and extend prior work formalism [17] to introduce specific formalization for metastable failures in replicated storage systems. **The scaling parameter.** We first define the notion of a _scaling parameter_ (SP). The scaling parameter is a system configuration parameter in the system that controls the utilization of the system. Typically, such scaling parameters are configured to maximize the utilization of the system's resources. In this paper, we consider the _batch size_ and/or _incoming throughput_ as the SP. Batching is a widely-used technique to allow utilizing the resources more efficiently by grouping tasks to be processed together rather than individually. In the replication system that we consider, batching is implemented by assembling transactions into groups and subsequently committing them to the SMR log. Each batch constitutes an entry in the SMR log, which effectively reduces the communication required per transaction, resulting in improved resource utilization. Changing the SP leads to changing performance metrics of interest. The three performance metrics we consider are (1) system load--the number of transactions incoming to the system, including retry requests and client requests, (2) goodput--the number of processed transactions per second; and (3) latency--the average response time to a user request. In the "Load Surge" scenario, the system's resources are not fully utilized when the batch size is initially low. As we start increasing the batch size, the system can accommodate the additional workload. This stage is referred to as the _scaling_ phase, wherein increasing the batch size (SP) leads to a near-linear performance (goodput) improvement. However, the system reaches a vulnerable state at a certain batch size (\(sp_{1}\)). Further increasing the load on the system may cause it to enter a metastable state. Eventually, at another batch size (\(sp_{2}\)), the system starts experiencing retry requests that force it into a metastable state, leading to substantial delays or dropped requests. Even reducing the batch size to \(sp_{1}\) is not enough to recover the system requiring a costly recovery operation that significantly reduces the system load. 
**The Metastable Recovery Fallacy.** We now describe a fallacy in how systems behave after recovery2. Consider a system that is configured with batch size \(sp_{i}\) (where \(sp_{i}<sp_{2}\)) and achieves a goodput value of \(X\). Consider the case that a load is triggered to an overloaded value of \(sp_{2}\) and then, after some time, the load recovers back to \(sp_{i}\). Let's call this state the _aftermath_ state. At the aftermath state, the question is _would the behavior of the system retain the original performance pattern and values? In other words, does the aftermath state differ from a metastable state?_ Footnote 2: Here, we use the term recovery to denote the case of a system load (e.g., incoming traffic) coming back to the normal load after the trigger. The analysis and experience reported in the previous metastable failure papers [6, 17] show that the return to the same performance pattern and values is a fallacy--as we describe in earlier discussions. Rather, what happens is that the load on the system becomes much higher than expected because of retry requests, and the goodput value is \(Y\), which is much lower than \(X\). The system is in a metastable state. ### Reproducing a Metastable Failure To substantiate our prior discussion on metastable failures, this section centers around experiments conducted to reproduce the "Load Surge" scenario in a replication system. The goal is to study and showcase metastable behavior with real experiments. We begin by detailing the architecture and setup of the utilized prototype, followed by an exploration of experiments that induce a load surge metastable failure in a replication system. **Prototype.** Our prototype employs a Java-based implementation of the multi-paxos protocol [11]. Each Paxos instance is integrated with PostgreSQL for data storage. The prototype accommodates transaction batching, allowing clients to batch their requests. Partitioning is also incorporated, with each partition running a distinct Paxos instance for replication across other servers. The prototype follows the data model presented in Section 3.1. We have used an asynchronous approach for inter-node communication, implemented via the gRPC protocol [18]. **Setup.** We deployed three servers on a distributed setup using the Chameleon Cloud platform [19] utilizing its Compute Haswell nodes. One server, designated as the leader, runs the prototype to accept client requests and replicate data across the other two servers, subsequently confirming successful completion to the client. Clients operating from another node in the distributed setup send their transactions as batch requests. Each request is dispatched according to a set _interval_ time. If a response is not received within a _timeout_ period, the request is retried and immediately resent. This retry process is limited to a maximum number of attempts, after which the request is discarded. **Scenario.** The system load is incrementally increased until the system reaches a vulnerable state. A sudden surge in incoming load can push the system into an overloaded state, where retry requests could keep the system in the metastable state, resulting in a metastable failure. We follow the formalization of Section 3.3 and use batch size as the scaling parameter. We set the interval between each request to \(300ms\), meaning transactions are dispatched in batches every \(300ms\) asynchronously, irrespective of other requests. Each request has a timeout of \(600ms\) and is immediately retried upon timeout.
Each request is retried a maximum of 3 times, including the initial user attempt. We increased the batch size in a step-wise manner, beginning at \(220KB\) and peaking at \(300KB\), which pushes the system into an overloaded state. Thereafter, the batch size is reduced stepwise to a low value of \(10KB\). In this scenario, having a value of \(sp_{1}=250KB\) leaves the system in a vulnerable state, and a scaling parameter of \(sp_{2}=300KB\) pushes the system into the metastable state.
Figure 4: The impact of varying the batch size on goodput, retrials, and latency in a metastable failure scenario.
Figure 4a displays the goodput and system load in this experiment. Initially, when the batch size is low (below \(250KB\)), resources are not fully utilized and responses are timely. As we increase the batch size (SP), system goodput increases correspondingly. However, at a certain batch size (\(250KB\)), the system is vulnerable, and a load spike could transition the system into a metastable state. An increase in batch size to \(300KB\) (at time _09:00_) overloads the system, leading to a metastable state. Retry requests put even more work on the system, increasing the system load significantly. Decreasing the batch size to its previous value of \(250KB\) (at time _12:00_) does not resolve the problem, and the system remains overloaded, with a goodput of nearly 0. Here, a metastable failure has occurred, and to recover, the batch size must be significantly reduced to \(70KB\) (shown at time _30:00_) to accommodate retry requests. Figure 4b illustrates the impact of increasing the batch size on the average number of retry requests and their success rate. As indicated, during periods of system failure, retry requests could potentially saturate the system, with most requests requiring retries. These retry requests generate a feedback loop, creating a sustaining artificial load that perpetuates the metastable failure. Figure 4c shows the average latency of successful requests. Initially, with smaller batch sizes, the latency is less than \(400ms\). However, as the batch size grows, latency not only increases but also becomes more variable. When the system is in the metastable state, most requests fail, and the few that succeed experience significantly increased latency. ## 4 A Queuing Framework of Retry Storms In this section, we propose the design of _MSF-Model_, a queuing framework for requests and retry request buildup for a replicated storage system. The goal of _MSF-Model_ is to capture the behavior of the replicated storage system and build an analytical framework for analyzing metastable failures. We begin by defining the queuing model, and then describe how to build and analyze the corresponding Markov process. The Markov process allows us to compute the probability of different states of the system based on the input parameters (such as the workload and processing rate). Then, we utilize the Markov process to build a framework for modeling metastable failures, called _MSF-Model_. This metastable failure framework can be used to answer whether a certain configuration/workload of the system would lead to a vulnerable and/or metastable state. The framework works by deriving the workload buildup that can be incurred from a Markov process configuration (that corresponds to a triggered event in a vulnerable state). It then determines whether this buildup would lead to sustaining the failure (after returning to the normal configuration).
Finally, we conclude the section with a discussion of how _MSF-Model_ can be extended for various use cases. Figure 5 provides a visual representation of the components of our proposed queuing framework. ### Queuing Model **Definition.** Figure 6 presents an overview of the proposed queuing model. This model consists of an \(M/M/1/\infty\) primary queue and a secondary orbit space to model retry requests. Requests in this model follow a Poisson process with an arrival rate of \(\lambda\) (\(L_{0}\) in Figure 6), while the service time for each request is exponentially distributed with a processing rate of \(\mu\) (\(L_{1}\)). The \(M/M/1/\infty\) primary queue models the replicated storage introduced in Section 3.1, which sequentially receives and processes requests. We observe that modeling metastable failures requires some "memory" since they are driven by a triggering event during a vulnerable state, followed by a sustaining effect. This "memory" characteristic needs to be captured by the queuing model. Given that the primary queue, due to its memoryless property, does not directly model a timeout, an orbit space is introduced. This orbit space serves as the "memory" component, enabling the capture of timeouts and retries that contribute to metastable failures. The orbit space can be thought of as a virtual waiting area where requests reside after failing to receive service during their initial attempt within the configured timeout and are waiting to retry. The probability of retrying requests (\(P_{\mathit{retry}}\)) is determined by the wait time of the request exceeding a specific timeout (\(P_{\mathit{retry}}=P(\text{Wait Time}>\tau)\)). Based on this probability, requests enter the primary queue (\(L_{2}\)) and are potentially added to the orbit space (\(L_{3}\)). Here, the orbit space captures the requests that have timed out. Requests in the orbit space reattempt service following a Poisson process with a rate of \(\mu_{0}=\frac{1}{\tau}\). This is based on the assumption that customers will retry their requests after the configured timeout. Therefore, the overall incoming rate of requests from the orbit space is equivalent to \(\lambda_{retry}=N\cdot\mu_{0}\), with \(N\) representing the number of requests in the orbit space (\(L_{4}\)). This is because each request in the orbit space is retried independently, regardless of the status of other requests in the orbit space. The model also accounts for the potential of re-retrying requests in the orbit space. Depending on the probability of retry, each of the requests in the orbit space will be added to the primary queue, and potentially another retry request will be added to the orbit space (\(L_{5}\)). **Probability of Retry.** The probability of a retry is determined by the probability that a request's wait time exceeds the timeout duration. This probability is influenced by factors such as queue length, request processing rate, and timeout duration. The retry probability can be derived from the following: \[P_{\text{retry}}=P(\text{Wait Time}>\tau) \tag{1}\] Here, \(\tau\) denotes the timeout configured based on the system. The wait time is a function of the number of requests in the primary queue and the rate at which they are processed. We can rewrite this equation to account for these factors: \[P_{\text{retry}}=P(N(\tau)\leq L) \tag{2}\] Here, \(L\) represents the length of the primary queue, and \(N(\tau)\) is the number of packets processed by the time \(\tau\).
As the processing of each request follows an exponential distribution, this probability follows a Poisson process [20], which is calculated as follows: \[\begin{split} P_{\text{retry}}&=P(N(\tau)\leq L)\\ &=P(N(\tau)=L)+P(N(\tau)=L-1)+\cdots+P(N(\tau)=0)\\ &=\frac{(\mu\tau)^{L}}{L!}e^{-\mu\tau}+\frac{(\mu\tau)^{L-1}}{(L-1)!}e^{-\mu\tau}+\cdots+e^{-\mu\tau}\end{split} \tag{3}\] \(P_{\text{retry}}\) is a function of the queue length, processing rate, and timeout (\(P_{\text{retry}}=F(\tau,L,\mu)\)). In the remainder of this paper, we will use the notation \(P_{i}\) to denote the retry probability with \(i\) requests in the queue. For example, \(P_{1}\) is equivalent to \(P_{\text{retry}}\) when there is only one request waiting to be serviced in the primary queue. Figure 7 shows a plot of an example calculation of retry probabilities based on the length of the primary queue, assuming \(\mu=15\) and \(\tau=1s\). It is important to note that a higher \(P_{\text{retry}}\) adds more packets to the orbit space and consequently increases the primary queue length after a retry, making the retry probability even higher. This feedback loop models the sustaining effect in a metastable failure following a triggering event. The figure shows that a metastable failure can easily occur if a trigger rapidly increases the primary queue length. In this experiment, once the queue length reaches 24, the probability of a retry becomes one, meaning all subsequent requests will be retried. ### Markov Chain **Definition.** We propose a two-dimensional continuous-time Markov chain (CTMC) [27, 31] model to analyze the performance of our proposed queuing model. This Markov chain is composed of states of the form \((i,j)\), where \(i\) denotes the number of requests in the orbit space, and \(j\) represents the number of requests in the primary queue waiting to be serviced. To visualize the process, a transition diagram--shown in Figure 8--is employed, which depicts the possible state transitions and their corresponding rates. In the diagram, each state \((i,j)\) is represented by a node, and arrows between nodes represent the transition rates. Each row in the diagram corresponds to the number of requests in the orbit space, and moving to lower levels indicates an increase in requests within the orbit space. Each column shows the primary queue request count, with columns to the right representing an increased number of requests in the primary queue. The system has three types of transitions: 1. **Arrival transitions**: A new request arrives at the system and will be added to the primary queue. This transition occurs from state \((i,j)\) to state \((i,j+1)\) at a rate of \(\lambda\). If the request is retried, it will also be added to the orbit space, transitioning from state \((i,j)\) to state \((i+1,j+1)\) at a rate of \(\lambda P_{j}\). 2. **Departure transition**: A request departs from the system, creating room for another request in the primary queue. This transition occurs from the state \((i,j)\) to state \((i,j-1)\) at a rate of \(\mu\), where \(\mu\) is the service rate. 3. **Retrial transitions**: One of the requests in the orbit space gets retried, and will be added to the primary queue. A transition occurs from state \((i,j)\) to \((i-1,j+1)\) at a rate \(N\mu_{0}\), moving the request from the orbit space to the primary queue. The request may also be retried and added back to the orbit space, transitioning from \((i,j)\) to \((i,j+1)\) at a rate of \(N\mu_{0}P_{j}\).
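To make these definitions concrete, the retry probability of Eq. (3) and the three transition types can be written down directly in code. The Python sketch below is illustrative only: the function names are our own, and the rates follow the transition descriptions above literally, summing rates that lead to the same target state (the superposition argument discussed next).

```python
import math

def p_retry(L, mu, tau):
    """Eq. (3): P(N(tau) <= L), i.e. the probability that at most L requests
    are processed within the timeout tau by a server with rate mu."""
    term, total = math.exp(-mu * tau), 0.0
    for k in range(L + 1):
        total += term
        term *= (mu * tau) / (k + 1)      # next Poisson term
    return min(total, 1.0)

def outgoing_rates(i, j, lam, mu, mu0, tau):
    """Outgoing CTMC rates of state (i, j): i requests in the orbit space,
    j requests waiting in the primary queue (hypothetical helper)."""
    pj = p_retry(j, mu, tau)
    rates = {}
    def add(state, rate):
        if rate > 0:
            rates[state] = rates.get(state, 0.0) + rate   # superpose equal targets
    # 1. Arrival transitions
    add((i, j + 1), lam)                    # a new request joins the primary queue
    add((i + 1, j + 1), lam * pj)           # ...and, if it times out, also enters the orbit
    # 2. Departure transition
    if j > 0:
        add((i, j - 1), mu)                 # a request completes service
    # 3. Retrial transitions (N = i requests in the orbit space)
    if i > 0:
        add((i - 1, j + 1), i * mu0)        # an orbit request re-enters the primary queue
        add((i, j + 1), i * mu0 * pj)       # ...and may time out again, staying in orbit
    return rates
```

With \(\mu=15\) and \(\tau=1s\), `p_retry` approaches one around a queue length of 24, consistent with the behaviour reported for Figure 7.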
The Poisson process superposition principle [20] allows us to consider transitions with the same state change as a single transition at a rate equal to the sum of all transition rates. In the following sections, we use this two-dimensional CTMC model for a deeper understanding of the underlying dynamics of the system.
Figure 8: Proposed queuing model’s transition diagram
The key property of a stable Markov chain is its long-term behavior, which is described by the unique stationary distribution. The stationary distribution is a probability distribution over the states of the Markov chain that remains unchanged as the process evolves over time. When a Markov chain is stable, it is guaranteed to converge to this stationary distribution, regardless of the initial state. In other words, after a sufficiently large number of transitions, the probabilities of being in each state of the Markov chain will approach the values given by the stationary distribution. In the following, we analyze the Markov chain to derive the stationary distribution that allows us to have a better understanding of the queuing model. **Monte Carlo Analysis.** In this section, we discuss the Monte Carlo simulation [37] method for estimating the stationary probabilities of the continuous-time Markov chain (CTMC) in the context of our proposed queuing model. Given our two-dimensional CTMC model, we can use the following steps to conduct a Monte Carlo analysis: 1. **Initialize**: Select an initial state \((i_{0},j_{0})\), where \(i_{0}\) and \(j_{0}\) are the initial numbers of requests in the orbit space and primary queue, respectively. Set a counter for each state to zero, and initialize the simulation time \(t\) to zero. 2. **Generate transition times**: For each possible transition from the current state \((i,j)\), draw a random number \(r\) from the uniform distribution in the range \((0,1)\) and compute the corresponding exponential transition time \(t_{i,j}=-\frac{1}{\text{rate}_{i,j}}\log(1-r)\), where \(\text{rate}_{i,j}\) is the transition rate for the specific transition as defined in the previous section. Keep track of the minimum transition time \(t_{min}\) and the associated transition. 3. **Update the state**: Perform the transition associated with the minimum transition time \(t_{min}\), updating the state \((i,j)\) accordingly. Increment the counter for the new state and add the transition time \(t_{min}\) to the simulation time \(t\). 4. **Check stopping criteria**: Evaluate the stopping criteria, which can be based on a predefined number of iterations, a fixed simulation time, or the convergence of the estimated probabilities. If the stopping criteria are not satisfied, return to step 2. Otherwise, proceed to the next step. Each of these stopping criteria is configurable based on the desired accuracy. In our experiments, we used a fixed simulation time. 5. **Compute stationary probabilities**: Divide the counter for each state by the total simulation time \(t\) to obtain the estimated stationary probability for each state \((i,j)\). Normalize the probabilities so that they sum up to 1. By following these steps, the Monte Carlo simulation approximates the stationary probabilities of the CTMC, providing insights into the long-term behavior of the proposed queuing model. The estimated stationary probabilities can then be used to analyze and optimize the performance of the system, offering a practical approach to understanding the dynamics of our two-dimensional CTMC model.
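As one possible concrete rendering of these steps, the sketch below simulates the CTMC with Python's standard library, reusing the hypothetical `outgoing_rates` helper from above. It deviates from the step list in one detail: instead of a simple visit counter, it accumulates the time spent in each state before normalising, since that time-weighted share is what estimates a CTMC's stationary distribution.

```python
import random

def simulate_ctmc(start, lam, mu, mu0, tau, sim_time):
    """Monte Carlo estimate of the stationary distribution of the
    two-dimensional CTMC (illustrative sketch, not the authors' code)."""
    state, t = start, 0.0
    time_in_state = {}                                    # step 1: initialise counters
    while t < sim_time:                                   # step 4: fixed simulation time
        rates = outgoing_rates(*state, lam, mu, mu0, tau)
        # step 2: draw an exponential time for every enabled transition
        samples = {nxt: random.expovariate(r) for nxt, r in rates.items()}
        nxt, dt = min(samples.items(), key=lambda kv: kv[1])
        # step 3: the holding time dt is spent in the current state, then we jump
        time_in_state[state] = time_in_state.get(state, 0.0) + dt
        state, t = nxt, t + dt
    total = sum(time_in_state.values())                   # step 5: normalise
    return {s: v / total for s, v in time_in_state.items()}
```

For instance, `simulate_ctmc((0, 0), lam=4, mu=15, mu0=1, tau=1, sim_time=1000)` mirrors the stationary configuration used in the timeline experiments of the next subsection.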
### Distance Divergence In this section, we propose the notion of _distance divergence_, which aims to measure the stability of the system. At a high level, a non-stationary system is one where the queues tend to grow and not stabilize within certain sizes. The distance divergence is a measure of this high-level behavior. In a stationary system, there exists a time-independent probability distribution for the system's state space. However, in a non-stationary system, such a distribution doesn't exist. **Distance Metric.** We propose a metric, called the _distance metric_, derived from the probability distribution of the system's state space: it converges to a constant value for a stationary system and keeps increasing for a non-stationary system. This metric is calculated by multiplying the distance of each state from the \((0,0)\) state with the probability of being in that state. Therefore, states further from the \((0,0)\) state, indicating more requests in the primary queue and orbit space, carry more weight and contribute more significantly to the distance metric. \[\text{Distance Metric}=\sum_{(i,j)}\sqrt{i^{2}+j^{2}}\times P_{ij} \tag{4}\] Here, \(P_{ij}\) denotes the stationary probability of state \((i,j)\), and the sum runs over all states. Stationary probabilities can be computed using the Monte Carlo simulation introduced in the previous section. In a stationary system, after several iterations of the simulation, the _distance metric_ converges to a specific value due to the convergence of probabilities. Conversely, in an unstable system, this metric continuously increases as requests accumulate in the primary queue and orbit space. Figure 9 represents the distance metric value as we increase the arrival rate, \(\lambda\). In this experiment, we maintain a constant processing rate, \(\mu\), at 15. Each request has a timeout set at 1 second, which makes the retry rate, \(\mu_{0}\), equivalent to 1. Figure 9 illustrates that as \(\lambda\) increases, the probability of having requests in the primary queue and orbit space also increases. This behavior is attributed to the rapid growth of \(P_{retry}\) as the length of the primary queue expands. Beyond a certain threshold of \(\lambda\), the system becomes non-stationary, causing the distance metric to fail to converge to a specific value and consequently become an exceedingly large number. This experiment shows the difference in the distance metric between a stationary and non-stationary system.
Figure 9: Distance Metric as a function of \(\lambda\) (\(\log_{10}\) scale)
Figure 11 illustrates the variation in the distance metric as both \(\lambda\) and \(\mu\) parameters are changed. The proximity of the distance metric value to zero represents a stationary system wherein the probability of being in each state can be calculated. This experiment highlights the existence of a boundary beyond which the system ceases to be stationary (colorful blocks). As the processing rate increases, the incoming load that the system can withstand also grows until a certain point at which the system becomes unstable and transitions into a non-stationary system. A metastable failure timeline can be simulated using our proposed queuing model in conjunction with the distance metric. Figures 12a and 12b represent timelines for two distinct scenarios. In each instance, simulations are executed over 1000-second intervals with a constant processing rate of \(\mu=15\) and a retry rate of \(\mu_{0}=1\).
In these experiments, we reproduce a load surge scenario, as detailed in Section 3.2, by adjusting the arrival rate as a scaling parameter. Our objective is to observe how the queuing model represents the system's behavior. In Figure 12a, a load spike triggering event elevates the incoming rate as the scaling parameter from \(\lambda=4\) (\(sp_{1}\)) to \(\lambda=5\) (\(sp_{2}\)) for a duration of 100 intervals. Upon recovery from the triggering event, the system reverts to its initial state at \(\lambda=4\). This experiment shows that the triggering event does not induce a metastable state in the system, and there will not be a sustaining effect. The system remains stationary both before and during the triggering event. After recovery from the triggering event, the system resumes its normal operations, and the distance metric reverts to its former value. In contrast, Figure 12b represents a metastable failure. In this scenario, a load spike elevates \(\lambda\) from \(4\) (\(sp_{1}\)) to \(6\) (\(sp_{2}\)) for 100 intervals, while other configurations remain the same as in Figure 12a. This experiment reveals that the load spike trigger propels the system into a non-stationary state, leading to a backlog of requests in both the primary queue and orbit space. Following recovery from the load spike, the abundance of requests in the primary queue and orbit space results in a retry probability of one, causing all packets to be retried and acting as a sustaining effect. Consequently, the system persists in a metastable state. To recover from this metastable failure, it's necessary to either significantly raise the processing rate or discard requests from the queue and orbit space. ### MSF-Model: Metastable Failure Model The primary application of the proposed queuing model and its Markov chains is to enable modeling a metastable failure. In this section, we provide a model of metastable failure, _MSF-Model_, that can be used to derive whether a metastable failure will happen given two sets of input parameters: (1) the configuration of the system in the normal-case scenario, and (2) the configuration of the system during the triggering event and the duration of the triggering event. The configuration of both states includes the arrival, processing, and retrial rates, and the size of the primary queue and orbit space. Consider a timeline for a replicated storage system. Initially, the system processes requests at a rate of \(\mu\) and receives requests at a rate of \(\lambda\) (\(sp_{1}\)). A triggering event may either decrease the system's capacity to \(\mu^{\prime}\) or increase the load on the system to \(\lambda^{\prime}\) for a certain duration of \(\Delta_{\text{trigger}}\) (\(sp_{2}\)). Both of these events can also occur simultaneously. After this period, the system reverts to its original configuration with the initial parameters. If the triggering event pushes the system to a metastable state, the system cannot recover and return to normal conditions due to sustaining effects (e.g., retrial requests). There is also a possibility of not entering the metastable state if the trigger's intensity and/or duration are insufficient. A Monte Carlo simulation can be formalized by its initial state and input configuration. The simulation can be either stationary, providing the probabilities of being in each state, or non-stationary, indicating that the system is unstable.
\[M((i,j),\mu,\lambda,\mu_{0},\tau)=\begin{cases}\{P_{(i,j)}:i,j\in\mathbb{N}\}&\text{stationary}\\ \varnothing&\text{non-stationary}\end{cases} \tag{5}\] Here, \(M\) represents a simulation function with the system's configuration as input parameters. The first parameter (\((i,j)\)) indicates the initial state of the Markov chain, followed by the processing rate (\(\mu\)), arrival rate (\(\lambda\)), retrial rate (\(\mu_{0}\)), and timeout (\(\tau\)), respectively. If the simulation is stationary, each state has its own probability, and the distance metric converges to a value; in this case, the function's result is the probability of being in each state. A non-stationary simulation implies that starting from the input state with the specified configuration leads to a continuously growing number of accumulated requests in both the primary queue and orbit space, making it impossible to calculate the probability of being in each state. This simulation accepts an arbitrary parameter, \(\Delta\), to set the simulation time, bypassing the need to check for stopping criteria. A metastable state is a situation where the system enters a bad state, and this state persists due to a sustaining effect. In the context of a queuing model, this occurs when the queuing system becomes non-stationary. A non-stationary system is unstable, and the existing requests in the orbit space and primary queue amplify the retry probability. This creates an artificial load that keeps the system in its non-stationary state. Therefore, we define a non-stationary queuing system as a system experiencing a metastable failure (the system is in a metastable state). For a Markov chain, we define non-stationary states as \(NS\). Initiating the simulation from these \(NS\) states results in a non-stationary simulation. \[NS(\mu,\lambda,\mu_{0},\tau)=\{(i,j):M\left((i,j),\mu,\lambda,\mu_{0},\tau\right)=\varnothing,\;i,j\in\mathbb{N}\} \tag{6}\] For a given input configuration, \(NS\) is computed by collecting all states such that initiating a Monte Carlo simulation from these states does not lead to a convergence of probabilities for each state. Given that each state is either stationary or non-stationary, we can establish that \(S=\overline{NS}\), where \(S\) represents all the stationary states. The timeline of a metastable failure can be simulated. Initially, the system is stable given the initial configuration. However, a triggering event could transition the system into a metastable state, which can be represented as a state \((i_{MS},j_{MS})\) in the Markov chain. This state is non-stationary, and beginning from this state, even with the initial configuration, results in a metastable failure. Therefore, if the triggering event transitions the system into an \(NS\) state, initiating a simulation from that state with the original configuration results in a non-stationary simulation, indicative of a metastable failure. \[P_{\text{Metastable Failure}}=P(\text{State}\in NS(\mu,\lambda,\mu_{0},\tau)\,|\,\mu^{\prime},\lambda^{\prime},\Delta_{trigger}) \tag{7}\] Here, the probability of a metastable failure is equivalent to the probability of being in any of the non-stationary states, given the conditions of the triggering event.
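Eqs. (4) and (5) suggest a simple computational test for whether a configuration is stationary. The sketch below is one heuristic way to realise the simulation function \(M\): it compares the distance metric of a shorter and a longer run and treats sustained growth as non-stationarity. The function names, the run-doubling test, and the thresholds are our own illustrative choices rather than anything prescribed by the model.

```python
import math

def distance_metric(probs):
    """Eq. (4): expected Euclidean distance of the state (i, j) from (0, 0)."""
    return sum(math.hypot(i, j) * p for (i, j), p in probs.items())

def M(start, mu, lam, mu0, tau, sim_time=1000.0, growth_tol=1.25):
    """Heuristic approximation of Eq. (5): return the estimated state
    probabilities if the distance metric settles, or None (standing in for
    the empty result) if it keeps growing, i.e. the run looks non-stationary."""
    short_run = simulate_ctmc(start, lam, mu, mu0, tau, sim_time)
    long_run = simulate_ctmc(start, lam, mu, mu0, tau, 2 * sim_time)
    m_short, m_long = distance_metric(short_run), distance_metric(long_run)
    if m_long > growth_tol * m_short and m_long - m_short > 1.0:
        return None          # the distance metric diverges with longer runs
    return long_run
```

Other convergence tests (for example, tracking the metric over fixed-length windows of a single long run) would serve the same purpose; the essential point is that divergence of the distance metric is read as non-stationarity.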
```
 1: procedure MSF-Model(λ, μ, μ0, τ, λ′, μ′, Δ_trigger)
 2:     Stationary States ← {}
 3:     i ← 0                         /* number of requests in orbit space */
 4:     OuterBoundary ← false
 5:     while not OuterBoundary do
 6:         j ← 0                     /* number of requests in primary queue */
 7:         Boundary ← false
 8:         while not Boundary do
 9:             Simulation ← M((i, j), μ, λ, μ0, τ)
10:             if Simulation is not ∅ then
11:                 Stationary States ← Stationary States ∪ {(i, j)}
12:                 j ← j + 1
13:             else
14:                 Boundary ← true
15:                 if j is 0 then
16:                     OuterBoundary ← true
17:         i ← i + 1
18:     Trigger Probabilities ← M((0, 0), μ′, λ′, μ0, τ, Δ_trigger)
19:     if Trigger Probabilities is ∅ then
20:         return                    /* the triggering event puts the system into a non-stationary state and a MS failure will happen */
21:     else
22:         P_MS ← 0
23:         for each State in Trigger Probabilities with probability > 0 do
24:             if State not in Stationary States then
25:                 P_MS ← P_MS + State.Probability
26:         return P_MS
```
**Algorithm 1** Calculate Metastable Failure's Probability

Algorithm 1 represents the algorithmic method of _MSF-Model_ to determine the probability of a metastable failure. The first segment of this algorithm (lines 2-17) identifies all states from which initiating the Monte Carlo simulation with the original configuration does not result in a non-stationary simulation. These states are termed stationary states. They are found by iterating over the state space and performing the Monte Carlo simulation for each state. This method identifies the boundary between non-stationary and stationary states and gathers the stationary states for the normal-case setup. If the triggering event propels the system into a non-stationary state, the system remains in a metastable state even after the triggering event is resolved and the initial configuration is restored. The second segment of the algorithm (lines 18-26) calculates the probability of transitioning to non-stationary states when a triggering event occurs, which is equivalent to the probability of remaining in a metastable state following the triggering event. This is determined by simulating the triggering event and then adding up the probabilities of transitioning into a non-stationary state. Algorithm 1 is intended for use as a preprocessing solution for various configurations that may arise during system operation. Monitoring systems can employ periodic executions of this algorithm, using the measured metrics of the network and machines as configuration parameters, to anticipate an incoming metastable failure. The outcome of this prediction can serve as a systematic signal to either throttle the incoming load or augment the system's capacity. **Computational Complexity.** The time complexity of Algorithm 1 is \(O\big((|\overline{NS(\mu,\lambda,\mu_{0},\tau)}|+1)\times\Delta(M)\big)\), where \(|\overline{NS(\mu,\lambda,\mu_{0},\tau)}|\) is the number of stationary states for a system with the initial configuration, and \(\Delta(M)\) represents the simulation time. While the number of stationary states is contingent upon the configuration, it remains finite since only a specific set of states are stationary. For example, in our simulations with a configuration of \(\mu=40,\lambda=30,\mu_{0}=1,\tau=1\), there are approximately 1100 stationary states. The duration of the simulation is adjustable and can be determined based on the desired level of accuracy.
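For completeness, Algorithm 1 can be driven directly by the hypothetical helpers sketched above (`M` and `simulate_ctmc`). The fragment below restricts the state space to a finite window; the window bounds, parameter names, and the example values in the comment are our own assumptions and only illustrate how the pieces fit together.

```python
def metastable_failure_probability(lam, mu, mu0, tau, lam_t, mu_t, delta_trigger,
                                   max_i=50, max_j=200):
    """Sketch of Algorithm 1 over a finite window of states (illustrative only)."""
    # First segment: states that remain stationary under the normal configuration.
    stationary = {(i, j) for i in range(max_i) for j in range(max_j)
                  if M((i, j), mu, lam, mu0, tau) is not None}
    # Second segment: simulate the triggering configuration starting from (0, 0).
    trigger_probs = M((0, 0), mu_t, lam_t, mu0, tau, sim_time=delta_trigger)
    if trigger_probs is None:
        return 1.0   # the trigger itself pushes the system into a non-stationary state
    # Probability mass that the trigger leaves the system outside the stationary set.
    return sum(p for state, p in trigger_probs.items() if state not in stationary)

# Example call with values loosely following the validation setup in Section 5
# (assumed units of requests/second and seconds):
# p = metastable_failure_probability(lam=30, mu=40, mu0=1, tau=1,
#                                    lam_t=36, mu_t=40, delta_trigger=60)
```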
### Practical Considerations The primary focus of this part is to discuss the applicability and generalizability of _MSF-Model_ in various replication system configurations and to examine its capability in modeling metastable failures in complex systems. Figure 13 represents different scenarios the model can capture. **Single Node without Replication.** In this scenario, we treat the entire node as a server within our queuing model, with customers sending requests and retrials directly to the server. The model is applicable in this context, as the single node processes requests at a rate of \(\mu\). Metastable failures may be triggered by an increase in incoming load (i.e., a higher \(\lambda\)) or a decline in system capacity (i.e., a lower \(\mu\)). **Standalone Node with Write-Through Cache.** The presence of a cache between the single node and customers does not affect the applicability of _MSF-Model_. We can assume that the system processes requests at a dynamic rate of \(\mu\), which may change upon cache failure. Consequently, metastable failures, in this case, may be triggered by alterations in system capacity. The entire system can be treated as a black box with a processing rate of \(\mu\). **Multi-Node Replication System.** Replication across multiple nodes increases the overall processing time due to the time required for the replication process. This results in an increased processing rate for the entire system. _MSF-Model_ can still treat the entire system as if it is a single server. In the event of a trigger, such as a node crash-failure, the processing time and incoming load may change. The change in processing time occurs due to an altered replication process, while a node failure may cause a change in the incoming load as the affected node's customer requests are redirected to other nodes. Nevertheless, _MSF-Model_ remains applicable, as the important factors are the processing rate and incoming load. **Complex systems.** An instance of _MSF-Model_ treats the entire system as a single server with one queue. However, certain real-world systems may be more complex, possibly employing multiple servers, parallel processing, or even a network of queues where tasks transition between different stages of processing. Some systems might also prioritize certain tasks over others or employ unique handling strategies. However, despite these potential complexities, many real-world systems can be accurately approximated using this single server and single queue model. Particularly, systems where requests are processed sequentially one after another or where the effects of multiple queues or servers are negligible can be well-represented by _MSF-Model_. In this case, _MSF-Model_ can function as a foundational block that can be layered to generalize more complex systems. For instance, let's take a complex system represented by a Directed Acyclic Graph (DAG) where multiple subsystems are interconnected and collaboratively working. Each of these subsystems could be represented using our queuing framework as a fundamental unit. Therefore, the entire system can be modeled using multiple instances of these units. Moreover, _MSF-Model_ provides a foundation for tackling more complex systems. As needed, these complex systems can gradually introduce additional components to the model to better represent their specific situations. 
A similar approach to our modeling can be adopted in these systems, making our model a beneficial starting point for understanding and analyzing a variety of metastable failure scenarios. In addition to the mentioned scenarios, the proposed framework for analyzing metastable failures--which incorporates Modeling, Markov Chain construction, and Metastable Failure Analysis--offers an efficient approach that can be readily applied to heterogeneous systems. This framework leverages the mathematical foundations of queuing theory to provide a robust analysis of this particular type of failure. ## 5 Validation of MSF-Model To validate our proposed queuing framework, we measure the metastable failure probabilities for various load surge scenarios and compare them with the results from the prototype described in Section 3.4. As illustrated in Figure 14, these probabilities were calculated for different load surge scenarios, with each triggering event defined as an increase in incoming load (\(\lambda\)). Our prototype experiments are based on the system model described in Section 3.1, which consists of a replicated storage (PostgreSQL) across three different physical nodes. Consider the first scenario where the batch size is \(3KB\) (Figure 14a). In this scenario, we assume an average processing rate of 40 requests per second for the prototype, with a maximum batch size of \(3KB\). This represents the highest number of \(3KB\)-size requests that the prototype can handle. Each experiment simulates a load surge scenario, with \(3KB\) batch-sized requests sent to the storage system following an exponential distribution to emulate customer arrival patterns. Each request has a timeout of 1 second and is immediately retried upon timeout, with a maximum retry limit of three times. In both simulations and prototype experiments, the system's initial incoming load in a stable state (\(\lambda\)) is set to 30 for three minutes. Then a triggering event increases the incoming load to a value of \(\lambda^{\prime}\) for one minute (\(\lambda^{\prime}\) is varied on the x-axis). Following the event, the incoming load returns to its normal state, which is 30, for another three minutes. (Based on the terminology in Section 3.3, \(\lambda\) and \(\lambda^{\prime}\) correspond to \(sp_{1}\) and \(sp_{2}\), respectively.)
Figure 13: Different cases of a replicated storage system when a metastable failure can happen
Figure 14a displays the probability of metastable failure for these scenarios using Algorithm 1, compared to the results from the real prototype experiments across three physical machines. According to the queuing model results, any load surge triggering event with a rate higher than 35.5 has a non-zero probability of causing a metastable failure. On the other hand, prototype experiments demonstrate that a triggering event with a rate of 35.7 or higher can lead to a metastable failure, aligning with the probabilities predicted by our model. Figures 14b and 14c present similar experiments for batch sizes of \(50KB\) and \(100KB\), respectively. These experiments differ in the prototype's processing rate as it handles requests of different sizes at different rates. Despite this, the figures show similar results, showcasing a similar occurrence of metastable failures between the prototype and queuing model simulations. ## 6 Related Works **Metastable Failures.** Many studies have investigated the causes of distributed systems failures [2, 9, 10, 15, 34, 43].
A recent study highlighted the possibility of failures arising from the interaction between different (sub-)systems, termed Cross-System Interaction (CSI) failure [42]. The authors argue that the reliability of distributed systems is influenced not just by the reliability of individual systems but also by their interconnections. If a metastable failure occurs, its sustaining effect can spread, potentially leading to a CSI failure. The concept of metastable failure in distributed systems was recently introduced [6]. Huang et. al. [17] expand on the concept of metastable failures by suggesting that systems may become vulnerable not just due to increased load but also due to capacity degradation, introducing varying degrees of vulnerability. Previous works have indeed analyzed metastable failures, but they have not provided a formal methodology for modeling and early detection of such failures. Since metastable failures are a recent concept, specific studies on this type of failure are limited. However, various research has investigated different failure types that could trigger a metastable failure or have a relation to it. Distributed consensus algorithms have addressed crash-failures and aim to enhance system reliability in response to such failures [11, 33]. However, a crash-failure can potentially trigger a metastable failure even after the recovery from the crash-failure. Past research into failures due to configuration changes [34, 43] and fail-slow hardware failures [2, 15] demonstrates that these failure types could trigger a metastable failure. **Retrial Queues.** Extensive research has been conducted on retrial queues over the past years [36]. In prior studies [3, 4], a retrial queuing network with a constant retrial rate was employed to model a TCP network. Their model retries a request whenever the server queue is full, leading to blocked customers in the system. Essentially, they consider a network of one or two tandem \(M/M/1/\infty\) queues with blocking, and an \(M/M/1/\infty\) retrial (orbit) queue. However, to the best of our knowledge, no work has been done to model scenarios where requests are retried after a specific timeout, regardless of whether they are blocked or not. Our queuing model does capture this timeout retrial nature that is not captured by earlier work in queuing models. In [30], a numerical approach for analyzing retrial queue models is provided. Their numerical method has become the standard for most retrial queuing models to mathematically calculate stationary conditions and their probabilities. The use of numerical analysis may be impractical due to its inherent complexity and the necessary simplifications required in model development. Many works in queuing theory utilized the Monte Carlo method for model analysis [12, 35]. ## 7 Conclusion Our work consists of three main contributions to study and analyze metastable failures in replicated storage systems. Firstly, we examined various scenarios in which a metastable failure could occur in a replicated storage system driven by a consensus algorithm. Secondly, we reproduced a series of case studies of metastable failures within such a system for further analysis. Finally, we proposed, _MSF-Model_, a queuing theory model of metastable failures that allows us to understand whether a system configuration would lead to metastable failures. Experimental validation shows that the theoretical model closely matches the real behavior of metastable failures. 
Looking forward, _MSF-Model_ can be generalized to accommodate more complex scenarios, such as systems comprising multiple storage systems. Another direction for future work is the development of mitigation strategies that adjust the rate of requests accepted by the system. Moreover, creating prevention strategies, such as auto-scaling based on the early detection of metastable failures, is a promising next step that can utilize the analytical models we propose.
Figure 14: Queuing model validation with the implemented prototype
Metastable failures are a recent concept that abstracts a pattern of failures occurring frequently in real-world distributed storage systems. In this paper, we propose a formal analysis and modeling of metastable failures in replicated storage systems. We focus on the consensus problem, a foundational problem in distributed systems, with the aim of addressing a broad class of systems. Our main contribution is the development of MSF-Model, a queuing-based analytical model for characterizing metastable failures. MSF-Model integrates new modeling concepts that make it possible to model metastable failures that conventional models cannot capture. We reproduced and validated MSF-Model through real experiments, which show that metastable failures are predicted with high accuracy by the queuing-based model.
2305.00436
Sustainability Competencies and Skills in Software Engineering: An Industry Perspective
Achieving the UN Sustainable Development Goals (SDGs) demands adequate levels of awareness and actions to address sustainability challenges. Software systems will play an important role in moving towards these targets. Sustainability skills are necessary to support the development of software systems and to provide sustainable IT-supported services for citizens. While there is a growing number of academic bodies, including sustainability education in engineering and computer science curricula, there is not yet comprehensive research on the competencies and skills required by IT professionals to develop such systems. This study aims to identify the industrial sustainability needs for education and training from software engineers' perspective. We conducted interviews and focus groups with experts from twenty-eight organisations with an IT division from nine countries to understand their interests, goals and achievements related to sustainability, and the skills and competencies needed to achieve their goals. Our findings show that organisations are interested in sustainability, both idealistically and increasingly for core business reasons. They seek to improve the sustainability of processes and products but encounter difficulties, like the trade-off between short-term financial profitability and long-term sustainability goals. To fill the gaps, they have promoted in-house training courses, collaborated with universities, and sent employees to external training. The acquired competencies make sustainability an integral part of software development. We conclude that educational programs should include knowledge and skills on core sustainability concepts, system thinking, soft skills, technical sustainability, sustainability impact and measurements, values and ethics, standards and legal aspects, and advocacy and lobbying.
Rogardt Heldal, Ngoc-Thanh Nguyen, Ana Moreira, Patricia Lago, Leticia Duboc, Stefanie Betz, Vlad C. Coroama, Birgit Penzenstadler, Jari Porras, Rafael Capilla, Ian Brooks, Shola Oyedeji, Colin C. Venters
2023-04-30T09:34:07
http://arxiv.org/abs/2305.00436v2
# Sustainability Competencies and Skills in Software Engineering: An Industry Perspective ###### Abstract Context: Achieving the UN Sustainable Development Goals (SDGs) demands a shift by industry, governments, society, and individuals to reach adequate levels of awareness and actions to address sustainability challenges. Software systems will play an important role in moving towards these targets. Sustainability skills are necessary to support the development of software systems and to provide sustainable IT-supported services for citizens. **Gap:** While there is a growing number of academic bodies, including sustainability education in engineering and computer science curricula, there is not yet comprehensive research on the competencies and skills required by IT professionals to develop such systems. **Research goal:** This study aims to identify the industrial sustainability needs for education and training from software engineers' perspective. For this we answer the following questions: (1) what are the interests of organisations with an IT division with respect to sustainability? (2) what do organisations want to achieve with respect to sustainability, and how? and (3) what are the sustainability-related competencies and skills that organisations need to achieve their sustainability goals? **Methodology:** We conducted a qualitative study with interviews and focus groups with experts from twenty-eight organisations with an IT division from nine countries to understand their interests, goals and achievements related to sustainability, and the skills and competencies needed to achieve their goals. **Results:** Our findings show that organisations are interested in sustainability, both idealistically and increasingly for core business reasons. They seek to improve the sustainability of processes and products but encounter difficulties, like the trade-off between short-term financial profitability and long-term sustainability goals or an unclear understanding of sustainability concepts. To fill these gaps, they have promoted in-house training courses, collaborated with universities, and sent employees to external training. The acquired competencies should support translating environmental and social benefits into economic ones and make sustainability an integral part of software development. We conclude that educational programs should include knowledge and skills on core sustainability concepts, system thinking, soft skills, technical sustainability, building the business case for sustainability, sustainability impact and measurements, values and ethics, standards and legal aspects, and advocacy and lobbying. sustainability, software engineering, software sustainability, sustainable software, education, software competencies, sustainable development goals, skills. ## 1 Introduction Digitalisation is pervasive and can either help or hinder the United Nations Sustainable Development Goals (SDGs)1[1, 2]. Organisations understand that but struggle in implementing sustainability in their service portfolio and their business practices [3, 4]. Consequently, there is a need to understand which competencies and skills industry requires, and how they can be integrated into their practices. These new competencies and skills must be acquired through adequate learning programmes and courses addressing the different sustainability dimensions, i.e. environmental, economic, social, technical, and individual [5]. 
For software engineers2, this ranges from the more technical aspects supporting Green IT and software sustainability to more social and individual ones facilitating software-driven processes in society. Footnote 1: [https://sdgs.un.org/goals](https://sdgs.un.org/goals) Academia has made efforts to introduce sustainability in regular computer science programmes, as well as suggesting the skills and competencies needed by their students [6, 7, 8]. According to these studies, future software engineers need to develop a sustainability mindset and acquire sustainability competencies able to produce sustainable IT-based systems or systems that both support more sustainable processes and monitor the achieved sustainability goals [9]. However, industry is still unclear on which sustainability skills different sectors require to achieve their sustainability goals. Recent non-academic literature highlights the role and importance of skills for sustainability. For instance, even the British Plastic Federation [10] mentions that the sustainability skills of employees are key for any strategy oriented to achieving a sustainable business. Similarly, Terrafinitii [11], a sustainability consultancy, highlights that effective sustainability performance demands sustainability skills and competencies -- not only from sustainability professionals but also in other roles within the organisation. Hence, we not only need to identify which skills are more relevant in delivering sustainability in a particular organisational unit but also related units must recognise sustainability as a goal of the company's core business. What prompts this study is that we, the authors, are under the impression that across industry there is (1) only a partial understanding of sustainability and there is (2) a limited understanding of how to address the lack of related competencies. Additionally, across academia, there is (3) a lack of understanding of the needs of industry related to sustainability and (4) a need for a concrete teaching curriculum that could lead to the high-quality sustainability education which software engineers require. This work aims to investigate the industrial sustainability needs for education and training from a software engineering perspective. To achieve this, we addressed the following three research questions: RQ1: _What are the interests of organisations with an IT division with respect to sustainability?_; RQ2: _What do organisations want to achieve with respect to sustainability, and how?_; and RQ3: _What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?_ To this end, we interviewed sustainability and IT experts from twenty-eight (28) organisations in nine (9) different countries. Our main contributions are: * A far-reaching overview of the organisations' perspective on sustainability, including (i) their general interest in sustainability; (ii) the sustainability goals they want to achieve; (iii) their achievements towards these goals and the difficulties faced in achieving them; (iv) the sustainability skills and competencies they already possess in-house and those that are missing; and (v) solutions to acquire the missing ones. * Initial insights on the gaps in current academic and non-academic training programmes for software engineers, and our recommendations to address those gaps for those who design the new programmes to enable future software engineers to achieve sustainability skills and competencies.
The rest of the paper is structured as follows: Section 2 provides a comprehensive background of the concept of sustainability. Section 3 elaborates on the employed research method. Section 4 presents the results regarding competencies and skills. Section 5 interprets the study's findings. Section 6 offers recommendations for training programs to address identified gaps in competencies and skills. Section 7 provides an analysis of the threats to validity. Section 8 offers a review of related work. Lastly, Section 9 concludes the study and highlights potential future research directions. ## 2 Background This section starts with some background on the general notion of sustainability and follows with specific overviews of sustainability in IT and then Software Engineering. Although the principles of sustainability have been known to numerous human cultures throughout history, their first scientific usage was most likely in H.C. von Carlowitz's principles of sustainable forestry from 1713 [12] (summarised in [13]). As Hilty and Aebischer [14] comment, as the understanding at the time was that forests have one purpose, to produce wood, Carlowitz's basic principle is quite straightforward: _"do not cut more wood than will grow in the same period of time"_. Of course, we know today that a forest accomplishes many further functions (such as producing oxygen, filtering air and water, preserving biodiversity, recreational and aesthetic values, and many more), which makes the sustainability perspective much more complex. The paradigm, however, is unchanged: As Venters et al. [15] discuss, the verb "to sustain" and the noun "sustainability" come from the Latin "sustenere", which was used for both "to endure" and "to uphold" something. Hence, "sustainability" refers to the capacity of a system to endure for a certain amount of time [15]. Within the conceptualisation of sustainability put forward by the Brundtland Commission in 1987 [16], the system in question is Earth itself and the period of time, while not exactly specified, includes many generations into the future. The Brundtland definition thus encompasses two aspects: distributive justice (_"the essential needs of the world's poor, to which overriding priority must be given"_[16]), but also intergenerational justice, for which the preservation of the biosphere is a prerequisite. The relationship between the IT sector (or digitalisation in general) and sustainability has been conceptualised in various ways and under different names. Early concerns with the environmental footprint of the IT sector itself are usually referred to as "Green IT", while the purposeful deployment of IT to reduce the environmental footprint in other economic or societal sectors is often called "Green by IT" [17]. Other terms used to describe the latter are, for example, ICT4EE (ICT for energy efficiency), "Energy Informatics" [14] or "I(C)T enabling effect" [18]. Numerous further names describe the relationship between digitalisation and sustainability in general, which includes both the concepts of "Green IT" and "Green by IT", but also the further dimensions of sustainability, in particular, the social one. Such names include "Digital Sustainability", "Sustainable Computing" or "ICT for Sustainability (ICT4S)" [14]. 
For the "software and sustainability" domain, there are also two views, which are quite similar to those of the broader "IT and sustainability" field [19]: one looking at the sustainability of software itself (foremost, thus, a technical notion of sustainable software), the other at deploying software engineering for sustainability (SE4S) beyond the software systems themselves [15]. Acknowledging both views, the "Karlskrona Manifesto for Sustainability Design" extends the well-known three dimensions of sustainability by another two: technical (to account for the desired long-term use of software) and individual (addressing personal freedom, dignity and fulfilment), for a total of five dimensions [5]. While the individual dimension is not always represented, most literature in the field accounts for both the technical as well as the three established dimensions (environmental, economic, and social) [20]. As is the case in general with sustainability, the dimensions are not entirely independent and there are often trade-offs among them [21]. And while current software engineering practice gives high value to the technical and economic dimensions, the social and environmental ones (and thus the crucial components of the sustainability concept as understood by the Brundtland commission) are often ignored [20].

## 3 Study Design and Research Method

To answer our research questions, we used a mixed-methods approach [22], combining individual interviews and focus group interviews in a semi-structured format. For the sake of brevity, both individual interviews and focus group interviews are referred to as interviews hereafter. Our study process is illustrated in Figure 1. In summary, we extensively discussed our research goals and steps (study design and planning) and created a set of PowerPoint slides to guide our conversations in all interviews and focus groups (data collection). Additionally, we did a pilot study that provided a baseline structure for the subsequent interviews. All interviews were recorded and transcribed, and the relevant information was retrieved with the support of a codebook (data extraction). Finally, the coded data was analysed and the results are presented.

### _Goals and Research Questions_

To address our eventual goal (i.e., to design education programmes that teach the required sustainability competencies and skills to future software engineers), we first need to understand the needs of the field, i.e., of industry3. Accordingly, we formulate the following overarching research question (RQ): _"What are the industrial sustainability needs for education and training from software engineers' perspective?"_.

Footnote 3: In general, with the term “industry” we mean practice from both the private and public sectors.

We break down the RQ into three research sub-questions which guide our data collection:

RQ1: _What are the interests of organisations with an IT division with respect to sustainability?_ The sustainability focus depends on the specific business domain and priorities. In this respect, the sustainability perspective depends on their specific interests and stakes. RQ1 helps us define the possible scope of future education programmes.

RQ2: _What do organisations want to achieve with respect to sustainability, and how?_ Sustainability can add significant value to both private and public organisations.
However, to achieve this aim, sustainability must be tailored and embedded in the DNA of the organisation itself, for example, its business goals, values, and vision of the future market. Accordingly, this research question investigates the target achievements (what the organisations aim to achieve with respect to sustainability), the influence of software/ICT on these achievements, as well as the difficulties they face and expect. RQ2 helps us define and prioritise the various foci of future education programmes (e.g., creation of innovation, acquisition of new markets, compliance with regulation).

RQ3: _What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?_ To different extents, organisations are becoming aware of the sustainability-related competencies and skills that they already have in-house or that they miss in order to achieve their goals. This research question investigates the gaps in the IT workforce and, if applicable, the strategy organisations have in place or envisage to acquire the missing competencies and skills. RQ3 helps us define future education programmes' types and contents (e.g., mono- versus interdisciplinary, higher education versus professional training).

Figure 2 shows the relationship between the RQs and the themes derived from the interviews. In Section 4, we will report in detail the findings related to each theme.

### _Data collection and analysis_

#### 3.2.1 Data collection

To collect data, we conducted nine individual interviews and seven focus group interviews in a semi-structured format with industry practitioners. We contacted and recruited the participants by using our professional contacts. Individual interviews were employed due to the researchers' familiarity with interviewees available in their networks. We conducted several focus group interviews to catalyse discussions among interviewees. Our selected ICT organisations have supported or participated in sustainability initiatives or have an ICT department involved in sustainability actions as part of the strategy of the company. We selected organisations from different countries and domains, as listed in Table I, to diversify the perspectives regarding sustainability. The organisations are anonymised to maintain confidentiality.

In total, we interviewed 28 experienced IT/sustainability practitioners from 28 distinct organisations in different industrial domains belonging to 9 countries. We followed the statistical classification of economic activities in the European Community [23] to classify the business sector of the organisations. To classify the organisation sizes, we followed the OECD scheme proposed in [24]. Our participating organisations cover a wide spectrum of areas from software to telecommunications and resource supply. While most of them are private, nearly a third of the organisations (9/28) are from the public sector. Our participants have significant industry experience and hold different roles and positions in their organisations, as shown in Table II. The second column shows the business model with respect to the sustainability of their organisations, which is elaborated in more detail in Section 4.1. The majority of the participants are seniors with more than ten years of professional experience, and many have a computer science background or degree. We used online teleconferencing tools (e.g., Microsoft Teams, Zoom, Skype) to interview the participants.
At the beginning of each interview, we took around five minutes to explain its goals. The prepared interview questions4 were then asked one by one. The interviews were conducted from March to September 2021 and recorded with the consent of the interviewees. Individual interviews lasted between 30 minutes and 2 hours, while focus group interviews took somewhat longer as more discussion arose. The recorded interviews were transcribed manually or automatically using tools such as Microsoft Office 365, depending on the researchers' preference. For automatic transcriptions, the responsible researchers spent time manually correcting transcription mistakes in the tool to ensure the quality of the research.

#### 3.2.2 Data extraction and analysis

To analyse the interviews, we employed the thematic data analysis approach proposed in [25]. To facilitate the data analysis, we utilised Saturate App5, a web-based platform that supports collaborative qualitative analysis. It allows multiple researchers to simultaneously perform activities related to coding, labelling, and categorisation. The data analysis process was carried out as follows. Firstly, the transcripts were imported into Saturate App. Then, one researcher created the first version of a codebook in an Excel spreadsheet based on the interview questions. During the _data extraction pilot_ stage, the interviewing researchers performed initial coding of the data collected from their own interviews. They also validated and extended the codebook as needed until it was deemed stable by all coders. Finally, during the _data extraction_ stage, the ten researchers involved in this study were divided into three sub-groups, each having at least three members. Each sub-group analysed one research question defined in Section 3.1. The groups conducted several workshops to validate and refine the coding done in the first stage so that the original coding for all interviews was verified and agreed on by several researchers.
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
**ID** & **Country** & **Sector** & **Type** & **Size** \\
\hline
1 & Colombia & Technology consultancy & Private & \(<\)50 \\
2 & Finland & Software consultancy & Private & \(<\)250 \\
3 & Finland & Software & Private & \(<\)50 \\
4 & Finland & Software consultancy & Private & \(<\)250 \\
5 & Germany & Technology & Public & \(<\)50 \\
6 & Germany & Technology & Private & \(<\)50 \\
7 & Germany & Technology & Private & \(<\)50 \\
8 & Netherlands & Software consultancy & Private & \(<\)250 \\
9 & Netherlands & Public administration and … & Public & 250+ \\
10 & Netherlands & Software consultancy & Private & 250+ \\
11 & Norway & ICT industry representative & Public & 250+ \\
12 & Norway & Energy provider & Public & 250+ \\
13 & Norway & Mobility provider & Public & 250+ \\
14 & Norway & Software consultancy & Private & 250+ \\
15 & Norway & Software consultancy & Private & \(<\)50 \\
16 & Norway & Waste management & Public & 250+ \\
17 & Norway & Technology & Public & 250+ \\
18 & Portugal & Software & Public & \(<\)250 \\
19 & Portugal & Software and Technology & Private & \(<\)50 \\
20 & Portugal & Software and Technology & Private & \(<\)50 \\
21 & Portugal & Software consultancy & Private & \(<\)50 \\
22 & Spain & Water supplier & Public & 250+ \\
23 & Spain & Marketing and Advertising & Private & \(<\)250 \\
24 & Spain & Mobility provider & Private & 250+ \\
25 & Sweden & Networking and Telecommunication & Private & 250+ \\
26 & Sweden & Telecommunication & Private & 250+ \\
27 & UK & Technology & Private & 250+ \\
28 & UK & Technology & Private & \(<\)50 \\
\hline \hline
\end{tabular}
\end{table}
TABLE I: Organisations (anonymised) interviewed per country

Fig. 1: Study design and execution

Fig. 2: Themes with respect to research questions

The codebook has two purposes. Firstly, it formalises what the researchers have analysed from the data during the _data extraction pilot_ stage. Secondly, it is used as a guideline in the _data extraction_ stage, guiding the researchers who validate and correct the results initiated in the previous step. In the codebook, at the highest level, the coded data were organised according to our three research questions. The main topics are interests, target achievements, and competencies. The codes belonging to these main topics were further organised into three levels depending on their abstraction. The deeper a code goes, the more detail it represents. The codebook6 is shared as supplemental material to help readers better understand the findings that we report in Section 4, where the codes are highlighted in **bold**.

The codebook is organised as follows. Horizontally, it is divided in accordance with the main topics mentioned above (i.e., interests, target achievements, and competencies). The codes belonging to each main topic are vertically distributed into three levels. Each code is accompanied by a definition, a description of when the code is applicable, and a coded text example with the respective source. Figure 3 illustrates the overall structure of the codebook, and Table III shows sample final results of our data analysis phase, which can also be found in the supplied codebook. The examples contain quotes taken from the interviews and how they are coded during data analysis; a small illustrative sketch of such an entry is given below.
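To make the codebook structure above more concrete, the following is a minimal sketch, in Python, of how a single codebook entry could be represented. It is purely illustrative: the study's actual codebook is an Excel spreadsheet, not code. Only the quote, the three-level code path, and the source are taken from Table III; the field names and the definition/applicability texts are hypothetical.

```python
# Purely illustrative sketch of a single codebook entry; the field names, the
# definition, and the applicability text are hypothetical, while the quote,
# code path, and source reproduce the Org. 3 example reported in Table III.
from dataclasses import dataclass
from typing import Tuple


@dataclass
class CodebookEntry:
    main_topic: str             # "interests", "target achievements", or "competencies"
    code_path: Tuple[str, ...]  # up to three code levels, from general to specific
    definition: str             # what the code means (hypothetical wording here)
    applicability: str          # when the code should be applied (hypothetical wording)
    example_quote: str          # coded text example taken from an interview
    source: str                 # the interviewed organisation the example comes from


attract_talents = CodebookEntry(
    main_topic="interests",
    code_path=("for-business", "economical", "attract-talents"),
    definition="Sustainability helps the organisation attract and retain talent.",
    applicability="Apply when an interviewee links sustainability to recruiting or retaining employees.",
    example_quote=("Young people want to join us because they want to work for "
                   "technologies which are basically improving sustainability."),
    source="Org. 3",
)

if __name__ == "__main__":
    print(" / ".join(attract_talents.code_path), "->", attract_talents.source)
```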
The first set of two examples is for RQ1, which helps us identify why sustainability becomes an interest for our interviewed organisations in terms of economical aspects. The second set shows the organisations' goals in relation to sustainability (RQ2). The last set, for RQ3, indicates skills that employees of our interviewed organisations possess in order to help them achieve the established sustainability goals.

\begin{table}
\begin{tabular}{p{42.7pt} p{113.8pt} p{42.7pt} p{113.8pt}}
\hline \hline
**Interviewed organisation** & **Quote** & **RQ** & **Code** \\
\hline
3 & _“Young people want to join us because they want to work for technologies which are basically improving sustainability.”_ & RQ1 & for-business / economical / attract-talents \\
27 & _“If we have skills to optimise our algorithms to reduce our cloud costs, it is a financial benefit for us.”_ & RQ1 & for-business / economical / economic-return \\
\hline
1 & _“When you see the flow of value and start identifying where waste goes, you can deliver a better and cleaner system.”_ & RQ2 & goals / for-own-process / understand-to-help-customers \\
18 & _“We are seeing IT more as a means to get insights on how to create sustainability, to be able to steer the effects and decisions you make.”_ & RQ2 & goals / for-own-process / add-value \\
\hline
2 & _“We have talents with long backgrounds in software development and good knowledge of building top-quality software.”_ & RQ3 & have-in-house / technical / quality software \\
15 & _“We have a lot of employees who are chatted in IT. They are also business advisors.”_ & RQ3 & have-in-house / technical / business \\
\hline \hline
\end{tabular}
\end{table}
TABLE III: Example results of data analysis

Footnote 6: Codebook access link: [https://bit.ly/3wgt0dp](https://bit.ly/3wgt0dp)

## 4 Results

This section describes the demographics of participants and the findings with regard to our research questions.

### _Demographics: role with respect to sustainability_

In this section, we report the demographics concerning the business visions of our interviewed organisations and their perceptions of sustainability.

#### 4.1.1 Organisations' role with respect to sustainability

We classified our interviewed organisations as producers or consumers (or both) of sustainability solutions in IT. The _producers_ are the organisations that produce tools or software solutions to support sustainability initiatives. The organisations that use these products are classified as _consumers_. Some organisations may play both roles. The "Business model with respect to sustainability" column of Table II shows the classification of the organisations that use one of these models or both. While the majority of our interviewed organisations, 16 out of 28 (57.1%), from nine different countries, are solely producers, only three organisations (10.7%) from three countries are solely consumers. Nine organisations (32.1%) from three countries play both roles. Finally, 9 out of 28 organisations operate in the public sector, while the rest are private organisations.

In our sample, many organisations develop sustainability solutions in-house rather than rely on other organisations. In many cases, the sector to which a company belongs does not impact the role adopted. Furthermore, the producer role is more common in software and technology organisations. Therefore, we can see that digitalisation plays a key role in providing consumers with sustainable solutions. Finally, we observe that organisations adopting both roles belong to the public sector (e.g., energy, water management), acting as end-users that demand sustainable solutions but also developing, in their IT departments, sustainability solutions that are used by the environmental department.

#### 4.1.2 Organisations' perception of sustainability

We did not explicitly ask for the organisations' perception of sustainability during the individual interviews and focus group interviews as we did not want to introduce a confirmation bias, especially in the focus groups. However, we did extract the organisations' perceptions of sustainability that emerged when analysing the qualitative data. Overall, the main focus of organisations when discussing sustainability is on the environmental dimension. The statements of eleven organisations can be interpreted as perceiving sustainability as environmental issues such as carbon emissions, climate change, and energy consumption. Eight organisations also mentioned the economic dimension as part of sustainability. Of these, three organisations referred to the financial impact of their products on their businesses (_"[...] whenever we are making the systems, especially architectural or technological decisions, we consider the economical impact a lot"_ - Org. 2) and also talked about economic sustainability in the sense of circular economy.
The social and technical dimensions have been considered as part of sustainability by seven organisations. For the social dimension, the focus is on their own workforce (e.g., providing yoga classes), the customer (e.g., improving customer satisfaction), and society (e.g., fighting against poverty) as a whole. For the technical dimension, the interviewees' statements suggest that sustainability is related to quality attributes of IT products and services, such as reusability, robustness, security, etc. Finally, 12 organisations explicitly consider sustainability as related to more than one dimension and, surprisingly, only four organisations mentioned the SDGs as relevant to them.

In conclusion, we observe that a prominent focus of organisations when discussing sustainability is on the environment. Economic, technical, and social issues are also popular topics.

### _RQ1: What are the interests of organisations with an IT division with respect to sustainability?_

For this RQ we asked the organisations about their interests in sustainability from four perspectives: their business, their customers, their shareholders, and their stakeholders.

#### 4.2.1 Interests with respect to business

When discussing the reason why sustainability is interesting for their **business**, it was not surprising to observe that economic reasons play an important role, followed by moral and social matters. With regard to **economic** reasons, sustainability helped our interviewed organisations open new business opportunities, increase their competitiveness, give them the license to operate, reduce costs, and acquire and retain talent more easily. In particular, sixteen organisations affirmed an interest in sustainability because it creates new **business opportunities** or helps mitigate potential threats. Overall, our interviewed organisations have one of the following three profiles: sustainability is their main business (e.g.,
they offer solutions for circular economy or sustainability reporting), they are in an industry that is being highly impacted by sustainability demands (e.g., mobility), or their customers are demanding it.

Nine organisations viewed sustainability as a matter of **competitiveness and survival**. While some use sustainability to differentiate themselves from competitors, others are aware that other organisations are investing in sustainability-related initiatives and do not want to be left behind. For example, Org. 6 stated that _"[sustainability] is another point of differentiation"_ and Org. 23 mentioned that _"all the organisations that are emerging and that are effectively working, are those that have that sustainable consistency at an environmental and social level"_. Finally, for some organisations, it is a matter of making sure that they will continue to **utilise the resources** they need to function; for example, Org. 22 is fighting climate change because _"we are going to stop having the main resource of our factory, which is the water"_.

Three of the organisations explicitly stated that implementing sustainable practices can bring them **economic advances**. Org. 27, for example, has a predictive algorithm for product demand that helps its clients minimise food waste and seeks to optimise its algorithms to reduce cloud and energy costs. For six of the organisations, sustainable practices were adopted to comply (or help others to comply) with **regulations**. Org. 22, for example, shared that _"everything to do with climate- and environmental policies have become structural"_. Finally, three organisations saw that sustainability is vital to **attracting talent**. These organisations feel that highly skilled professionals want to work for firms where they can share their values and put time and effort into something meaningful.

Sustainability was also related to **moral** concerns. Eight organisations invested in sustainability because they truly believe in it, sometimes directly related to **aligning to the company's values**. Two illustrations of this belief are _"our goal as a company has been focused on providing something to society and not just doing profit"_ (Org. 18) and _"our mission or reason for existence is that our business is coming from sustainability"_ (Org. 3). Unsurprisingly, when talking about their interest in sustainability, **environmental** concerns such as reduction of waste, water and energy use, and carbon emissions were the most present (mentioned by fifteen organisations). These concerns were related to both the purpose and the operation of the business. Finally, four organisations explicitly stated that sustainability (or specific dimensions of it) was **not of concern** to them. For example, ecological sustainability was _"almost the last perspective for us in day-to-day life"_ (Org. 2).

#### 4.2.2 Interests with respect to customers

In contrast to the business perspective (mainly focused on economics), the **customers** of our interviewed organisations are reported to be most attracted to sustainability by moral values. In particular, they align their businesses to sustainability due to ecological and societal concerns. Economics, e.g., business opportunities and returns, is the second most popular reason for investments. Here it is worth highlighting that several of the interviewed organisations adopted a business-to-business model; therefore, their customers were other organisations rather than individuals.
Among the **moral** reasons driving our interviewed organisations' customers to sustainability, sixteen organisations shared that their customers wanted to **protect the environment** by reducing carbon emissions and electricity use. At the same time, **social matters** were of concern to nine organisations. In particular, COVID-19 was the most frequently mentioned issue, as the pandemic forced organisations to adapt their businesses for survival. For example, Org. 23 shared that most products requested by its customers in the last two years were related to the pandemic. In addition, four organisations viewed **value-alignment** as another reason for customers' interest in sustainability because this is an important concept in society.

Regarding **economic** aspects, **investment returns** and **business opportunities** are the two most popular reasons. Three organisations mentioned that sustainability is a core business value of their customers: _"Sustainability and circular economy are [our customer]'s core business."_ (Org. 3). At the same time, the focus on sustainability has **evolved**, so our interviewed organisations and their customers had to proactively adapt their businesses to the new trends. Interestingly, seven organisations mentioned that, despite their interest in sustainability and in products addressing sustainability-related aspects, they still struggle to win customers due to customers' lack of interest (**no interest**). Org. 21, for example, admitted that _"I do love to design more services for sustainability, but I don't get any requests, and I really struggle to sell it."_

#### 4.2.3 Interests with respect to shareholders

Compared to the interest with respect to business and customers, the interest with respect to **shareholders** seems less important, as it has been mentioned by only thirteen organisations. In particular, economic interests have been mentioned by nine organisations, followed by societal concerns (four organisations). Three organisations mentioned that the interest of their shareholders in sustainability had changed over time.

The **economic** interest is what organisations see as important for their shareholders. They mainly argue that their shareholders consider sustainability as a **business opportunity** to increase their financial performance and their market share. See, e.g., Org. 24: _"If we don't adapt the business and create new KPIs and processes related to sustainability, the risk is high for the shareholders because we can lose some parts of the market."_ Four organisations do consider **societal** concerns as an important aspect for their shareholders. Org. 18 put it as follows: _"[shareholders] have a vision not just of making a profit but as a vision of contributing to a better society (...)."_ Additionally, the interviewees state that the shareholders' interest in sustainability **evolves** due to compliance constraints (e.g., the EU taxonomy for sustainable activities7) and societal concerns (e.g., social responsibility).

Footnote 7: [https://bit.ly/3xYpBAF](https://bit.ly/3xYpBAF)

#### 4.2.4 Interests with respect to stakeholders

The responses from the organisations show that sustainability interest from **stakeholders** is highly influenced by media news on sustainability, especially about **environmental concerns** like reducing emissions and fighting climate change: "_(...) if you take into account the ecological point of view, you'll have less \(\mathrm{CO_{2}}\)"_ (Org. 7).
There are also several drivers for the **evolution** of stakeholders' interests in sustainability, such as the United Nations Framework Convention on Climate Change (UNFCCC), which launched the Race to Zero campaign and influenced several organisations working towards sustainability. Three organisations are working on building an ecosystem with partners who share the same sustainability values because sustainability **value-alignment** is an important element for their stakeholders. In addition, **employees** are key stakeholders who drive sustainability interest within six organisations because they want to feel a sense of contributing positively towards sustainability: "_They are aware of the sustainability issues, and they count to have a positive impact on these issues."_ (Org. 18). Also, organisations want to create employee satisfaction through different aspects of sustainability to attract talent based on the company's identity and activities towards sustainability. In addition, for eight organisations, stakeholders' interests revolve around **societal concerns**, such as human rights, as well as individuals taking action and being accountable for their activities towards sustainability.

**RQ1 summary**

**Interests with respect to business:** The economic benefit brought by sustainability was the main driving force for the organisations' interest. Concerns about environmental impacts were also very present.

**Interests with respect to customers:** Our interviewed organisations felt that their customers had environmental and social concerns that they had to respond to.

**Interests with respect to shareholders:** Economic benefits are what the organisations see as most important for their shareholders.

**Interests with respect to stakeholders:** External drivers such as the media and the development of international frameworks highly influence the interest of stakeholders in the sustainability of the organisations. Another main driver is employees' personal interest.

### _RQ2: What do organisations want to achieve with respect to sustainability, and how?_

To answer this RQ, we asked the organisations about what they want to achieve with sustainability in their business, how their ICT products/services support achieving these goals, and what difficulties they faced, or expect to face, in achieving these goals.

#### 4.3.1 Established sustainability goals

The sustainability **goals** of the interviewed organisations focus primarily on their processes, followed by their need to create or improve their products, and finally on the external factors impacting their goals. Interestingly, social goals (e.g., human rights and inclusion) are not their top priority at the moment. The RQ1 findings show that the organisations and their customers have shown a strong interest in social issues when referring to sustainability. In particular, social matters are the second highest interest of our organisations and of their customers. However, these aspects are not taken into account when organisations define sustainability goals. This indicates that although social matters are good reasons to draw organisations' attention to sustainability, they still do not create viable opportunities for them to set related business goals.

In relation to the internal working **process**, fourteen organisations highlighted how **process improvement** had contributed to addressing their sustainability goals, including automation and optimisation. For example, Org.
1 highlighted the importance of values and mindset: "_shift [our business partners]'s mind to the value mindset from the project mindset._" Five organisations stressed the importance of **collaboration and leading by example** to inspire, influence, and motivate others to follow their lead. Six organisations reflected on their personal and professional decisions motivated by internal organisational and personal values to identify opportunities and take responsibility in relation to what motivates and positively influences their **decision making process**. Org. 18 stressed that as a company, they "_have always had the goal of having a positive contribution to society as a whole_". Four organisations commented that the organisational environment, belief, awareness, and communication linked to values were critical to **changing culture**. However, this needs to "_be accompanied by a sustainable pace_" (Org. 1). Three organisations had introduced **new processes** to address sustainability internally but wonder how they could optimise their processes.

Concerning the organisations' planned **products**, seven organisations highlighted opportunities to develop **new products** to create new markets. For example, the core business of Org. 15 is reviewing their clients' products in order to suggest new sustainable business strategies: "_We try to take a position in the market not as a regular IT supplier but more as a partner. We [and our customers] aim not only to make financial benefits but also benefit the environment and the social side._" Five organisations highlighted how they are improving product quality for both their clients and their organisation by demonstrating how sustainability can be integrated as a core element. When clients lacked the required knowledge or expertise, these organisations were able to provide models and examples of good practice.

Finally, regarding the **external** factors affecting the company goals, three organisations discussed the importance of their customers making the **right decision** for the larger community and global sustainability. Org. 2 even took this as far as challenging the need for a customer's development proposal, resulting in business loss when the potential customer decided to cancel the project. Also, three organisations highlighted the importance of looking beyond the boundary of the company to engage with sustainability in their **supply chains**. Org. 23 emphasised that "_we are going to review our supplier code of ethics. We want our suppliers to sign it and take responsibility because if we do it well, we have to go hand in hand with people who do it well."_ However, they ponder how to achieve this and need tools to help them in making the right decisions.

#### 4.3.2 Plan and experience for achieving the goals

This section presents findings on executing sustainability goals and reported experiences.

4.3.2.1 Plan: Among the **steps** to achieve the established sustainability goals, organisations mention actions related to external factors with an impact on their goals, as well as the changes required to their internal process and product. With regard to **external** factors, the most cited concern was seeking **collaboration**. Seven organisations prized collaborations with external entities (e.g., clients, municipalities, NGOs, and universities) and international alliances to push their limits, making them more ambitious in their goals.
The UN Global Compact8 was acknowledged as a good way to create synergies, gain strength, and be inspired by the work of others. The interviewed organisations also held thorough discussions on internal **process** transformation, focusing first on process design, second on tools, certification, and measurement, and third on implementation, evaluation, and internal collaboration.

Footnote 8: [https://unglobalcompact.org](https://unglobalcompact.org)

The **design** of sustainability processes was cited by five organisations, which use agile and incremental improvements, the ABCD process9, and the SSD framework. Org. 1, advocating for agile approaches, highlighted the importance of seeing a system as a matrix connecting its different parts to be aware of the effects of a given action on the system's value chain. Four organisations highlighted the need for **tools** for sustainable systems. The tools mentioned are Care to Create, a flourishing business canvas10 to capture economic and social value, the SSD framework for strategic sustainable development, and software for environmental accounting. Four other organisations discussed **certification**, sharing their uncertainties and concerns on measurement processes to achieve sustainability, particularly related to the lack of consistent methodologies to implement it. While two organisations referred to BRECAM and B Corp, one mentioned the importance of all types of certifications they adopted to prevent corruption and comply with data protection. Another company reported having their Environmental programme approved and a Social Responsibility programme already drafted. Also crucial to four organisations are the **measurement** processes to achieve sustainability, and related to this is the lack of consistent methodologies to implement such measurements, as stated by Org. 10: _"there's not a formal training program on how to develop sustainable solutions (...). [there's no] consistent methodology that guidelines how to implement it. (...) it will come to be certain because we're in a highly regulated company."_ Org. 17 uses BRECAM and other certifications to measure and track the achievement of their goals.

Footnote 9: ABCD is part of the Framework for Strategic Sustainable Development (SSD). Source: [https://www.naturalstep.ca/abcd](https://www.naturalstep.ca/abcd)

Three organisations mentioned the importance of process **evaluation**, either by setting clear objectives (e.g., becoming \(\mathrm{CO}_{2}\) neutral), implementing and testing them, or by defining end-to-end sustainable propositions to make contributions to the community. Org. 1 also complained that when a company is in financial trouble, the quickest decision is firing people instead of analysing the system and thinking of a way to add value to it. They called for a change of mentality where employees do not simply follow instructions but are able to raise their concerns. Lastly, three other organisations proposed different ways to work in **collaboration** with their clients and partners to promote sustainability, from organising workshops to joining the Global Compact and offering their clients solutions with extra sustainability features.

4.3.2.2 Experience: While achieving sustainability goals, the organisations collaborated with external business partners and reformed the internal organisation. Among the adopted actions, reducing energy consumption and cutting carbon emissions are the two most mentioned.
Regarding **external** factors, three organisations reported that they sought **collaboration** with business partners to bridge the sustainability gaps within their organisations. Asking for training to enrich the workforce's knowledge about sustainability and raise clients' awareness of sustainability is one of the most popular choices: _"We have organised some workshops with an environmental expert and also invited clients to speak precisely about the importance of sustainability."_ (Org. 23). Another solution is to purchase the services the organisations need to achieve their sustainability goals. For example, Org. 11 shared that they were working with a software consultancy company to produce a reporting toolbox tracking the amount of carbon emitted by its systems.

Regarding **internal** reform, organisations have adopted different solutions, such as **reducing carbon footprint** (seven organisations) and employing software technology to **automate** the working process (three organisations). To reduce carbon emissions, Org. 23 was particularly proud of its involvement in reforestation efforts to offset its carbon footprint. Org. 4 has recently provided bicycles for staff and encouraged cycling to work; the initiative has received high appreciation from its employees and the media. Regarding automation, Org. 28 states: _"Test automation helps build sustainable software because you have confidence that a particular unit operates in a certain way."_ Five organisations have **designed** their internal working processes in several ways to align themselves with the established sustainability goals. See the recommended procedure of Org. 1: _"So the first part is to identify value, to take care of value. Once you see that, you start changing things with little experiments."_ Three organisations also reported that their internal **culture** coincidentally changed on the journey to accomplish the goals. Org. 25 stated that aiming for sustainability goals initially created more work and caused objections, but over time, the passion has been growing among the staff in several departments. Finally, among those who sell **products** related to sustainability, such as power-saving utilities and carbon emission reporting software, three organisations mentioned that they invested in new infrastructure to **reduce energy consumption**. In particular, Org. 24 stated: _"We sell different products and services that help reduce emissions."_

#### 4.3.3 Encountered difficulties

The difficulties in achieving the established sustainability goals are categorised into external and internal. More internal difficulties were reported.

4.3.3.1 External difficulties: Economic barriers were the most frequently mentioned, followed by policy issues. Regarding **economic** barriers, four organisations emphasised the difficulty in finding **customers** willing to pay for more sustainable products or services. Org. 19 specifically mentioned that customers' procurement teams were _"only focused on price"_ and they had to sell the ideas to other decision-makers outside of procurement. Org. 16 aimed to be an enabler for the **circular economy** but was hindered by a lack of consistent models from other organisations. The economic barriers are not just in relation to customers; another company reflected on the difficulty of securing **investors**, who are focused on rapid growth and scaling up. **Policy** issues were highlighted for not creating the conditions for sustainable products and services to compete.
Policymakers represent a key difficulty, according to five organisations. If the policy context does not require or incentivise sustainability improvements in products and services, organisations struggle to compete against less sustainable alternative suppliers. Several organisations identified a gap between the aspirations of politicians and the **regulations** in force. Org. 9 felt this might be because politicians lack an understanding of digitalisation and therefore _"proposed unreasonable laws"_. The impact of these policy failures was reflected in the second most serious concern, that customers were not prepared to pay more for these sustainable products and services. Four organisations cited external **technological** barriers, such as a lack of charging infrastructure for electric vehicles.

4.3.3.2 Internal difficulties: People and processes are overwhelmingly represented in the organisations' answers, while technologies are barely mentioned. Specifically, they appear 25, 23, and 2 times, respectively. This shows that non-technological difficulties are prevalent in the industry.

Regarding **people** barriers, 14 organisations pointed out the lack of **understanding** of sustainability concepts as one of their most significant challenges. This may be due to the extent and vagueness of the concepts themselves and the insufficient knowledge of their employees. Some further explained that the complex conceptualisation of sustainability at a company level also makes searching for sustainability skills in new potential employees a challenging task, as the existing workforce is not qualified to assess the needed skills or their fulfilment by applicants. Org. 4, for example, stated: _"Terminology/concept is still vague, especially what kind of skills and competencies you already have in-house."_ Furthermore, ten other organisations identified the **culture** of their employees, its complexity and inertia, as one of the important internal challenges. Org. 25 states that _"We had many key stakeholders and people at the bottom. We had some commitment on the high management, but it didn't connect because it was blocked in sub-optimisations of middle management who try optimising their own budget or their own question."_ The culture is often oriented toward different KPIs (Key Performance Indicators) that conflict with sustainability goals. The short-term priorities and missing skills of **decision-makers** have also been mentioned by seven organisations. Org. 18 noted that _"managers usually have more short-term goals and it's not easy to sell them a long-term plan that won't give profits to the company for years."_

With regard to difficulties arising from the internal working **process**, we find the **financial** trade-off between short-term financial profitability and long-term sustainability goals to be one of the most frequently mentioned difficulties (encountered by 15 organisations). The issue occurs both in our interviewed organisations and in their customers. Nine organisations encountered an issue regarding the ability to carry out sustainability-relevant **measurements**. For example, Org. 9 admitted that _"[their employees] don't know how to measure sustainability, or how to advance the policy agenda on sustainability using IT or digitalisation"_. Org. 4 highlighted the challenges of calculating the CO\({}_{2}\) footprint of cloud services.
At first sight, these external economic barriers and internal short-term financial gains may seem to contradict the results of RQ1, which state that the economic benefits brought by sustainability are the main driving force for the organisations' interest in it. This is a common conflict: sustainability demands long-term investment, but it can be difficult to convince internal and external stakeholders to sacrifice fast economic growth in favour of the long-term economic benefits brought by sustainability.

**RQ2 summary**

**Established sustainability goals:** The interviewed organisations highlighted the need for improving their design processes and products to support sustainability and stressed the importance of a change in culture to positively contribute to society and help their customers make the right decisions.

**Execution plan and experience to achieve the goals:** To achieve the established sustainability goals, the organisations focus on seeking collaboration with their business partners and other external entities, transforming their internal working processes, and developing tools to support interconnectivity, interdependence, and adaptability. The experiences reported include knowing how to collaborate with external stakeholders effectively, reducing carbon emissions, and applying automation when possible.

**Encountered difficulties:** The difficulties reported are caused by internal and external factors. The major internal factors are those related to the trade-off between short-term financial profitability and long-term sustainability goals, an unclear understanding of sustainability concepts and goals, and the culture of the employees, which is often oriented towards KPIs in conflict with sustainability goals. With regard to external factors, economic barriers and inadequate policies are the two most frequently mentioned.

### _RQ3: What are the sustainability-related competencies and skills that organisations need to achieve the established sustainability goals?_

To answer this RQ, we asked the respondents about the skills and competencies available and missing within their organisations, as well as their approaches to acquiring those identified as lacking. In our context, skills are the specific learned abilities needed to perform a given job well. Competencies, on the other hand, are a person's knowledge and behaviours that lead them to be successful in a job.

#### 4.4.1 Skills and competencies available in-house

From the interview data, we observe that there is a wide variety of skills and competencies required by our interviewed organisations to achieve their established sustainability goals. Only a subset of these skills and competencies was claimed to be available in our respondents' organisations. In addition, the sets of available skills and competencies are not the same across organisations. These skills and competencies have been categorised into sustainability-related skills (e.g., organisations state that they have knowledge of the sustainability-related regulations in their domain), soft skills (e.g., organisations see that they have good problem-solving skills in sustainability challenges), and technical skills (e.g., organisations see that they have the ability to create technically sustainable solutions).

Organisations had two different perspectives on **sustainability-related** skills. They either thought about sustainability in a holistic, higher-level manner or focused on some specific details of sustainability.
From a high-level perspective, many of our interviewed organisations believed that sustainability knowledge is not that important for IT staff. In particular, seven organisations thought that IT staff do **not need** to acquire sustainability knowledge, and four other organisations required **little background** from their employees. This does not mean that these organisations do not emphasise sustainability; rather, they distinguish IT people from sustainability experts. When the organisations focused on specific sustainability-related skills, they mentioned different sustainability dimensions, application domains, and tools and approaches to achieve them. The organisations seem to have an understanding of both the **social** and **environmental** dimensions: _"[Our staff] have the environmental knowledge and have involved in GRI11 and CDP12. They can use these competencies to help our customers"_ (Org. 14). These dimensions also link closely to the **regulations** that the organisations have to obey, as mentioned by Org. 14: _"we've been working with our customers and see how EU regulations have evolved"_.

Footnote 11: GRI: an international independent standards organisation that helps businesses, governments and other organisations understand and communicate their impacts on issues such as climate change, human rights, and corruption. Source: [https://www.globalreporting.org/](https://www.globalreporting.org/)

Footnote 12: CDP: an international non-profit organisation that helps organisations and cities disclose their environmental impact. Source: [https://www.cdp.net/en](https://www.cdp.net/en)

The organisations pointed out several **soft skills** they think are valuable while aiming for sustainable solutions. Some of these skills are rather traditional, like **problem-solving** and **collaboration**, while others, such as **common sense**, **reflection**, and **influencing**, relate to the aim of having an effect on sustainability. The problem-solving and collaboration skills link closely to the sustainability-related skills presented above. The most referenced (seven organisations) category of soft skills was **influencing**, which shows how organisations recognise their skills to influence the customer and the outcomes. For example, Org. 2 mentioned, _"...their daily work has influenced customers' teams about what should be done"_.

The last set of skills that the organisations seek to have in-house is of a **technical** nature. Most of the categories link with IT-related skills and were clearly stated by our interviewed organisations, for example, **software quality**, **user-centricity accessibility**, **architecture**, **data management**, and **systems thinking** skills. While the first four skills are more familiar to the ICT community, the meaning of the systems thinking skill can be explained by the following statement of Org. 18: _"we are software engineers, so systems are quite familiar to us, so we know the different parts of it, how they interact, and how they join together."_ Surprisingly, our interviewed organisations emphasised most (five references) the business skills they have in-house, which they consider part of the technical skillset. This finding is highlighted by Org. 15: _"Most of our developers are also business advisors.
We are now adding more competent business advisors having an IT background."_

#### 4.4.2 Skills and competencies missing in-house

Similar to the analysis of the skills and competencies available in-house, in this section we cluster the missing skills and competencies into three categories: sustainability-related skills and competencies, soft skills, and technical skills.

With regard to **sustainability-related skills and competencies**, our interviewed organisations mentioned that they lacked many of them, such as the right talent who have knowledge of sustainability and can transfer that into new IT business opportunities, IT staff who both excel at technical skills and have sustainability knowledge, and talented programmers who can deliver energy-efficient code. In particular, six organisations recognised the importance of sustainability knowledge but faced difficulties in hiring the **right talent** with suitable sustainability-related skills. Five organisations mentioned that their staff lacked a **multi-disciplinary** skill set. For example, Org. 18 expected their IT staff to have some sustainability knowledge and stated that _"...would be good if we can have some of those professionals that could combine sustainability and good background on ICT."_ Three organisations wanted their IT systems to be **energy-efficient** and **environment-friendly** but did not have developers with the relevant programming skills.

Regarding missing **soft skills** that are crucial for organisations with respect to sustainability, communication and systemic thinking were frequently mentioned. In particular, poor **communication** skills are an issue experienced by four organisations. This problem is visible for the people who work directly with customers (e.g., the marketing department): _"We have been facing the challenges that at least we think that we have the perfect idea that the customer would benefit from, but we are having difficulties selling that to the customers."_ (Org. 4). The same company also experienced ineffective communication among its IT staff. **Systemic thinking** is the ability to have a holistic view of the factors and interactions that could contribute to a better possible outcome. However, three organisations could not identify this competence in their IT workforce. Org. 17 stated that _"if we had a framework or skills on how to put it all together in the bigger picture, we could have optimised our solutions for the entire system, not just specific code segments, or applications."_

Software engineering is a technological field, but our respondents mentioned several missing **technical skills**. Specifically, six organisations reported the lack of **metrics** to measure the impact of their IT products on sustainability. For instance, Org. 2 stated: _"We don't have good means to measure the sustainability level of certain software entities or our customers."_ Data has been increasingly collected in recent years, but three organisations had not equipped themselves well with **data management** skills. These organisations faced some difficulties in complying with the GDPR (General Data Protection Regulation) in terms of data handling.

#### 4.4.3 Solutions to acquire what is missing

Based on the identified skills and competencies missing in-house, we further investigated how the organisations are acquiring them.
Overall, the acquisition strategies can be classified into two types: internal (i.e., carried out entirely within the company) or external (i.e., when the skills and competencies are provided by a source external to the company).

In relation to the **internal** approaches, the most common (mentioned by 20 organisations), unsurprisingly, is providing and/or organising **in-house training**: _"...what we do to make the change in the organisation, I think we do both retraining and changing the behaviour in the organisation itself"_ (Org. 10). **Hiring** is also a widespread internal strategy, being mentioned by 13 organisations. Organisations use recruiting as an instrument to bring in new employees with suitable sustainability-related skills and competencies. This process also involves internal training in order to adapt newcomers to the organisation's culture and working process. There are several hiring targets adopted by the interviewed organisations, including looking for specific pre-defined competencies (e.g., _"We hire people that have some specific set of skills and also have a passion or interest in sustainability"_ - Org. 20) and people with the right mindset for the organisation. In addition, to address the communication issues, new hires are expected to _"...establish and maintain good discussions with customers and stakeholders"_ (Org. 2). Establishing **mentorship** programmes that engage experienced employees in sharing their own experience, knowledge, and know-how with other staff members, and conducting **internal events** for knowledge sharing, are two other solutions mentioned by two organisations.

When it comes to **external** approaches, collaborating with universities, sending employees to participate in courses about sustainability, and hiring consultants are popular solutions. Firstly, we found that a significant number of organisations (11) expected to acquire the missing competencies either via new hires contributing the right background from their **university** education or by means of research collaborations with universities. For example, Org. 7 stated: _"To get this knowledge into our own company, we really need research or try to get information from universities."_ Secondly, four other organisations frequently paid for **external courses** to train their own workforce. This strategy assumes that either suitable courses are available or external training organisations offer customisable course packages. Lastly, four organisations preferred to hire **consultants** on sustainability when sustainability-related competencies or skills are needed: _"For externally-reused software, cloud services, or to address specific sustainability goals, [...] we would partner up with somebody or buy consultancy hours"_ (Org. 16). This is another external strategy where missing competencies and skills are acquired temporarily and typically per project.

**RQ3 summary**

**Skills and competencies available in-house:** In order to reduce the expectation level for the staff, many organisations separate IT departments from sustainability experts, so a sustainability background is not required, or only to a limited extent, for IT-skilled employees. However, specific soft skills (e.g., problem-solving, collaboration) and technical competencies (e.g., architecture, data management) are expected and available within the IT workforce to achieve the target sustainability goals.
**Skills and competencies missing in-house:** Despite separating IT departments from sustainability experts to reduce the need for sustainability knowledge, many organisations still want to fill that gap for their IT staff. Improving communication efficiency and defining sustainability measurement metrics have often been mentioned as missing soft and technical skills within the organisations' workforces. **Solutions to acquire the missing skills and competencies:** The organisations have taken both internal and external approaches to fill sustainability knowledge gaps for their IT staff. Popular solutions are organising in-house training courses, collaborating with universities, sending employees to externally organised courses, and hiring sustainability consultants. ## 5 Interpretation of results This section discusses the skills gaps that are apparent from our results and that future educational programmes should address. Then, Section 6 proposes concrete topics to be considered in those educational programmes and in on-the-job training and collaborative activities in industry. **(On RQ1) Interests in sustainability are diverse and evolving. Currently, professionals are not able to understand and relate multiple aspects of sustainability and translate these relations into concrete business plans when needed. Educational programmes should enable this competency while being flexible and ready for changes.** Our data show that economics is the main driver for our interviewed organisations and their shareholders to invest in sustainability. This is not surprising since they must survive in the market. However, as shown in Figure 4, there is pressure from customers and stakeholders to push for social and environmental sustainability impacts. So, one must try to turn these aspects into economic profit; therefore, ideally, one must change the value proposition. The business vision of an organisation must be decided based on multiple factors: its own business interests and those of its customers, shareholders, and stakeholders. As such, to prepare a better workforce for IT organisations, future educational programmes should help professionals relate concerns belonging to different sustainability dimensions and be able to translate such relations into concrete business plans. Around 40% of the organisations mentioned that their interests in sustainability evolved due to new demands from customers and regulations. Such evolution might pose difficulties to organisations, so future educational programmes should be flexible and ready to adapt to changes. At the same time, 14% of the organisations are not yet concerned with sustainability. In this case, education plays a role in creating awareness within businesses about the relevance and opportunities of sustainability within IT, accelerating the IT industry's interest in sustainability. From the data, we identify that around 10% of IT organisations are interested in sustainability but do not know how to take it into account. Educational programmes and specific industry training can help. **(On RQ2) The IT workforce needs sustainability-related competencies and a deeper understanding of how sustainability impacts the development processes and the resulting products. 
Thus, educational programmes should provide them with the tools to develop sustainability-aware systems and inspire others across their network of collaborating organisations to embrace sustainability initiatives.** Sustainability is activating organisations to change internally (e.g., how development and decision-making processes are revisited to address sustainability goals) and externally (e.g., how organisations offer their customers new or improved products and services with respect to sustainability). At the same time, organisations encounter a number of difficulties (further analysed below) that designers of future educational programmes for IT students should take into account when making improvements. As summarised in Figure 5, lack of funds and sustainability understanding, as well as the need to change the internal culture to favour sustainability more, were the three major internal difficulties reported by the organisations. Financial challenges may force organisations to downgrade sustainability objectives to survive, even though sustainability is something most of them want. This shows a need to create more value for sustainability-aware software products, and this may require a change of regulations to support sustainability-aware systems. Half of our interviewed organisations complained that their IT staff or colleagues need a better understanding of sustainability. At the same time, we observe that 20% of the organisations consider current policies insufficient for driving sustainable development, and 16% struggle to persuade customers to pay more for sustainable products. This shows that even though customers want to buy more sustainable products, they are not necessarily willing to pay more. This is where politics can play a vital role in enforcing incentives, such as reducing taxes on green products and putting in place laws and adequate regulations. While non-technological issues seem to worry organisations more, they also drew our attention to the need for improved sustainability-related metrics, design processes, and tools. **(On RQ3) More sustainability skills and competencies are needed due to their rising importance in our daily life. Future educational programmes must be built upon three pillars: technology, soft skills, and sustainability knowledge.** As educators, we believe that IT and sustainability are disruptive forces in today's society and will increasingly converge [26]. Education programmes that give IT professionals strong soft- and sustainability skills will ease this process, both because it will set a basic common ground for collaboration among experts in both fields and because it will encourage every technical system to be built with essential sustainability characteristics in place. Many organisations have recognised the need for sustainability skills and competencies for their IT staff. As shown in Figure 6, the majority of our interviewed organisations agree that their IT workforce lacked sustainability knowledge and understanding (75%), and technical skills to implement sustainability (65%). Also, one-quarter of the interviewees pointed to the need for additional soft skills. Based on our observations, weaknesses regarding sustainability skills and competencies of the current IT workforce can lead to (1) difficulty in understanding sustainability in the business context, including sustainability strategies, approaches, and tools to support sustainable business models, (2) difficulty in translating business requirements into IT products and services with sustainability considerations, and (3) poor communication and soft skills, which are a classic problem with software engineers and programmers. Fig. 4: Reasons for the interest in sustainability. Fig. 5: Internal difficulties encountered by organisations. ## 6 Discussion An increasing number of Higher Education Institutions (HEIs) recognise the need to integrate sustainability into their existing courses. The integration process is challenging [27], but there is guidance available, such as the UK QAA/HEA guidance on incorporating Education for Sustainable Development into the curricula. However, it is hard to identify guidance on developing skills for IT courses. To help, we developed a classification of the topics that should be considered when designing a curriculum for future courses in sustainability. This classification is the result of the interview findings. The classification is not meant to be complete, but in our opinion, it points to the most important topics based on our years of research and teaching in the area. In the following, we describe the importance of each topic, how it relates to our interview data, and suggestions for how to consider it in future education. #### Core sustainability knowledge Sustainability must be seen as a prerequisite for any IT product or service. We observed in our interviews that there was no clear consensus about what was meant by this term. If IT professionals do not adequately understand the main concepts of sustainability, they are likely to: keep looking at it as a trade-off or a nice-to-have; struggle to collaborate with sustainability experts and might not fully see the value of it; shy away from public debate on technology and sustainability, thus not motivating policy changes; and see it as a complementary skill to IT. Here, education could play a fundamental role in providing knowledge of the basic principles, concepts, and models, particularly definitions and scoping of sustainability, clarification of persistent misconceptions and myths around sustainability facts, current statistics, fundamental concepts (e.g. dimensions of sustainability), as well as different models to explain sustainability in a certain context (the doughnut model [28], the nine planetary boundaries [29], and orders of impact [14]). #### Systems thinking Systems thinking is a fundamental perspective change for being able to grasp sustainability issues. The focus on the overall big picture with its relationships between the stakeholders and systems, and their dynamics and balances -- instead of traditional divide and conquer approaches that are taught in engineering -- allows for the necessary shift in looking at a situation. Systems thinking is a powerful tool and mindset shift that enables a holistic view of sustainability across its different dimensions. Through the study, we observed that there were just a few organisations possessing this kind of competency within their workforces; for example, Org. 16 champions the value of recycling by producing food for fish from waste. In that case, the organisation goes beyond its main operation's boundary to seek ways to positively affect the natural environment. As educators, we are aware that systems thinking is not easy to teach and practice since it requires a good deal of domain knowledge. 
Yet, the potential consequences of not taking this matter into consideration can be much greater, leading to cascading (potentially negative) impacts in different dimensions of sustainability. For example, if a country aims at building more and larger hyper-scale data centers addressing the demand for IT products and services, we must also consider the related societal implications for, among others, the (competing) demands for water for citizens, for renewable energy resources for cities, and for land for agriculture - hence going beyond the specific ICT sector. #### Soft skills Communication is an essential soft skill for the IT workforce since managers, peers, and clients have to exchange ideas on a daily basis. It is especially important to know how to communicate in a positive way, e.g. as in non-violent communication [30] or compassionate communication that seeks to understand the motivation of the communication partner before attempting to get the point across. We believe soft skills should play a much more significant role in software engineering education programmes than they do today. This is confirmed by 12 of our 28 interviewees, who emphasised the importance of soft skills within their IT workforce. Although the importance of soft skills is increasingly recognized in engineering degrees, educators often struggle to fit the topic properly into their classes. This difficulty may arise for several reasons. Sometimes, engineering teachers do not have the right knowledge and tools to effectively teach soft skills to students. On other occasions, they may feel that the time they have to cover the technical curricula is already too short, resulting in a weak integration of soft skills into their teaching (e.g. without giving the students the required time to reflect on and practice their soft skills). We suggest that designers of engineering programmes collaborate with colleagues from disciplines where soft skills play a more critical role, such as management, leadership, and social psychology, to find more effective ways to integrate them into the curricula. A complementary activity could be to invite experts from industry to teach soft skills to IT students. This kind of collaboration will not only give students more practical perspectives but can also strengthen relationships between academia and industry and convince students of the importance of such skills. Fig. 6: Skills and competencies missing in organisations’ IT workforce. #### Technical sustainability Technical sustainability refers to the capacity of the software system to endure in changing environments [5]. Software systems are sustainable if they can be cost-efficiently maintained and evolved over their entire life-cycle [31]. Technical sustainability can be achieved through software architecture, which lays the foundation for the successful implementation, maintenance, and evolution of sustainable software systems in a continually changing execution environment by providing a mechanism for reasoning about core software quality requirements that contribute to sustainability as a first-class, composite software quality. Addressing software sustainability at the architectural level allows inhibiting or enabling system quality attributes, reasoning about and managing change as the system evolves, predicting system qualities, as well as measuring architecturally significant requirements. 
However, the ability to determine sustainability as a core software quality of a software system from an architectural perspective remains an open research challenge, and existing architectural principles need to be adapted and novel architectural paradigms devised. In addition, there is a pressing need for new tooling to fit today's emergent and dynamic environments, where software systems are explicitly designed for continuous evolvability and adaptability without creating prohibitive architectural and technical debt. The skills needed to develop and estimate technical sustainability require training in software architecture, tools, and metrics to evaluate technical sustainability for the diversity of application domains. Whilst engineers may understand the importance of technical sustainability, metrics and dashboards of technical debt are less in evidence. For instance, Org. 18 mentioned that they use some metrics but lack automatic processes to evaluate the quality of systems. This is precisely one of the aspects in which technical sustainability can help to produce more sustainable systems. #### Building the business case for sustainability Sustainability and the SDGs provide enormous business opportunities to organisations [26]. For example, Org. 2 stated quite frankly _"Why we are so interested? [...] it's money."_ In the short- to mid-term, companies can benefit from creating new systems to exploit these business opportunities, or from making their own systems more sustainable or, at least, more environmentally aware, appealing to increasingly sustainability-conscious customers and business partners. For example, the Environmental, Social, and Governance (ESG) framework is now used by stakeholders to evaluate how organizations manage risks and opportunities that relate to sustainability issues. Therefore, IT professionals need to understand better what drives the businesses they work for, the opportunities that a focus on sustainability opens up to businesses in general, and the threats faced by businesses causing harm to the environment and society. Understanding this might help them to champion the idea of sustainability internally and justify it in terms of economic, environmental, and societal reasons. Furthermore, in the mid- and long-term, companies should aim to create a positive, or at least non-negative, impact on all sustainability dimensions, independently of the purpose of their systems. This evolution of companies provides many opportunities for educators. Traditional practice-based courses such as capstones and hackathons could take up the real-world challenges companies face and, as an outcome, provide possible solutions for them. The challenges could be approached on different levels, from setting the company values and objectives to the design of the actual internal (software) development processes. In general, educators may collaborate with companies to increase practitioners' awareness of the possible sustainability impacts of their products and activities. #### Sustainability Impacts and Measurements Assessing how companies improve their businesses by adopting sustainable practices requires specific Key Performance Indicators (KPIs) and metrics that can measure SDG targets or GRI indicators. While some domains (e.g. energy, transportation) have standardized metrics to evaluate the sustainability of a solution, others do not. 
Consequently, many organizations have difficulties estimating sustainability and need to define their own metrics and sustainability indicators. To do so, IT staff and managers need to be trained on current metrics/KPIs and on how to create their own, such that organizations can be clear about their sustainability achievements. There are approaches and tools for assessing, quantitatively and qualitatively, sustainability impacts. They include techniques like scenario mapping, future backcasting, the SDG impact assessment tool [32], sustainability assessment tools like the SAF Toolkit [33], [34], and sustainability awareness frameworks like SusAF [35]. Universities are being increasingly ranked according to sustainability (e.g. Times Higher Education Impact Rankings), so, as organisations adopt different sustainability goals, they need different kinds of sustainability metrics13. Footnote 13: [https://www.timeshighereduction.com/impactrankings](https://www.timeshighereduction.com/impactrankings) From our analysis of companies, we found evidence of this lack. For instance, Org. 2 highlights the lack of such metrics to understand the direct impacts of the product/system adopting sustainable solutions. In addition, Org. 27 observes that there is a lack of awareness of whether the solutions adopted are sustainable enough, partly because they do not have metrics. These and other technical challenges in calculating the carbon or energy footprint of sustainable solutions (e.g. Org. 4) are why organisations demand specific training on well-defined KPIs to estimate the impacts of different sustainability initiatives and justify the efforts and expenses. #### Values and Ethics Ultimately, values and ethics are fundamental concerns in making the world a fair and equitable place [36]. While ethics are culturally agreed-upon moral principles, "values make no moral judgment" [37, p. 113]. Currently, our society relies on software systems to communicate worldwide and operate utilities providing the basics for human life (e.g. complex medical machines, nuclear plants, and electrical grids). Such systems focus on functionality and quality attributes, such as usability, availability, and security, but they do not respect our core values, such as social justice, transparency, or diversity [38]. Sustainable systems, however, should be aligned with core human values. In this context, it is important that IT professionals are guided by a clear code of ethics, such as the ACM code of ethics14, to produce socially and environmentally responsible systems. We call for ethics to be a standard part of software engineering. Footnote 14: [https://ethics.acm.org/code-of-ethics/software-engineering-code/](https://ethics.acm.org/code-of-ethics/software-engineering-code/) Where values are concerned, user-centred design, user-experience design, and value-sensitive design tackle more than typical software qualities, but they are still far from addressing core human values [39]. Value-driven methods, known in HCI and information systems, can be used in business analysis and requirements engineering, but they offer little guidance for the later stages of development. Some emerging works take a human-values view (e.g., GenderMag [40] used to discover gender bias in software; or Alidoosti et al. [41] incorporating ethical values in software design), but more is still required to address human values systematically. The good news is that software development methods could be adapted to handle human values. 
For example, the Karlskrona Manifesto on Sustainability Design15 or participatory design techniques can be taught to ensure that end-user values are taken into account. Over 57% of the interviewed organizations reveal that their customers and stakeholders want to protect the environment and almost 30% are interested in focusing on sustainability due to moral concerns and social matters, resulting in the need for sustainability-value alignment of their business. Footnote 15: [https://www.sustainabilitydesign.org/karlskrona-manifesto/](https://www.sustainabilitydesign.org/karlskrona-manifesto/) #### Standards and Legal aspects The topic of legal aspects includes standards that may be required to be taken into account or that allow for certification, like the ISO 14000 family on the environment or the ISO 26000 standard on corporate social responsibility, or the LEED standard for buildings. It also includes issues of compliance and compliance assessment, the development of sustainability cases (think safety cases but for sustainability), and the impacts on sustainability of newer and upcoming laws, from privacy and information safety to software liability. One of the reasons for the slow adoption of sustainability initiatives in business has been the lack of mandatory requirements for action or reporting. Some interviewees mentioned specific regulations such as Waste Electrical and Electronic Equipment (WEEE), the General Data Protection Regulation (GDPR) and employment law (Org. 9, 11, 20, 23). There is a need for educators to be clear about the difference between what is required by law or regulation in different jurisdictions, such as EU Green Taxonomy reporting or employment protection, and what businesses may voluntarily choose to do, such as using the Global Reporting Initiative framework for sustainability reporting. #### Advocacy and lobbying This topic raises the question of how impartial and neutral researchers and educators should be versus how involved they should be in advocacy and lobbying. We are in favour of taking a stance while allowing discussion space for all perspectives on an issue. One should not wait for regulation to start acting on sustainability. Regulation is often a late follower of social trends and is highly influenced by them. The last decades have witnessed several social and business movements towards sustainability, such as Corporate Social Responsibility, Fairtrade, Slow Food, Greenpeace, Natural Capitalism, B-corporations, etc. Education can play an important role in promoting and shaping movements such as the above. Universities should train future IT professionals to combine their technical and sustainability expertise to become strong advocates for sustainability. Therefore, curricula should also include tools for effective and positive advocacy in organizations, media and legislation, as well as lobbying. The importance of offering expertise to policymakers has been highlighted by one of our interviewees (Org. 9), who said: _"a lot of policymakers don't have a clue on digitalization matters (sic.), and because of that, they don't know what they're doing while writing the law."_ ## 7 Threats to validity We discuss our reasoning concerning the validity and limitations of this study by following the scheme described in [42]. **Construct validity.** This aspect is related to whether during the interviews we asked the right questions according to our investigation objectives. 
To mitigate this threat, we formulated the questions by leveraging the background knowledge of the involved researchers, who have at least ten years of experience in this type of research in software engineering in general and at least five years in software engineering related to sustainability. **Internal validity.** This is related to how we executed the study internally and analysed the results to avoid unknown factors that may influence the study's outcome. To mitigate this threat, we have taken the following actions. First, to improve the instrument (i.e., interview guide) used in the study, we spent time discussing the interview questions to ensure they covered our stated research questions and to avoid leading questions. Our interviewees participated on a voluntary basis, and confidentiality was emphasised to encourage them to respond to the interview questions in the most truthful way. Second, during the data analysis, we adopted a procedure consisting of two steps. In the first step, each researcher who had conducted an interview session paired with another researcher, and both performed data coding for the whole transcript obtained. After that, all the researchers involved in this study were divided into three groups, each being responsible for one research question stated in Section 3.1 and having at least three members. All group members responsible for one research question validated the coded data related to their section in all the transcripts. At this stage, some re-codings happened in collaboration with the original coders to extract more details from the data. **External validity.** This is concerned with the extent to which the conclusions of this study can be generalised. There are a few limitations associated with this study. First, although we achieved quite a spread of geography (mainly in Europe), company sizes, and business domains, the interviewed organisations are not representative of the entire European IT economy. Had we interviewed other organisations, it is likely that the results would have differed to some extent. Although we asked the companies about their interest in sustainability, it is hard to find a common pattern and reasons for the variation among organisations; thus, we cannot establish a connection between particular types of companies and the kind of interest in sustainability they have. Second, our results are subject not only to the inherent limitations of the codebook and the coding process but also to the biases of those interviewed and to the ontological uncertainty of the future. In particular, the frequency with which a code has been mentioned, while possibly representative of the perceived relative relevance within the industry nowadays, may not represent the true importance of each topic, which might only become apparent in the future. **Reliability.** This aspect concerns to what extent the data and the analysis depend on specific researchers. Since each researcher of this study was responsible for conducting 1-3 interviews/focus group interviews, we prepared a presentation containing all the interview questions and showed it during the meetings with our interviewees to ensure all the discussions flowed consistently. In addition, we supply the codebook as supplementary material for validation purposes, which is helpful for replication. ## 8 Related Work A number of studies have investigated how software engineering professionals understand sustainability. 
For example, Groher and Weinrich [43] report on a qualitative interview study with ten interviews in nine organisations in Austria. They aimed to understand how practitioners understood sustainability and its importance, the factors influencing sustainability in software development, sustainability-related deficiencies in their projects, and how they improve sustainability in such projects. The results show that while practitioners find the topic of sustainability important, they seem to have a narrow view of sustainability, focusing mainly on technical attributes such as maintainability and extensibility. In addition, practitioners were concerned with organisational and economic issues, but the environmental dimension was not addressed. Our study differs from this one in several aspects. First, we interviewed 28 organisations spread across 9 countries, instead of just one country, which potentially provides a broader and less culturally biased view of sustainability in the ICT industry. Most importantly, our interviewees had different profiles. While Groher and Weinrich interviewed technical leaders of ICT projects, we talked to senior management and sustainability experts within the companies. That can be observed in the different perceptions of sustainability. In Groher and Weinrich's work, most interviewees related sustainability to maintainability, extensibility, product development, and long-lived systems [43], while in our study, sustainability was more broadly understood, with the different dimensions mentioned. A remarkable difference is that the environment was very rarely mentioned in the previous study, while it was one of the main concerns shown in ours. However, both studies coincide in that the economic benefit is the greatest motivation for these companies. Both studies also looked into difficulties or deficiencies in sustainability. In Groher and Weinrich's, participants mainly pointed to a "lack of effective means for communication and knowledge sharing" and suggested strategies such as "knowledge distribution, avoiding specialists, and building teams that can work across products". Our study similarly identified a lack of understanding of sustainability concepts and goals, yet it highlighted the trade-off between short-term financial profitability and long-term sustainability goals as a major difficulty. Our study also pointed to external difficulties, such as economic barriers and inadequate policies, which were not mentioned in the previous study. De Souza et al. [44] discuss software sustainability as perceived by nine software developers from a university in the UK, and suggest a set of recommendations on how to improve the sustainability of software. They used short semi-structured interviews, each lasting an average of about 10 minutes. The main result is the distinction between "Intrinsic Sustainability", referring to intrinsic characteristics software should have (e.g., be documented, be tested, or be modular), and "Extrinsic Sustainability", referring to the environment in which the software is developed or used (e.g., be open, be actively maintained, or be infrastructure-independent). Building on this distinction, the authors proposed a set of recommendations as good practices for software development and maintenance that directly emerge from the characteristics interviewees associated with 'intrinsic' or 'extrinsic' sustainability but remain exclusively in the realm of technical sustainability. Our study differs from De Souza et al.'s [44] significantly. 
They interviewed software developers within a single academic organization. That meant that their participants were mostly experienced with research projects, which are of a very different nature from those of our study. Unsurprisingly, participants' views in that study were much more related to technical sustainability than ours. Interestingly, their questions were open and neutral, not really biasing the answers to such limited views. Finally, the coverage of the research questions as well as the depth of the interviews was very different, with ours specifically asking about sustainability-related goals, barriers, and skills, and theirs focusing on the relation of sustainability with software systems. This difference in depth can also be seen in the lengths of the interviews, which typically lasted around 10 minutes in the previous study and 1-2 hours in ours. Karita et al. [45] report on a study performed with ninety-nine companies from the software industry in Brazil to investigate their awareness of four sustainability dimensions (environmental, economic, social and technical). The results indicate that sustainability in the context of Software Engineering is a new subject for practitioners, that they find the topic relevant, and that sustainability should be treated as a quality attribute. In contrast, our study goes further and, more concretely, targets companies with such awareness to retrieve their actual interests, difficulties and achievements, as well as the skills they have in-house and those that they miss. In Chitchyan et al. [46], thirteen requirements engineers from eight different countries (Austria, Brazil, Germany, Spain, Switzerland, Turkey, the UK, and the USA) have been interviewed. The study investigated the perception of engineering practitioners towards sustainability as well as obstacles and mitigation strategies regarding the application of sustainable design principles in their engineering work. The study shows that, on an individual level, perceptions of sustainability tend to be narrow, that organisations are not aware of the potential achievements and benefits coming along with sustainable design, and that the standards and norms in Software Engineering are not conducive to sustainability. In contrast to our study, the work focuses on what hampers the adoption of sustainable design principles based on [5] in daily work practices, and not on the broader questions of industry sustainability-related interests and needs, their planned achievements, and the thus-required skills. Other published work is more loosely related to ours, with the following worth highlighting. Betz, Lammert, and Porras [47] investigated the role of software engineers' perceptions; more specifically, they investigated the self-attribution of software engineers and whether they take sustainability issues into account in their daily work. Their results suggest that software engineers perceive that they are insufficiently involved in the design process and that they do not sufficiently take on responsibility for the software and its sustainability impacts. The authors observed an evolution in terms of communication with interdisciplinary experts, yet their software engineers still see themselves as a "purely executive force" [47], who shy away from responsibility regarding sustainability. This perception differs greatly from the perceptions in our study, which do recognize the need for sustainability skills and competencies for IT staff. Additionally, a domain-specific study conducted by Kasurinen et al. 
[48] investigated -- among other points, such as the development processes used -- the extent to which game developers are concerned about sustainability issues and Green IT. The results show that their interviewed gaming companies were more unstructured than general software development ones, not really incorporating sustainability in their daily work practices. Yet, our studies coincide with regard to the lack of a broader understanding of sustainability by IT professionals. In the related field of Information Systems (IS), Cooper and Molla [49] investigated the notion of the "absorptive capacity" of IS to enable environmental sustainability and how organisations can enable IS changes to address environmental issues. They conducted a survey with 148 IS senior managers and provided different taxonomies to acquire knowledge about sustainable IS and to what extent sustainable IS technologies are assimilated by organisations. The role of "absorptive capacity" is also discussed in [50], where the authors provide a systematic literature review on competencies for environmental sustainability and managerial skills required for organisations to transform knowledge into environmental capabilities. The work suggests a connection between environmental competencies and capabilities and provides a taxonomy relating management and environmental competencies. ## 9 Conclusions and future work Our study has uncovered how sustainability is viewed and practised in 28 organisations from nine countries. The findings of this work include (i) how sustainability is of interest to these organisations, (ii) sustainability goals they want to achieve, (iii) difficulties they have encountered so far, (iv) skills and competencies needed to achieve the established goals, and (v) current practices to fill the perceived sustainability skill gap. Identifying those current practices, and especially the gaps, gives us an indication of possible improvements to current university education programmes with respect to sustainability for IT and related fields. This study represents the first step to improving the computing curricula to better meet the demands of industry. To accelerate that process, we have proposed essential topics relevant to sustainability in IT that should be taken into account when developing the curricula, based on our years of experience in teaching courses in sustainability for IT students. We also highlighted several open research opportunities, directions, and challenges: First, while significant, the organisations we interviewed provide only a partial geographical perspective; our analysis should be performed globally. We thus plan to conduct a survey on a global scale to obtain a more comprehensive picture and to be able to conduct quantitative analyses, for example, regarding the variation of sustainability among organisations. As it is easier to include more companies in a survey than in an interview series, this will also allow the mapping of individual business domains and different sustainability perspectives. Finally, the ultimate goal should be the rapid development of software engineering and computer science curricula which include sustainability concepts at their very core. We presented our first ideas in Section 6. These curricula should certainly be holistic and not aim exclusively at the skills needed by industry. However, given the growing sustainability interest of the industry and its immense transformational power, the curricula should definitely take its sustainability needs into consideration. 
## Acknowledgments The authors would like to thank all the interviewees who took part in the study.
Achieving the UN SDGs requires an appropriate level of awareness and action to address sustainability challenges. Software systems play a key role in advancing towards these goals, and their development must be supported so that sustainable IT services can be provided to citizens. A growing number of academic institutions are introducing sustainability education into engineering and computer science curricula; however, a comprehensive study of the competencies and skills required of IT professionals has not yet been carried out. This study aims to identify the industry's sustainability needs for education and training from the perspective of software engineers. We conducted interviews and focus groups with experts from 28 organisations with IT departments across nine countries, deepening our understanding of their interests in sustainability, their goals and achievements, and the skills and competencies needed to attain them. Our findings show that organisations are paying attention to sustainability and, ideally,
2309.06392
A Fast Algorithm for Moderating Critical Nodes via Edge Removal
Critical nodes in networks are extremely vulnerable to malicious attacks to trigger negative cascading events such as the spread of misinformation and diseases. Therefore, effective moderation of critical nodes is very vital for mitigating the potential damages caused by such malicious diffusions. The current moderation methods are computationally expensive. Furthermore, they disregard the fundamental metric of information centrality, which measures the dissemination power of nodes. We investigate the problem of removing $k$ edges from a network to minimize the information centrality of a target node $v$ while preserving the network's connectivity. We prove that this problem is computationally challenging: it is NP-complete and its objective function is not supermodular. However, we propose three approximation greedy algorithms using novel techniques such as random walk-based Schur complement approximation and fast sum estimation. One of our algorithms runs in nearly linear time in the number of edges. To complement our theoretical analysis, we conduct a comprehensive set of experiments on synthetic and real networks with over one million nodes. Across various settings, the experimental results illustrate the effectiveness and efficiency of our proposed algorithms.
Changan Liu, Xiaotian Zhou, Ahad N. Zehmakan, Zhongzhi Zhang
2023-09-09T13:54:34
http://arxiv.org/abs/2309.06392v1
# A Fast Algorithm for Moderating Critical Nodes via Edge Removal ###### Abstract Critical nodes in networks are extremely vulnerable to malicious attacks to trigger negative cascading events such as the spread of misinformation and diseases. Therefore, effective moderation of critical nodes is very vital for mitigating the potential damages caused by such malicious diffusions. The current moderation methods are computationally expensive. Furthermore, they disregard the fundamental metric of information centrality, which measures the dissemination power of nodes. We investigate the problem of removing \(k\) edges from a network to minimize the information centrality of a target node \(v\) while preserving the network's connectivity. We prove that this problem is computationally challenging: it is NP-complete and its objective function is not supermodular. However, we propose three approximation greedy algorithms using novel techniques such as random walk-based Schur complement approximation and fast sum estimation. One of our algorithms runs in nearly linear time in the number of edges. To complement our theoretical analysis, we conduct a comprehensive set of experiments on synthetic and real networks with over one million nodes. Across various settings, the experimental results illustrate the effectiveness and efficiency of our proposed algorithms. Social networks, critical nodes, information diffusion, edge removal, combinatorial optimization. ## 1 Introduction A broad range of dynamic processes on graphs has been analyzed to attain a more profound comprehension of diverse real-world phenomena, such as the spread of misinformation on online social media platforms, the proliferation of computer viruses over the internet, and the dissemination of diseases among individuals [1, 2, 3, 4]. As a result, there has been a burgeoning interest in investigating the influence of the underlying graph structure on various characteristics of these dynamic processes [5, 6, 7, 8]. Specifically, a considerable amount of attention has been focused on comprehending to what extent certain objectives can be achieved through the manipulation of the network structure. Examples of such manipulation strategies comprise eliminating nodes (such as blocking an account on an online social platform or administering a vaccine to an individual), adding edges (such as link recommendation in online social networks or constructing a physical link between two routers), or removing edges (such as restricting two individuals from meeting through quarantine measures or not exposing the posts from one user to another in an online platform) [7, 9, 10, 11]. Furthermore, the intended objective can vary widely, ranging from minimizing the number of nodes infected by a virus to maximizing the fairness of workload among routers across different internet service providers, or reducing the proportion of users exposed to misinformation [2, 4, 11, 12, 13]. Real-world networks exhibit a heterogeneous nature, with critical nodes being far more essential to network structure and function [7, 14], such as information diffusion and system stability [7, 15, 16]. This confers an unbridled degree of power to such nodes, which could potentially result in significant financial, sociological, and political damage. For instance, subsequent to the hack of The Associated Press Twitter account, a false report was disseminated claiming that "Breaking: Two Explosions in the White House and Barack Obama is injured". 
This rumor resulted in losses of 10 billion USD within a few hours and caused the United States stock market to crash within minutes [17]. As another example, infectious diseases cause 10 million deaths each year globally, accounting for 23% of the total disease-related deaths, where critical nodes play a massive role in the extent of the spread [18]. Consequently, there has been an increasing interest in shedding light on managing and alleviating the impact of a set of target nodes, especially influential nodes [19, 20, 21, 22]. Another application space where the problem of mitigating some target nodes has gained substantial attention is the realm of privacy protection in networks. In this setup, the goal is to protect the privacy of users or conceal critical entities in the network by implementing structural anonymization [23, 24]. Please refer to Sections 5.1 and 5.2 for more details on these topics where appropriate. Two commonly employed graph operations to attain the aforementioned objectives are node and edge removal. Edge removal has garnered greater attention recently [25, 26, 22], since it is less intrusive (i.e., disrupts the original functionality and flow of the network less aggressively) and provides controlling power at a more granular level (note that usually removing a node is equivalent to removing all its adjacent edges). In this work, we focus on edge removal as well. Previous studies [23, 24, 27, 28] have investigated the problem of removing a fixed number of edges to achieve an objective with respect to a subset of target nodes. The objective could vary from minimizing the spreading power of the target nodes under a specific information diffusion model such as the Independent Cascade (with the goal of halting the propagation of a piece of misinformation) [12, 21, 22, 26, 29] to minimizing the centrality of the target nodes measured by degree [23] or closeness [24] (with the aim of concealment). Additionally, numerous algorithms, predominantly heuristics, have been put forth [12, 29]. The existing works have two major limitations. Firstly, despite a plethora of centrality indices proposed in the literature [14], these prior works do not consider information centrality [30, 31], in spite of its obvious advantages. For example, information centrality captures the way node-to-node transmission takes place (especially on social networks) [32] and possesses higher discrimination power than others [9, 33]. Information centrality has also been applied to various fields, such as estimation from relative measurements [34] and leader selection for noisy consensus [35]. Secondly, they are computationally relatively expensive, which renders them impractical in various real-world scenarios. Suppose a misinformation detection algorithm has spotted a potential rumor (such as the one above about the White House) and the goal is to moderate the spreading power of the initiating node by temporarily removing some edges (i.e., not exposing the content from some users to some others), then having a very fast mitigation algorithm in place is crucial. (Note that precautions are not usually a viable solution here, since these graphs undergo constant reformulation). Another example of this would be an internet service provider seeking to control the network traffic in response to a malfunctioning or malevolent router. 
In light of the above limitations, our study aims to fill this gap by devising an objective function capturing information centrality and providing an algorithm that can handle networks with millions of nodes. In our formulation of the problem of moderating critical nodes, the objective is to remove \(k\) edges to minimize the information centrality of a target node, while preserving network connectivity. (The constraint of network connectivity ensures the preservation of the network's functionality [7, 16], a consideration also addressed in other works investigating problems related to edge removal [24, 28, 36].) We prove that this problem is NP-complete. Furthermore, while its objective function is monotonically decreasing, it is not supermodular, posing difficulties in solving it using traditional greedy algorithms. However, we still adopt the standard greedy algorithm in our setup since it has been frequently observed to deliver satisfactory results for many non-supermodular problems [37, 38]. Despite its simplicity and efficacy, it requires the execution of matrix inversion operations, incurring a prohibitive computational cost and rendering it infeasible for large networks. As the first step towards a faster algorithm, we use the random walk-based approximate Schur complement method [39] to present a faster algorithm, called ApproxiSC. To speed up the computation even further, we also leverage the sum estimation method [40, 41], which allows us to provide the algorithm FastICM, which runs in nearly linear time in the number of edges, as our main contribution. The rest of our paper proceeds as follows. We first introduce some necessary preliminaries related to our work in Section 2. Then, we provide an exact formulation of our problem in Section 3 and give an overview of our main theoretical and experimental findings and the techniques used in Section 4. We discuss related works in Section 5. Then, we study the computational complexity of our problem and prove related properties of the corresponding objective function in Section 6. In Section 7, we present the deterministic greedy algorithm, followed by the fast greedy algorithms in Section 8. We report our performance experiments evaluating the efficiency and effectiveness of our algorithms in Section 9 and conclude the paper in Section 10. ## 2 Preliminaries In this section, we introduce some useful notations and tools to facilitate the description of our problem and algorithms. ### _Notations_ We use normal lowercase letters like \(a,b,c\) to denote scalars in \(\mathbb{R}\), normal uppercase letters like \(A,B,C\) to denote sets, bold lowercase letters like \(\mathbf{a},\mathbf{b},\mathbf{c}\) to denote vectors, and bold uppercase letters like \(\mathbf{A},\mathbf{B},\mathbf{C}\) to denote matrices. Let \(\mathbf{J}\) be the matrix of appropriate dimensions with all entries being ones. We use \(\mathbf{A}_{[S,F]}\) to denote the submatrix of \(\mathbf{A}\) with row indices in \(S\) and column indices in \(F\). We write \(\mathbf{A}_{ij}\) to denote the entry at row \(i\) and column \(j\) of \(\mathbf{A}\) and \(\mathbf{a}_{i}\) to denote the \(i\)-th element of vector \(\mathbf{a}\). We use \(\mathbf{A}_{-T}\) to denote the submatrix obtained from \(\mathbf{A}\) by deleting rows and columns corresponding to elements in set \(T\), and use \(\mathbf{a}_{-T}\) to denote the vector obtained from \(\mathbf{a}\) by deleting elements in set \(T\). 
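As an illustration of this indexing notation (our own toy example, not part of the paper), the sketch below expresses \(\mathbf{A}_{[S,F]}\) and \(\mathbf{A}_{-T}\) with NumPy; all variable names are assumptions made for the example.

```python
# Illustrative only (not from the paper): the submatrix notations A_{[S,F]} and A_{-T}
# expressed with NumPy indexing on a toy matrix.
import numpy as np

A = np.arange(25, dtype=float).reshape(5, 5)   # a toy 5x5 matrix
S, F, T = [0, 2], [1, 3, 4], [2, 4]            # example index sets

A_SF = A[np.ix_(S, F)]                         # A_{[S,F]}: rows in S, columns in F
A_minus_T = np.delete(np.delete(A, T, axis=0), T, axis=1)  # A_{-T}: drop rows/cols in T

print(A_SF.shape)        # (2, 3)
print(A_minus_T.shape)   # (3, 3)
```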
An \(n\times n\) matrix \(\mathbf{A}\) is positive semi-definite if \(\mathbf{x}^{\top}\mathbf{A}\mathbf{x}\geq 0\) holds for all \(\mathbf{x}\in\mathbb{R}^{n}\). For two positive semi-definite matrices \(\mathbf{A}\) and \(\mathbf{B}\), we use \(\mathbf{B}\preceq\mathbf{A}\) to denote that matrix \(\mathbf{A}-\mathbf{B}\) is a positive semi-definite matrix. Below, we introduce the notion of \(\epsilon\)-approximation. **Definition 2.1**.: _Given two positive semi-definite matrices \(\mathbf{A}\) and \(\mathbf{B}\) and a real number \(\epsilon\in(0,1)\), we say that \(\mathbf{B}\) is an \(\epsilon\)-approximation of \(\mathbf{A}\) (abbr. \(\mathbf{B}\approx_{\epsilon}\mathbf{A}\)) if_ \[(1-\epsilon)\mathbf{A}\preceq\mathbf{B}\preceq(1+\epsilon)\mathbf{A}.\] If \(\mathbf{A}\) and \(\mathbf{B}\) degenerate to positive scalars \(a,b>0\), then \(b\) is called an \(\epsilon\)-approximation of \(a\) (abbr. \(b\approx_{\epsilon}a\)) if \((1-\epsilon)\,a\leq b\leq(1+\epsilon)\,a\). ### _Graphs and Related Matrices_ Consider a connected undirected graph \(\mathcal{G}=(V,E)\) where \(V\) is the set of nodes and \(E\subseteq V\times V\) is the set of edges. Let \(n=|V|\) and \(m=|E|\) denote the number of nodes and the number of edges, respectively. The Laplacian matrix of \(\mathcal{G}\) is the symmetric matrix \(\mathbf{L}=\mathbf{D}-\mathbf{A}\), where \(\mathbf{A}\) is the adjacency matrix whose entry \(\mathbf{A}_{ij}=1\) if node \(i\) and node \(j\) are adjacent, and \(\mathbf{A}_{ij}=0\) otherwise, and \(\mathbf{D}\) is the degree diagonal matrix \(\mathbf{D}=\text{diag}(\mathbf{d}_{1},\cdots,\mathbf{d}_{n})\) where \(\mathbf{d}_{i}\) is the degree of node \(i\). We write \(\mathbf{e}_{i}\) to denote the \(i\)-th standard basis vector. We fix an arbitrary orientation for all edges in \(\mathcal{G}\), and for each edge \(e=(u,v)\in E\), we define \(\mathbf{b}_{e}=\mathbf{b}_{uv}=\mathbf{e}_{u}-\mathbf{e}_{v}\), where \(u\) and \(v\) are the head and tail of \(e\), respectively. Then, \(\mathbf{L}\) can be rewritten as \(\mathbf{L}=\sum_{e\in E}\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\). Matrix \(\mathbf{L}\) is singular and positive semidefinite with its Moore-Penrose pseudoinverse being \(\mathbf{L}^{\dagger}=\left(\mathbf{L}+\frac{1}{n}\mathbf{J}\right)^{-1}-\frac{1}{n}\mathbf{J}\). The transition matrix is \(\mathbf{P}=\mathbf{D}^{-1}\mathbf{A}\), which is a row-stochastic matrix. For any non-empty node sets \(F\subset V\) and \(T=V\backslash F\), we can partition the Laplacian matrix \(\mathbf{L}\) into 4 blocks: \[\mathbf{L}:=\left[\begin{array}{cc}\mathbf{L}_{[F,F]}&\mathbf{L}_{[F,T]}\\ \mathbf{L}_{[T,F]}&\mathbf{L}_{[T,T]}\end{array}\right]\] Then, the _Schur complement_ of graph \(\mathcal{G}\) onto node set \(T\), denoted by \(\mathcal{S}(T)\), is given in closed form as \[\mathcal{S}(T)=\mathbf{L}_{[T,T]}-\mathbf{L}_{[T,F]}\mathbf{L}_{[F,F]}^{-1}\mathbf{L}_{[F,T]}.\] \(\mathcal{S}(T)\) is a Laplacian matrix of a graph with node set \(T\), and we use \(\mathcal{G}(\mathcal{S}(T))\) to denote the corresponding graph of \(\mathcal{S}(T)\). ### _Information Centrality_ Given a graph \(\mathcal{G}=(V,E)\) and two vertices \(x,y\in V\), the effective resistance \(\mathcal{R}_{xy}^{\mathcal{G}}\) between nodes \(x\) and \(y\), which is a form of Euclidean distance [42], is defined as \(\mathcal{R}_{xy}^{\mathcal{G}}=\mathbf{b}_{xy}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{xy}\). 
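To make the preceding definitions concrete, the following minimal sketch (our own illustration, not the authors' implementation) builds the Laplacian of a small graph, evaluates the effective resistance \(\mathcal{R}_{xy}=\mathbf{b}_{xy}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{xy}\) via the pseudoinverse, and forms the Schur complement \(\mathcal{S}(T)\) exactly as defined above; the helper names are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): Laplacian, pseudoinverse,
# effective resistance R_xy = b_xy^T L^+ b_xy, and Schur complement S(T).
import numpy as np
import networkx as nx

G = nx.path_graph(5)                        # toy connected graph on nodes 0..4
n = G.number_of_nodes()
L = nx.laplacian_matrix(G).toarray().astype(float)
L_pinv = np.linalg.pinv(L)                  # Moore-Penrose pseudoinverse L^+

def effective_resistance(x, y):
    b = np.zeros(n)
    b[x], b[y] = 1.0, -1.0                  # b_xy = e_x - e_y
    return b @ L_pinv @ b

def schur_complement(T):
    """S(T) = L_[T,T] - L_[T,F] L_[F,F]^{-1} L_[F,T], with F = V \\ T."""
    T = sorted(T)
    F = sorted(set(range(n)) - set(T))
    return L[np.ix_(T, T)] - L[np.ix_(T, F)] @ np.linalg.solve(L[np.ix_(F, F)], L[np.ix_(F, T)])

print(effective_resistance(0, 4))           # ~4.0: four unit resistors in series
print(schur_complement({0, 2, 4}))          # a 3x3 Laplacian on the retained nodes
```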
We refer to the maximum value of pairwise effective resistance in a graph as the effective resistance diameter, and denote it by \(\phi\). Based on the physical definition of the effective resistance [42], \(\phi\) is less than the diameter of the graph, which is often small in real-life networks [43]. For any node set \(T\subset V\), the _Schur complement_ onto \(T\) can be viewed as a vertex sparsifier that preserves pairwise effective resistance [39, 44], which means \[\mathcal{R}_{xy}^{\mathcal{G}}=\mathcal{R}_{xy}^{\mathcal{G}(S(T))}, \tag{1}\] holds for any pair of nodes \(x,y\in T\). For a network \(\mathcal{G}=(V,E)\) and a node \(v\in V\), we use \(\mathcal{R}_{v}^{\mathcal{G}}\) to denote the sum of effective resistances between \(v\) and all nodes in \(V\backslash\{v\}\) (we will refer to \(\mathcal{R}_{v}^{\mathcal{G}}\) as the resistance distance of node \(v\) throughout the paper), i.e., \(\mathcal{R}_{v}^{\mathcal{G}}=\sum_{u\in V\backslash\{v\}}\mathcal{R}_{uv}^{ \mathcal{G}},\) which can be restated [45] in a matrix form as \[\mathcal{R}_{v}^{\mathcal{G}}=n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{ \dagger}\right). \tag{2}\] The information centrality \(\mathcal{I}_{v}^{\mathcal{G}}\) correlates to the resistance distance \(\mathcal{R}_{v}^{\mathcal{G}}\)[30, 32], and can be expressed by: \[\mathcal{I}_{v}^{\mathcal{G}}=\frac{n}{\mathcal{R}_{v}^{\mathcal{G}}}=\frac{n} {n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)} \tag{3}\] which is defined on connected networks. We will remove the superscripts of \(\mathcal{R}_{xy}^{\mathcal{G}}\), \(\mathcal{R}_{v}^{\mathcal{G}}\) and \(\mathcal{I}_{v}^{\mathcal{G}}\) when \(\mathcal{G}\) is clear from the context. ### _Supermodular Optimization_ Let \(X\) be a finite set, and \(2^{X}\) be the set of all subsets of \(X\). Let \(f:2^{X}\rightarrow\mathbb{R}\) be a set function on \(X\). Then, \(f\) is called monotone decreasing if for any subsets \(S\subset H\subset X\), \(f(S)>f(H)\) holds. Furthermore, we say function \(f\) is supermodular if for any subsets \(S\subset H\subset X\) and any element \(a\in X\backslash H\), it satisfies \(f(S\cup\{a\})-f(S)\leq f(H\cup\{a\})-f(H)\). The standard greedy algorithm has proven to be an effective solution for the cardinality-constrained set function problem in supermodular optimization, with a guaranteed \((1-1/e)\) approximation ratio. However, many crucial set functions do not satisfy the supermodularity requirement. Nevertheless, the greedy algorithm still frequently produces desirable results for a broad range of non-supermodular applications [37, 38]. ## 3 Problem Formulation In this section, we formulate the problem of minimizing the information centrality of a target node by removing edges. Consider a connected unweighted undirected network \(\mathcal{G}=(V,E)\) and a target node \(v\). For any edge set \(P\subset E\), define \(\mathcal{G}\setminus P=(V,E\backslash P)\) as the remaining graph resulted by removing edges in edge set \(P\). Then we define the set function of the information centrality, \(\mathcal{I}_{v}(P)=\mathcal{I}_{v}(\mathcal{G}\setminus P)\). Similarly, we can define the set function of the sum of the effective resistance \(\mathcal{R}_{v}(P)\). These two definitions are valid whenever the removal of edges in set \(P\) maintains the connectivity of the network. 
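The sketch below (again only illustrative, not the paper's code) evaluates the resistance distance of Eq. (2) and the information centrality of Eq. (3) on a small benchmark graph, and cross-checks the matrix form against the pairwise sum of effective resistances; the function names are ours.

```python
# Illustrative sketch (not the paper's code): resistance distance via Eq. (2),
# information centrality via Eq. (3), and a cross-check against the pairwise sum.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                  # small connected benchmark graph, nodes 0..33
n = G.number_of_nodes()
L_pinv = np.linalg.pinv(nx.laplacian_matrix(G).toarray().astype(float))

def resistance_distance(v):
    return n * L_pinv[v, v] + np.trace(L_pinv)          # Eq. (2)

def information_centrality(v):
    return n / resistance_distance(v)                   # Eq. (3)

def resistance_distance_pairwise(v):
    total = 0.0
    for u in range(n):
        if u != v:
            b = np.zeros(n)
            b[u], b[v] = 1.0, -1.0
            total += b @ L_pinv @ b                      # R_uv = b_uv^T L^+ b_uv
    return total

v = 0
assert np.isclose(resistance_distance(v), resistance_distance_pairwise(v))
print(information_centrality(v))
```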
Rayleigh's monotonicity law [42] asserts that the effective resistance between any pair of nodes will increase when an edge is removed; hence, the resistance distance \(\mathcal{R}_{v}(P)\) is monotonically increasing, and the information centrality \(\mathcal{I}_{v}(P)\) is monotonically decreasing. Then the following problem arises naturally: how to optimally remove a subset \(P\subset E\) from the network, subject to a cardinality constraint \(k\) on the number of removed edges, so that \(\mathcal{I}_{v}\) is minimized while the connectivity of the graph is preserved? Mathematically, the information centrality minimization problem can be stated as follows.

**Problem 1**.: _(Information Centrality Minimization, InforCenMin) Given a connected undirected network \(\mathcal{G}=(V,E)\), a predefined target node \(v\), and an integer \(k\), we aim to find the edge set \(P\subset E\) with \(|P|=k\), so that the information centrality \(\mathcal{I}_{v}(P)\) is minimized while simultaneously retaining the connectivity of the network. This set optimization problem can be formulated as:_

\[P^{*}=\operatorname*{arg\,min}_{P\subset E,|P|=k,\mathcal{G}\setminus P\text{ is connected}}\mathcal{I}_{v}(P). \tag{4}\]

Simply removing edges incident to the target node may seem an effective way to reduce its information centrality. However, removing nonadjacent edges can provide greater benefits in some cases. For example, consider the Dolphin network [46] with 62 nodes and 159 edges, and take the green node in Fig. 1 as the target. We find that none of the top-10 edges (colored in red) leading to the largest reduction in the target node's information centrality are adjacent to it. This suggests that the candidate edges for removal should not be restricted to those adjacent to the target node.

## 4 Our Contribution

We investigate the InforCenMin problem, both theoretically and experimentally, where the goal is to minimize the information centrality of a target node given the possibility to remove \(k\) edges while keeping the underlying graph connected. This is a natural and intuitive formulation of the problem of moderating critical nodes in a network, which in particular takes the important parameter of information centrality into account. We first prove that the problem is NP-complete, building on a reduction from the Hamiltonian cycle problem [47]. Furthermore, we show that while the objective function of our problem is monotonically decreasing, it does not enjoy the supermodularity property, by providing an explicit counterexample. As a result, the traditional greedy approaches do not directly provide us with any theoretical guarantees. However, since such greedy approaches have proven to be a useful base for devising effective and efficient algorithms in the past [37, 38] (even when supermodularity does not hold), we rely on them as a starting point too. We first propose a simple algorithm where edges are deleted following the classic greedy approach. This algorithm, called ExactSM, runs in \(O(n^{3})\), which makes it impractical for very large networks. As a first step towards higher efficiency, we use the random walk-based approximate Schur complement method [39, 48] to design a faster algorithm. In this algorithm, called ApproxiSC, after each edge removal, the new value of the resistance distance and the connectivity status can be determined very quickly by carefully modifying the set of random walks.
To further speed up the computation, as the next step we also leverage the sum estimation method [40, 41], which allows us to provide the algorithm FastICM that runs in nearly linear time in the number of edges. The sum estimation method permits us to approximate the resistance distance of the target node by sampling limited pairs of effective resistances. Our theoretical analyses confirm that the combination of the above techniques is well-suited to our problem. Specifically, the absolute error between the approximate resistance distance upon the removal of any edge and the corresponding exact value is at most \(\alpha n\), for a small error parameter \(\alpha\). Besides our theoretical analyses, we conduct a comprehensive set of experiments on real-world networks from Network Repository [46] and SNAP [49] and on synthetic graph models, namely Barabasi-Albert (BA) [50] and Watts-Strogatz (WS) [43]. We compare our algorithms against several other algorithms and observe that our algorithms significantly outperform others while producing results very close to optimal solution. Furthermore, our linear time FastICM algorithm enjoys an extremely fast run time in practice as well. In particular, it completes the task for networks with more than one million nodes in a few hours on a standard 32G-Linux box. Therefore, our algorithms not only allow a rigorous theoretical analysis but also outperform other algorithms in practice, in both aspects of effectiveness and efficiency. ## 5 Related Work In this section, we review the literature related to ours, including minimizing spread of misinformation in social networks, privacy protection of networks, edge removal strategies, and edge centrality measures. Specifically, prior work provided in Sections 5.1 and 5.2 would let us place our contribution in a bigger picture and draw the connection to some adjacent topics, while the results in Sections 5.3 and 5.4 are more closely related. ### _Minimizing Spread of Misinformation_ Consider the setup where a rumor starts spreading in a social network from a known set of nodes and following a predefined spreading dynamics such as the Independent Cascade model [51]. Then, the goal is to contain the spread by blocking \(k\) edges [21, 22]. A greedy algorithm is developed in [22] where in each iteration an edge with maximum containment ability is selected, and some heuristics are proposed in [21]. In [26], considering the cost for blocking each edge, some budget constraints are defined for critical edge identification. Some heuristic algorithms are then proposed to solve the problem. Applying the maximum influence arborescence method [29], an approximation method is proposed in [12]. These algorithms have several shortcomings. Firstly, they are usually tailored for a particular rumor spreading model such as the Independent Cascade model rather using a more generic notion of information centrality. Secondly, they disregard the important constraint of connectivity, and they might produce disconnected networks. Furthermore, they often fail to cover large networks due to their time complexity. ### _Network Privacy Protection_ One area which is closely related to our work is privacy protection in networks. Most works in this area focus on protecting the privacy of users by structural anonymization. The goal is to modify graph structure to anonymize the underlying network, using various methodologies such as \(k\)-Anonymity, and differential privacy-based approaches [52, 53]. 
One important objective here is to anonymize the key nodes in a network by reducing their centrality. Previous studies have investigated the problem of removing edges to decrease the centrality of some target node with regard to degree centrality [23], closeness centrality [24], and the likelihood that the target node appears in an absorbing random walk [28]. However, the notion of information centrality has not been analyzed in this framework.

### _Edge Removal Strategies_

Admittedly, as a practical graph-editing operation, edge removal has been extensively used for various application purposes, such as controlling disease spreading [10, 11], minimizing the number of spanning trees [54], and optimizing eigenvalues of related matrices [55, 56]. In social networks, removing edges can correspond to unfriending, not exposing the posts/comments, or maintaining social distance. In computer networks, removing edges is similar to cutting a fiber or bringing down a physical/virtual link temporarily. Many studies on edge removal require the final graph to remain connected. For instance, in [36], the authors have studied the problem of decreasing the greatest eigenvalue of the adjacency matrix by link removal while preserving network connectivity. The authors of [57] have investigated the problem of expanding the network diameter by eliminating edges such that the resulting graph remains connected. This is because connectivity is usually essential for the network to preserve its main functionality.

Fig. 1: The Dolphin network with the green target node.

### _Edge Centrality_

The drop in the information centrality of a node caused by deleting an edge can be used as a measure of its importance. There have been many metrics proposed in the literature to assess the importance of a single edge or a group of edges. Individual edge centrality measures comprise, among others, edge betweenness [58], spanning edge centrality [59], and biharmonic distance related edge centrality [60]. Additionally, the importance of an edge can be determined based on the centrality of its endpoints, such as the sum or product of the degree, closeness, and betweenness centrality of the endpoints [61]. Group edge centrality measures are typically designed to quantify the effect of deleting these edges on specific objective functions, such as the inverse geodesic length [62], the total pairwise connectivity [63], and the forest index [64]. These existing edge centrality measures are tailored for distinct use cases. Our proposed metric is defined based on the reduction in the information centrality of a target node.

## 6 Complexity Challenges

In this section, we study the computational complexity of the InforCenMin problem. We first consider the decision version of the problem and prove that it is NP-complete in Theorem 6.1.

**Problem 2**.: _(Information Centrality Minimization, Decision Version, InforCenMinD) Given a connected undirected graph \(\mathcal{G}=(V,E)\) with \(n\) nodes, \(m\) edges, and node \(v\) being the target node, an integer \(k\in\mathbb{N}^{+}\), and a real number \(x\in\mathbb{R}^{+}\), decide whether or not there is a set \(P\) of \(k\) edges to be removed from \(\mathcal{G}\) such that \(\mathcal{I}_{v}\) is at most \(x\) in the connected subgraph \(\mathcal{G}\setminus P\)._

**Theorem 6.1**.: _The InforCenMinD problem is NP-complete._

**Proof.** We demonstrate a polynomial time reduction from the Hamiltonian cycle problem, which is NP-complete [47].
We presume that edge deletion preserves the network's connectivity. Given that we can guess the \(k\) edges to be removed and compute \(\mathcal{I}_{v}\) in polynomial time, it is apparent that the problem is in NP. The smallest \(\mathcal{I}_{v}\) that could possibly be assigned to a node in a connected undirected network with \(n\) nodes is \(\mathcal{I}_{v}^{\min}=n/\sum_{i=1}^{n-1}i=\frac{2}{n-1}\) (e.g., node \(v\) in Fig. 2). Graph \(\mathcal{G}\) contains a Hamiltonian cycle if and only if \(\mathcal{G}\) has a connected subgraph \(\mathcal{G}^{\prime}\) with \(n\) nodes, \(n-1\) edges and the information centrality of its end node being \(\mathcal{I}_{v}^{\min}\). So by choosing \(k=m-n+1\), and \(x=\mathcal{I}_{v}^{\min}\), we have a reduction from the Hamiltonian cycle problem to the InforCenMinD problem, proving its NP-completeness. \(\square\)

Fig. 2: A \(5\)-node path graph targeting at node \(v\).

A very common technique to tackle NP-hard problems, such as InforCenMin, is to prove that the objective function enjoys both monotonicity and supermodularity, which consequently would provide us with a Hill Climbing algorithm with a constant approximation guarantee [37, 38]. While, as stated in Lemma 6.2, our objective function is monotone, it does not possess the supermodularity property, as proven in Lemma 6.3.

**Lemma 6.2**.: _(Monotonicity) For two subsets \(S\) and \(H\) of edges satisfying \(S\subset H\subset E\) such that \(\mathcal{G}\setminus H\) is a connected graph, we have_

\[\mathcal{I}_{v}(H)<\mathcal{I}_{v}(S).\]

**Lemma 6.3**.: _(Non-supermodularity) \(\mathcal{I}_{v}(\cdot)\) is not supermodular._

**Proof.** To exemplify the non-supermodularity of the objective function (4), consider the network in Fig. 3 (a), a \(5\)-node graph with node \(1\) being the target node and \(e_{1}\) and \(e_{2}\) being edges to delete. We define two edge sets, \(S=\emptyset\) and \(H=\{e_{1}\}\). Then, we have \(\mathcal{I}_{v}(S)=1.8\), \(\mathcal{I}_{v}(S\cup\{e_{2}\})=1.4\), \(\mathcal{I}_{v}(H)=1.3\), and \(\mathcal{I}_{v}(H\cup\{e_{2}\})=0.7\). Thus, we have

\[\mathcal{I}_{v}(S)-\mathcal{I}_{v}(S\cup\{e_{2}\})=0.4<0.6=\mathcal{I}_{v}(H) -\mathcal{I}_{v}(H\cup\{e_{2}\}).\]

This result clearly contradicts the definition of supermodularity. Consequently, the set function of the InforCenMin problem is not supermodular. \(\square\)

Fig. 3: Two \(5\)-node toy networks.

## 7 Deterministic Greedy Algorithm

The InforCenMin problem is inherently combinatorial. Its optimal solution can be computed using the following naive brute-force approach. For each set \(P\) of the \(\binom{m}{k}\) possible subsets of edges, determine the connectivity of \(\mathcal{G}\setminus P\), and calculate \(\mathcal{I}_{v}(P)\) in the reduced graph by inverting the Laplacian matrix. Finally, output the subset \(P^{*}\) of \(k\) edges whose deletion leads to the greatest decrease in \(\mathcal{I}_{v}\) while keeping the connectivity of the graph. Since inverting the Laplacian matrix could take \(\Omega(n^{3})\) time and there are \(\binom{m}{k}\) possible subsets, the algorithm's time complexity is in \(\Omega\left({{m\choose k}n^{3}}\right)\). Thus, despite its simplicity, this method is computationally unaffordable even for small networks due to its exponential time complexity in \(k\). To tackle this issue, one may consider the heuristic approach of picking the top-\(k\) edges with the greatest individual effect on reducing the information centrality of the target node.
However, due to the interdependence of edges, the cumulative effect of removing a group of edges is often not equivalent to the sum of individual edge effects. For example, see the network in Fig. 3 (b), where node 2 is the target node, and edges \(e_{2}\) and \(e_{3}\) are the top-2 edges whose removal has the greatest individual effect on the information centrality of node 2. Surprisingly, removing these top-\(2\) edges reduces the information centrality of node 2 to 0.71, while removing edges \(e_{1}\) and \(e_{2}\) would reduce it to \(0.56\). An alternative heuristic is the standard greedy algorithm, which starts with an empty set \(P\). Then, in each iteration \(i\in\{1,2,\ldots,k\}\), it adds the edge that results in the largest decrease in the information centrality of the target node, while preserving connectivity. However, this greedy approach is computationally expensive because of two obstacles present during each iteration: 1) determining whether removal of a certain edge would disconnect the graph could take \(\Omega(n)\) time; 2) the computation of the new information centrality through inversion of the matrix might take \(\Omega(n^{3})\) time. As a result, the total running time could amount to \(\Omega(kmn^{3})\), which is unaffordable for large networks. To overcome the first obstacle, some studies focusing on the dynamic connectivity problem resolve connectivity queries in \(O(\log n/\log\log n)\) time [65, 66]. However, these works remain in the realm of theory and can hardly be applied in practice. Moreover, the computational bottleneck caused by 2) is larger; thus, we first focus on reducing the time complexity of updating information centrality. Let \(\mathcal{I}_{v}^{\Delta}(e)=\mathcal{I}_{v}(\{e\})-\mathcal{I}_{v}(\emptyset)\) denote the marginal gain of the information centrality by removing edge \(e\). We provide an efficient method for computing \(\mathcal{I}_{v}^{\Delta}(e)\) in the following lemma.

**Lemma 7.1**.: _Let \(\mathcal{G}=(V,E)\) be a connected graph with Laplacian matrix \(\mathbf{L}\). Let \(e=(x,y)\in E\) be a candidate edge satisfying that \(\mathcal{G}\setminus\{e\}\) is connected. Then,_

\[\mathcal{I}_{v}^{\Delta}(e)=\frac{-(nb+n^{2}c)}{\left(na\mathbf{L}_{vv}^{\dagger}+ nc+a\mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)+b\right)\left(n\mathbf{L}_{vv}^{\dagger}+ \mathrm{Tr}\left(\mathbf{L}^{\dagger}\right)\right)},\]

_where \(a=1-\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}\), \(b=\mathbf{b}_{e}^{\top}\left(\mathbf{L}^{\dagger}\right)^{2}\mathbf{b}_{e}\) and \(c=\left(\mathbf{L}^{\dagger}\mathbf{b}_{e}\right)_{v}^{2}\)._

**Proof.** By definition, we have

\[\mathcal{I}_{v}(\{e\})=\frac{n}{n(\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})_{vv}^{ \dagger}+\mathrm{Tr}\left((\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger} \right)}.\]

By the Sherman-Morrison formula [67], we have

\[(\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger}=\mathbf{L}^{\dagger}+\frac{\mathbf{L}^ {\dagger}\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}}{1-\mathbf{b}_{e}^{\top}\mathbf{ L}^{\dagger}\mathbf{b}_{e}}. \tag{5}\]

Note that \(\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}\) equals the effective resistance between nodes \(x\) and \(y\), whose value is 1 whenever removing this edge partitions the graph into two components and is less than 1 otherwise.
Thus, the gain in information centrality can be written as \[\mathcal{I}_{v}^{\Delta(e)}= \frac{n}{n\big{(}\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top}\big{)}_{vv}^{ \dagger}+\mathrm{Tr}\left((\mathbf{L}-\mathbf{b}_{e}\mathbf{b}_{e}^{\top})^{\dagger} \right)}\] \[-\frac{n}{n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr}\left(\mathbf{L}^{\dagger }\right)}\] \[= \frac{-(nb+n^{2}c)}{\left(na\mathbf{L}_{vv}^{\dagger}+nc+a\mathrm{Tr} \left(\mathbf{L}^{\dagger}\right)+b\right)\left(n\mathbf{L}_{vv}^{\dagger}+\mathrm{Tr} \left(\mathbf{L}^{\dagger}\right)\right)},\] completing the proof. \(\Box\) According to Lemma 7.1, if \(\mathbf{L}^{\dagger}\) is known, we can efficiently compute the marginal gain of the information centrality for one edge by rank-1 update in \(O(n)\) time. Then, we propose a deterministic greedy algorithm \(\textsc{ExactSM}(\mathcal{G},v,k)\). As outlined in Algorithm 1, the first step of this algorithm is to set the result edge set \(P\) to empty and compute \(\mathbf{L}^{\dagger}\) in \(O(n^{3})\) time (Line 1). Then we add \(k\) edges to \(P\) iteratively (Lines 2-11). In each iteration, for each candidate edge \(e\in E\), we determine the connectivity of the graph \(\mathcal{G}\setminus\{e\}\) in \(O(n)\) time (Line 4) and compute the marginal gain of the information centrality in \(O(n)\) time (Line 7). After obtaining \(\mathcal{I}_{v}^{\Delta}(\cdot)\) for each candidate edge, we select the edge that leads to the smallest marginal gain (Line 8), update the solution (Line 9) and graph (Line 10), and update \(\mathbf{L}^{\dagger}\) according to Equation (5) in \(O(n^{2})\) time. In summary, the total running time of Algorithm 1 is \(O(n^{3}+kmn+kn^{2})\). ``` Input : A connected graph \(\mathcal{G}=(V,E)\); a target node \(v\in V\); an integer \(k\leq m\) Output : A subset of \(P\subset E\) with \(|P|=k\) 1 Set \(P\leftarrow\emptyset\); compute \(\mathbf{L}^{\dagger}\) for\(i=1\) to \(k\)do 2for\(e\in E\)do 3if\(\mathcal{G}\setminus\{e\}\) is not connectedthen 4 Set \(\mathcal{I}_{v}^{\Delta}(e)=0\) 5else 6 Compute \(\mathcal{I}_{v}^{\Delta}(e)\) by Lemma 7.1 7 8 Select \(e_{i}\) s.t. \(e_{i}\leftarrow\operatorname*{arg\,min}_{e\in E}\mathcal{I}_{v}^{\Delta}(e)\) Update solution \(P\gets P\cup\{e_{i}\}\) Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\) Update \(\mathbf{L}^{\dagger}\leftarrow\mathbf{L}^{\dagger}+\frac{\mathbf{L}^{\dagger}\mathbf{b}_{e_{i}} \mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}}{1-\mathbf{b}_{e_{i}}^{\top}\mathbf{L}^{\dagger}\mathbf{b} _{e_{i}}}\) 9 10 11return\(P\) ``` **Algorithm 1**\(\textsc{ExactSM}(\mathcal{G},v,k)\) ## 8 Fast Randomized Greedy Algorithm The deterministic greedy algorithm, while faster than the brute-force approach, is not feasible for large networks due to the high computational cost of determining the information centrality marginal gain and graph connectivity. On the positive side, information centrality and the resistance distance are shown to be correlated in Equation (3). This permits us to leverage random walk-based approximate Schur complement method [39, 48] to present a faster algorithm, called ApproxiSC in Section 8.1. To speed up the computation even further, we then utilize the sum estimation method [40, 41], which allows us to present the algorithm FastICM in Section 8.2 which runs in nearly linear time in the number of edges. 
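Before moving on to the randomized algorithms, here is a minimal sketch (ours, in Python with NumPy; an illustration only, not the paper's Julia implementation) of the per-edge step inside ExactSM: the marginal gain of Lemma 7.1 evaluated from the current pseudoinverse, followed by the rank-1 update of Equation (5). The bridge test uses the fact noted above that \(\mathbf{b}_{e}^{\top}\mathbf{L}^{\dagger}\mathbf{b}_{e}=1\) exactly when removing \(e\) disconnects the graph.

```python
import numpy as np

def gain_and_update(Lp, n, v, u, w):
    """Marginal gain of I_v for removing edge (u, w), plus the updated pseudoinverse."""
    Lp_b = Lp[:, u] - Lp[:, w]              # L^+ b_e (b_e has only two nonzero entries)
    r = Lp_b[u] - Lp_b[w]                   # b_e^T L^+ b_e: effective resistance of e
    if np.isclose(r, 1.0):                  # e is a bridge; removal disconnects the graph
        return 0.0, None
    a, b, c = 1.0 - r, Lp_b @ Lp_b, Lp_b[v] ** 2
    Lvv, trLp = Lp[v, v], np.trace(Lp)
    gain = -(n * b + n * n * c) / (
        (n * a * Lvv + n * c + a * trLp + b) * (n * Lvv + trLp))     # Lemma 7.1
    Lp_new = Lp + np.outer(Lp_b, Lp_b) / a                            # Eq. (5)
    return gain, Lp_new
```

In each greedy round one would call `gain_and_update` for every edge, keep the edge with the smallest (most negative) gain, and carry only its `Lp_new` into the next round, mirroring Lines 3-11 of Algorithm 1.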
### _A Simple Sampling Algorithm_ To efficiently calculate and update the information centrality, the main computational bottleneck is the fast calculation of effective resistance Equation (3). Several approximation methods have been proposed to estimate pairwise effective resistance in sublinear time [39, 68]. However, a naive approach would need to approximate \(n\) distinct pairwise effective resistance and then update them for \(m\) potential candidate edge. This seems to be computationally expensive. To address this issue, we approach the problem from a different angle, which facilitates us with a much faster mechanism to approximate the effective resistances. As mentioned in Section 2.3, the Schur complement can be considered as a vertex sparsifier preserving pairwise effective resistance. This motivates us that if we can efficiently compute and update the Schur complement, the aforementioned challenges could be resolved. However, calculating the Schur complement directly is time-consuming. Building upon the ideas of sparsifying random walk polynomials [69] and Schur complement [48, 70], we approximate it using a collection of random walks that can be maintained upon edge removal with a reasonable time complexity. The following lemma, borrowed from [39, 48, 69, 70], asserts that these walks provide an accurate estimate of the Schur complement. **Lemma 8.1**.: _[_39_]_ _Let \(\mathcal{G}=(V,E)\) be an undirected unweighted graph with a subset of nodes \(T\subset V\). Assume \(\epsilon\in(0,1)\), and let \(\rho=O\left((\log n)\epsilon^{-2}\right)\) be some sampling concentration parameter. Suppose that \(\mathcal{H}\) is an initially empty graph. For every edge \(e=(i,j)\in E\), repeat the following procedure \(\rho\) times:_ 1. _Simulate a random walk_ \(w_{1}\) _starting from node_ \(i\) _until it first hits_ \(T\) _at some node_ \(t_{1}\)_._ 2. _Simulate a random walk_ \(w_{2}\) _starting from node_ \(j\) _until it first hits_ \(T\) _at some node_ \(t_{2}\)_._ 3. _Combine these two walks (including e) as two sides to get a walk_ \(w=(t_{1}=u_{0},\cdots,u_{l}=t_{2})\)_, where_ \(\tilde{l}\) _is the length of the combined walk._ 4. _Add the edge_ \((t_{1},t_{2})\) _to graph_ \(\mathcal{H}\) _with weight_ \(1/\left(\rho\tilde{l}\right)\)_._ _Then, the Laplacian matrix \(\boldsymbol{L}_{\mathcal{H}}\) of the resulting graph \(\mathcal{H}\) satisfies \(\boldsymbol{L}_{\mathcal{H}}\approx_{\epsilon}\mathcal{S}(T)\) with probability of at least \(1-O(1/n)\)._ Based on Lemma 8.1, we can approximate the Schur complement for an arbitrary terminal set \(T\subseteq V\) and obtain a graph \(\mathcal{H}\) satisfying \(\boldsymbol{L}_{\mathcal{H}}\approx_{\epsilon}\mathcal{S}(T)\). Let \(\tilde{\mathcal{R}}_{xy}\) be the effective resistance between any pair of nodes \(x,y\in T\) on graph \(\mathcal{H}\), then based on the fact that matrix approximations also preserve approximations of their quadratic forms, we have \[\tilde{\mathcal{R}}_{xy}\approx_{\epsilon}\mathcal{R}_{xy}^{\mathcal{G}( \mathcal{S}(T))}=\mathcal{R}_{xy}^{\mathcal{G}}. \tag{6}\] #### 8.1.1 Approximation of Effective Resistance To approximate \(\mathcal{R}_{uv}\), a direct way is to set \(T=\{u,v\}\), and then estimate \(\mathcal{R}_{uv}\) as the reciprocal of the weight between \(u\) and \(v\) in the resulting graph \(\mathcal{H}\) from Lemma 8.1. 
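To make the sampling procedure of Lemma 8.1 concrete, here is a minimal sketch (ours, in plain Python; `adj` is assumed to map each node to its neighbor list, and `rho` plays the role of \(\rho\) — none of these names come from the paper). With \(T=\{u,v\}\), the reciprocal of the accumulated weight on \((u,v)\) estimates \(\mathcal{R}_{uv}\) as just described.

```python
import random
from collections import defaultdict

def walk_to_T(adj, start, T):
    """Random walk from `start` until it first hits the terminal set T."""
    path = [start]
    while path[-1] not in T:
        path.append(random.choice(adj[path[-1]]))
    return path

def approx_schur(adj, edges, T, rho):
    """Approximate Schur complement onto T as an edge-weight map, following Lemma 8.1."""
    H = defaultdict(float)
    for (i, j) in edges:
        for _ in range(rho):
            w1 = walk_to_T(adj, i, T)                    # side 1, ends at t1 in T
            w2 = walk_to_T(adj, j, T)                    # side 2, ends at t2 in T
            length = (len(w1) - 1) + 1 + (len(w2) - 1)   # combined walk, including (i, j)
            t1, t2 = w1[-1], w2[-1]
            if t1 != t2:                                 # self-loops do not affect the Laplacian
                H[tuple(sorted((t1, t2)))] += 1.0 / (rho * length)
    return H
```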
However, even if the target node \(v\) is fixed, the approximation of the effective resistance between \(v\) and all other nodes is expensive, as it may result in redundant walk sampling. To address this issue, we carefully modify the sampling steps in Lemma 8.1. First, we set \(T=\{v\}\), and sample an initial collection of walks using steps (1)-(3) in Lemma 8.1. For each node \(u\), we set \(T_{2}=\{u,v\}\), traverse and shorten all the walks at the first position they hit \(T_{2}\), and then add the weight of edge \((u,v)\) to the two-node graph \(\mathcal{H}\). This yields a new collection of random walks and an approximation of \(\mathcal{S}(T_{2})\). However, repeating this process for every node takes at least \(\Omega(mn(\log n)/\epsilon^{2})\) time, which is computationally infeasible for large networks. We notice that most nodes may appear in a small fraction of the sampled walks; thus, we do not necessarily need to traverse all the walks for each node \(u\). So we propose an approach where we traverse each sampled walk exactly once. More precisely, for any walk \(w\), we traverse it once and keep track of which nodes appear, together with the positions at which they first appear on each side. If a node \(u\) is only encountered on one side of the walk, setting \(T_{2}=\{u,v\}\) will contribute an edge weight to the resulting graph \(\mathcal{H}\). For example, consider the walk in Fig. 4(a)-(b), which starts from the red edge and initially stops at \(v\). By setting \(T_{2}=\{u,v\}\), this walk contributes a weight of \(\frac{1}{8\rho}\) to edge \((u,v)\) in \(\mathcal{H}\). After summing up the weights contributed by all the walks, we can approximate \(\mathcal{R}_{uv}\) as the reciprocal of the weight of edge \((u,v)\) in \(\mathcal{H}\). In summary, we sample random walks according to Lemma 8.1 by setting \(T=\{v\}\), and approximate the effective resistances between node \(v\) and all nodes \(u\in V\backslash\{v\}\) by traversing the sampled walks once. According to Equation (6), we can approximate the resistance distance \(\mathcal{R}_{v}\) by \(\tilde{\mathcal{R}}_{v}=\sum_{u\in V\backslash\{v\}}\tilde{\mathcal{R}}_{uv}\), which satisfies \(\left|\mathcal{R}_{v}-\tilde{\mathcal{R}}_{v}\right|\leq\epsilon\mathcal{R}_{v}\leq n\epsilon\phi\), where \(\phi\) is the effective resistance diameter of the network [71]. Following Lemma 8.1, we need to sample \(O(m(\log n)/\epsilon^{2})\) walks. Another critical factor that we need to take into account is the length of sampled walks, with an average value of \(l_{\text{avg}}=\sum_{u\in V\backslash\{v\}}2\boldsymbol{d}_{u}F_{u,v}/m\), where \(F_{u,v}\) represents the hitting time from node \(u\) to node \(v\). However, some walks may be excessively long, making our computations expensive. To address this issue, we adopt the \(l\)_-truncated random walk_ concept [72], where walks are accepted if they are shorter than \(l\), and disregarded otherwise. Of course, this would result in less accurate solutions. However, we aim to balance accuracy and efficiency in the choice of \(l\). We should stress that this extension is based on the observation that a walk's contribution to a related edge in \(\mathcal{H}\) decreases as its length increases, with a walk longer than \(l\) contributing less than \(1/(\rho l)\) to the corresponding edge. The following lemma ensures that the expected ratio of invalid walks can be made arbitrarily small for a suitably chosen value of \(l\).
**Lemma 8.2**.: _[_73_]_ _Given a connected graph \(\mathcal{G}=(V,E)\), a node set \(T=\{v\}\), and a ratio of invalid walks \(\gamma>0\), if the maximum length \(l\) of random walks satisfies \(l=\log(m\gamma/\sqrt{n-1}\left\|\boldsymbol{d}_{-T}\right\|_{2})/\log(\lambda)\), where \(\lambda\) is the spectral radius of matrix \(\boldsymbol{P}_{-T}\), then the expected ratio of invalid walks is less than \(\gamma\)._ Based on above analysis, we propose an algorithm Initialization which returns an array \(\mathcal{R}\) containing the effective resistances between \(v\) and all nodes in a given set \(Q\) (with \(Q=V\backslash\{v\}\) in this section), together with a collection of random walks \(W\) for future computations. The outline of Initialization is presented in Algorithm 2. #### 8.1.2 Updating Effective Resistance The removal of an edge alters the effective resistances. Recalculating them by resampling walks from scratch is computationally inefficient. To address this issue, we propose an efficient algorithm, called DeleteEdge, which updates effective resistances by modifying the existing walks upon edge removal, as outlined in Algorithm 3. Before discussing the algorithm in more detail, we introduce a node-walk mapping data structure, which maps nodes to the walks that contain them to facilitate the subsequent computation of effective resistances, and is constructed as follows. * First, assign two pristine arrays to each node: the walk array and the position array, aimed at capturing the walks that the node participates in and their corresponding positions, respectively. * Systematically traverse any walk \(w\in W\), and scrutinize any node \(u\) that is encountered for the first time at position \(p\) on either side of the walk. Then append \(w\) and \(p\) to the walk array and the position array corresponding to \(u\). The following example illustrates how this data structure works. **Example.** Fig. 5 demonstrates two instances of the node-walk map. Specifically, node \(x\) is successively encountered at positions \(p_{1}\) and \(p_{2}\) on the left side of walk \(w_{1}\), so we append \(w_{1}\) and \(p_{1}\) to the walk array and the position array corresponding to \(x\). For walk \(w_{2}\), node \(x\) appears on both sides, leading us to record \(p_{2}\) and \(p_{3}\). In turn, for walk \(w_{3}\), we document position \(p_{4}\), where node \(x\) is first encountered on the right side. Similar steps can be taken to fill the walk array and position array for node \(y\). Utilizing the node-walk mapping data structure, we present an efficient method for updating the effective resistances upon removal of an edge \(e=(x,y)\). The core idea is to reduce the edge removal operation to adding nodes to set \(T\) as shown in Fig. 4 (c)-(d). This reduction is feasible because if nodes \(x,y\in T\), then removing edge \(e\) in the original graph \(\mathcal{G}\) equates to a decrement in edge \(e\)'s weight from the Schur complement graph \(\mathcal{G}(\mathcal{S}(T))\) by \(1\). In the following analysis, we fix node \(u\). We first sample the random walks with \(T=\{v\}\) and then set \(T_{4}=\{u,v,x,y\}\) and denote the resulting approximate Schur complement graph as \(\mathcal{H}_{u}\). To modify these walks, we utilize the node-walk map to expeditiously determine the walks and positions of nodes \(u,x,y\). The corresponding walks are then shortened and the edge weights in graph \(\mathcal{H}_{u}\) are updated. 
The final step involves reducing the edge weight between \(x\) and \(y\) by \(1\). The sufficient condition for the reduced graph being connected is that node \(x\) remains accessible from node \(y\); for this, it suffices to demonstrate that \(v\) is reachable from both \(x\) and \(y\), which can be readily assessed using the updated four-node graph. Consequently, the connectivity of the reduced graph and the new effective resistance between nodes \(u\) and \(v\) can be easily determined. Although this method is simple, it still could take \(\Omega(mn)\) time for updating all pairs of effective resistances for any edge. However, thanks to the initialization process, we store the information of the Schur complement onto each two-node set \(T_{2}=\{u,v\}\) for each \(u\in V\backslash\{v\}\). This allows us to update the four-node graph efficiently. Specifically, we use the node-walk mapping data structure to locate the positions of nodes \(x\) and \(y\). Then, we shorten walks for all possible four-node sets \(T_{4}=\{q,v,x,y\}\), where \(q\) is a node co-occurring with \(x\) or \(y\) on any walk, and update the edge weights of \(\mathcal{H}_{q}\). More accurately, for edges \((q,v),(x,v),(y,v)\), the weights are derived by subtracting the weight reduction due to walk truncation from the corresponding weight in \(\mathcal{G}(\mathcal{S}(T_{2}))\), and for edges \((q,x),(q,y),(x,y)\), the weights are computed by the newly specified walk segments. Finally, we reduce the edge weight between \(x\) and \(y\) by 1, check the connectivity, and compute the effective resistance on the updated four-node graph. We note that the average occurrence of each node in all sampled random walks is \(O(mn^{-1}l(\log n)\epsilon^{-2})\); thus the overall time complexity is reduced to \(O(m^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\). To visualize the process of modifying walks, see the walk in Fig. 4, which is affected by the removal of edge \((x,y)\). This walk no longer contributes weight to edge \((u,v)\), but contributes weight to edge \((x,y)\).

Fig. 4: Pipeline of our algorithm. (a) Set \(T=\{v\}\) and sample an initial walk (colored in blue) starting from the red edge. (b) For an initial approximation of the effective resistance \(\mathcal{R}_{uv}\), set \(T_{2}=\{u,v\}\) and shortcut the initial walk at the first positions it hits \(T_{2}\), such that it contributes a weight of \(\frac{1}{8\rho}\) to edge \((u,v)\) (colored in yellow) in the approximate graph. (c) When trying to remove edge \(e=(x,y)\), set \(T_{4}=\{u,v,x,y\}\) and shortcut the walk at the first positions it hits \(T_{4}\). (d) After modifying all walks, reduce the weight of edge \((x,y)\) in the approximate graph by \(1\).

Fig. 5: Illustration of the node-walk mapping data structure.

#### 8.1.3 A Sampling Algorithm for InforCenMin

By integrating the techniques for initializing and modifying random walks to approximate effective resistances, we offer a fast algorithm ApproxiSC to address the problem InforCenMin. The pseudocode of this algorithm is summarized in Algorithm 4. Besides the input graph \(\mathcal{G}\), the cardinality constraint \(k\) and the target node \(v\), its input also contains an integer \(l\) that constrains the maximum length of random walks and an error parameter \(\epsilon\). We first set \(P\leftarrow\emptyset\). Then in each iteration, we initialize the random walks for the current graph by making a call to Initialization.
Then, we use DeleteEdge to obtain a set of tuples consisting of an edge and the corresponding \(\tilde{\mathcal{R}}_{v}\) in the reduced graph. The edge that results in the largest margin gain of \(\tilde{\mathcal{R}}_{v}\) while preserving graph connectivity will be selected. This process is repeated until \(k\) edges have been added to \(P\). ``` Input : A connected graph \(\mathcal{G}=(V,E)\); an integer \(k<m\); a node \(v\in V\); the length constraint \(l\); an error parameter \(\epsilon\in(0,1)\) Output : An edge set \(P\subset E\) satisfying constraint \(k\) 1 Set \(P\leftarrow\emptyset\)for\(i=1\) to \(k\)do 2\(\mathcal{R},W\leftarrow\textsc{Initialization}(\mathcal{G},v,V,l,\epsilon)\)\(\{(e,\mathcal{R}_{v}(P\cup\{e\}))|e\in E\}\leftarrow\)DeleteEdge\((\mathcal{G},W,v,V,P)\) 3 Select \(e_{i}\) s.t. \(e_{i}\leftarrow\operatorname*{arg\,max}_{e\in E}\tilde{\mathcal{R}}_{v}(P\cup\{e\})\) Update solution \(P\gets P\cup\{e_{i}\}\) Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\)return\(P\) ``` **Algorithm 4**ApproxiSC(\(\mathcal{G},k,v,l,\epsilon\)) We next analyze the time complexity of ApproxiSC, which consists of two parts: approximating the effective resistances and updating them upon edge removal for all existing edges. In the updating process, we also need to check the connectivity of the resulting graph. Generating the random walks takes \(O(ml(\log n)\epsilon^{-2})\) time, initializing the effective resistances takes \(O(ml(\log n)\epsilon^{-2})\) time, and updating them takes \(O(m^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\) time. Furthermore, the connectivity can be checked relatively quickly using the derived graph \(\mathcal{H}\). In summary, the overall time complexity of our proposed algorithm ApproxiSC is in \(O(kml(\log n)\epsilon^{-2}+km^{2}n^{-1}l^{2}(\log n)\epsilon^{-2})\). ### _A More Efficient Algorithm_ The simple sampling algorithm ApproxiSC is much faster than the original deterministic algorithm we began with. Nevertheless, in this section we further reduce its time complexity by proposing a faster sampling algorithm, with a similar accuracy guarantee. #### 8.2.1 Fast Simulating Random Walks Each iteration of ApproxiSC involves simulating \(m\lceil(\log n)/\epsilon^{2}\rceil\)\(l\)-truncated random walks, which is computationally expensive as \(k\) grows. However, we observe adding an edge \(e\) to \(P\) only impacts walks that go through it. Hence, we can speed up the process by only modifying a small fraction of walks when \(e\) is deleted, while reusing the rest for subsequent approximations. Next, we elaborate on this more efficient approach. First, we sample random walks for initialization. Then in each iteration, after selecting an edge \(e\), we modify the affected walks. To efficiently locate these walks, we set up an edge-walk mapping data structure. This data structure, similar to the node-walk mapping data structure shown in Fig. 5, records the walks and the positions where \(e\) first appears, which can be built once the walks are sampled. For each affected walk \(w\), we truncate it at the first appearance of edge \(e\), and extend it with a new random walk from the truncated position to \(v\), or until the total length reaches \(l\). Both edge-walk and node-walk map, as well as the effective resistance array \(\mathcal{R}\), are then updated. Since the average number of walks that include edge \(e\) is \(O(mn^{-1}l(\log n)\epsilon^{-2})\), updating the walks is efficient. 
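As a minimal sketch of this maintenance step (ours, in plain Python with simplified bookkeeping; a real implementation would locate affected walks through the edge-walk map rather than scanning, and would also refresh the node-walk map and the resistance estimates afterwards), removing edge \((x,y)\) truncates each affected walk at the first use of the edge and re-extends it with a fresh \(l\)-truncated walk toward \(v\):

```python
import random

def first_use(walk, x, y):
    """Index of the first traversal of edge (x, y) in the walk, or None."""
    for i in range(len(walk) - 1):
        if {walk[i], walk[i + 1]} == {x, y}:
            return i
    return None

def repair_walks(walks, adj, x, y, v, l):
    adj[x].remove(y); adj[y].remove(x)                   # delete the edge from the graph
    for idx, walk in enumerate(walks):
        pos = first_use(walk, x, y)
        if pos is None:
            continue                                     # walk unaffected, reuse as-is
        new_walk = walk[: pos + 1]                       # truncate at first use of (x, y)
        while new_walk[-1] != v and len(new_walk) <= l:  # re-extend toward v, up to length l
            new_walk.append(random.choice(adj[new_walk[-1]]))
        walks[idx] = new_walk
    return walks
```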
#### 8.2.2 Fast Approximation of the Resistance Distance

ApproxiSC approximates effective resistances between the target node \(v\) and all other nodes in the network. However, it is only the sum of these resistances, \(\mathcal{R}_{v}\), that is of interest, rather than individual ones. Next, we show that evaluating \(\mathcal{R}_{v}\) can be achieved through computing the effective resistances between \(v\) and a smaller subset \(\tilde{V}\subseteq V\). Based on the techniques developed in previous sum estimation works [40, 41], we show in the following lemma that the sum of \(n\) bounded elements can be approximated by a sample of them.

**Lemma 8.3**.: _Given \(n\) bounded elements \(x_{1},x_{2},\ldots,x_{n}\in[0,a]\) and an error parameter \(\beta>an^{-1/2}\log^{1/2}n\), we randomly select \(t=O(a\sqrt{n(\log n)}/\beta)\) elements \(x_{c_{1}},x_{c_{2}},\ldots,x_{c_{t}}\) by Bernoulli trials with success probability \(p=an^{-1/2}\log^{1/2}n/\beta\) satisfying \(0<p<1\). We have \(\bar{x}=\sum_{i=1}^{t}x_{c_{i}}/p\) as an approximation of the sum of the original \(n\) elements \(x=\sum_{i=1}^{n}x_{i}\), satisfying \(|x-\bar{x}|\leq n\beta\)._

Based on the above lemma, let \(\phi\) be the effective resistance diameter, \(\alpha\) be an error parameter, \(\beta=\frac{\alpha}{2}\), and \(\epsilon=\frac{\alpha}{2\phi}\). Let \(t=O(\phi\sqrt{n(\log n)}/\beta)\), let \(\tilde{V}=\{x_{1},x_{2},\ldots,x_{t}\}\) be a randomly selected subset, and let \(\hat{\mathcal{R}}_{v}=\sum_{w\in\tilde{V}}\tilde{\mathcal{R}}_{wv}\,n/(\phi\sqrt{n(\log n)}/\beta)\). Then \(\tilde{\mathcal{R}}_{v}\) can be approximated by \(\hat{\mathcal{R}}_{v}\), satisfying

\[|\tilde{\mathcal{R}}_{v}-\hat{\mathcal{R}}_{v}|\leq n\beta, \tag{7}\]

and further \(\mathcal{R}_{v}\) can be approximated by \(\hat{\mathcal{R}}_{v}\), satisfying

\[|\mathcal{R}_{v}-\hat{\mathcal{R}}_{v}|\leq n\alpha. \tag{8}\]

By setting \(Q=\tilde{V}\), Initialization and DeleteEdge can be simplified by just approximating and updating effective resistances between node \(v\) and nodes in set \(\tilde{V}\).

#### 8.2.3 Fast Algorithm for InforCenMin

Equipped with the techniques of fast sampling of random walks and efficient computation of the sum of effective resistances, we are now prepared to propose a faster algorithm FastICM for Problem 1, outlined in Algorithm 5. FastICM performs \(k\) rounds (Lines 5-9) for iteratively selecting \(k\) edges. Given a network \(\mathcal{G}=(V,E)\) with an effective resistance diameter \(\phi\), we randomly select a node set \(\tilde{V}\subset V\) of \(O(\phi\sqrt{n(\log n)}/\beta)\) nodes, simulate \(m\lceil\phi^{2}(\log n)/\alpha^{2}\rceil\) \(l\)-truncated random walks, and approximate the effective resistances. It takes \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3})\) time for the three operations. Then, the algorithm takes \(O(km^{2}n^{-3/2}l^{2}\log^{3/2}n\alpha^{-3}\phi^{3})\) time to update the sum of effective resistances for each candidate edge in all \(k\) rounds. Thus, the overall time complexity of FastICM is \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3}(1+ kml/n))\). Theorem 8.4 characterizes the performance of FastICM.
**Theorem 8.4**.: _For any \(k>0\), an error parameter \(\alpha\in(0,1)\), a maximum length \(l\) of walks, and the effective resistance diameter \(\phi\), FastICM runs in \(O(ml(\log n)\phi^{2}\alpha^{-2}+mn^{-1/2}l\log^{3/2}n\alpha^{-3}\phi^{3}(1+ kml/n))\) time, and outputs a solution set \(P\) by greedily selecting \(k\) edges, such that for the edge selected in each iteration, the information centrality of the target node is maximally decreased._ ``` Input : A connected graph \(\mathcal{G}=(V,E)\); an integer \(k<m\); a node \(v\in V\); the maximum length of random walks \(l\); a real number \(\alpha\in(0,1)\); the effective resistance diameter \(\phi\) Output : An edge set \(\mathcal{P}\subset E\) satisfying constraint \(k\) 1 Set \(\beta=\frac{\alpha}{2}\) and \(\epsilon=\frac{\alpha}{2\phi}\); \(P\leftarrow\emptyset\) 2 Sample the node set \(\tilde{V}\subset V\) of \(t=O(\phi\sqrt{n(\log n)}/\beta)\)\(\mathcal{R},W\leftarrow\textsc{Initialization}(\mathcal{G},v,\tilde{V},l,\epsilon)\)for\(i=1\) to \(k\)do 3\(\{(e,\hat{\mathcal{R}}_{v}(P\cup\{e\}))|e\in E\}\leftarrow\)\(\{\) DeleteEdge\((\mathcal{G},W,v,\tilde{V},P)\) 4 Select \(e_{i}\). set \(e_{i}\leftarrow\arg\max_{e\in E}\hat{\mathcal{R}}_{v}(P\cup\{e\})\) 5 Update solution \(P\gets P\cup\{e_{i}\}\) 6 Update the graph \(\mathcal{G}\leftarrow(V,E\backslash\{e_{i}\})\) 7 Update the collection of walks \(W\) following the method stated in Section 8.2.1 8 return\(P\) ``` **Algorithm 5**FastICM\((\mathcal{G},k,v,l,\alpha,\phi)\) ## 9 Experiment In this section, we evaluate the efficiency and effectiveness of the proposed algorithms through experiments on diverse real-world and synthetic networks of varying types and sizes. ### _Experimental Setup_ In this section, we present the basic experimental settings, which encompass the machine configuration, datasets, baseline algorithms, and choices of parameters. **Machine Configuration.** Experiments, implemented in _Julia_, are conducted on a Linux server with 32G RAM and 4.2 GHz Intel i7-7700 CPU and with a single thread. The source code is publicly available on [https://github.com/hahaabc/fasticm](https://github.com/hahaabc/fasticm). **Datasets.** We test our algorithms on both real-world networks, from publically available datasets of Network Repository [46] and SNAP [49], and synthetic graphs. The name and some statistics of the experimented real-life networks are presented in Table I, sorted by the number of nodes. Various synthetic graph models have been introduced to mimic the real-world networks by capturing fundamental properties consistently observed in such networks, such as small diameter and scale-free degree distribution [50]. We use the well-established and popular BA [50] and WS [43] models. The parameters for these graphs have been chosen such that the average degree closely matches that of real-world networks of comparable size, as in Table I. (All experiments are conducted on the largest component of these graphs due to the connectivity requirement in our problem.) **Baselines.** To better illustrate the effectiveness of our two algorithms, we compare them against the following algorithms. We should stress that all these algorithms are enforced to satisfy the connectivity constraint. * Optimum: find an optimal edge set of size \(k\) using brute-force search. * Random: randomly choose \(k\) edges. * Betweenness: select \(k\) edges with the highest betweenness centrality. 
The betweenness of an edge here accounts for the number of shortest paths between the target node and other nodes which pass through that edge. * Spanning: select \(k\) edges with the highest spanning edge centrality [59], which is defined as the fraction of spanning trees of a graph that contain a certain edge. * Solver: pick \(k\) edges using Algorithm 1 equipped with the approximation technique in [9] plus the connectivity verification process. **Parameters.** For algorithms ApproxiSC and FastICM, we need to set some parameters. Firstly, smaller values of error parameter \(\epsilon\) in ApproxiSC and error parameter \(\alpha\) in FastICM provide more accurate results, while larger values result in higher efficiency. We set \(\epsilon=0.005\) and \(\alpha=0.05\) in our experiments because as it will be observed in our experiments in Section 9.4, they provide a suitable trade-off between accuracy and efficiency. The parameter \(\phi\) in FastICM is the effective resistance diameter of the graph. Since it is inherently difficult to determine its exact value, we instead use the diameter of the graph (reported in Table I), which gives an upper bound on \(\phi\). The length \(l\) of sampled walks can be determined by Lemma 8.2, which involves the spectral radius of matrix \(\mathbf{P}_{-\{v\}}\), and the ratio of invalid walks \(\gamma\). We set the ratio of invalid walks to a relatively small value, namely \(0.1\%\). (According to our experiments, \(0.1\%\) achieves a good trade-off between effectiveness and efficiency. Otherwise, there is nothing particularly unique about this value.) Computing the spectral radius takes much time for large networks, so we generously estimate it as \(\lambda=0.95\). Unless otherwise specified, the values of the other parameters are set by default when varying a parameter. ### _Effectiveness of Proposed Algorithms_ To evaluate the performance of our algorithms, we compare the quality of the returned edges by the Optimum and Random strategies against ours. The comparison is conducted on four small networks, consisting of two random graphs BA and WS with 50 nodes and two real-world networks, namely Karate and Dolphins. Due to the small size of these networks, we can compare the outcome of our algorithms against the optimal solution (obtained using brute-force search). We select 10 random target nodes. For every network and every \(k\) from \(1\) to \(5\), we run each algorithm 10 times, each time for one of the selected target nodes, and calculate the average final information centrality of target nodes. The lower the obtained value, the better the performance of the algorithm. The results, depicted in Fig. 6, demonstrate that the performance of the algorithms is consistent across all datasets and that our algorithms perform almost equally to the optimal solutions, reaching parity in the case of the Karate network. On the other hand, the Random scheme consistently underperforms, while the performance of Solver is similar to that of ours. As further evidence of the effectiveness of our algorithms, we test them on four larger real-world networks. Since finding optimal solutions through brute-force searching is not feasible due to computational limitations, we compare the results of our algorithms against Random, Betweenness, Spanning, and Solver algorithms. Similar to above, we randomly select 10 distinct target nodes to mitigate the impact of node positions on the results. 
We first determine the initial information centrality of each target node and then degrade it by removing up to \(k=10\) edges using our greedy algorithms and four others. After each edge removal, we calculate and record the updated information centrality. Finally, we average over the information centrality of all target nodes in each round and exhibit the results in Fig. 7. Our algorithms outperform the other algorithms on all tested networks. \begin{table} \begin{tabular}{c c c c|c|c c c} \hline \hline Network & \(n\) & \(m\) & \(Dim.\) & Network & \(n\) & \(m\) & \(Dim.\) \\ \hline karate & 34 & 78 & 5 & Erdos & 6,927 & 11,850 & 4 \\ Dolphins & 62 & 159 & 8 & Oregon & 10,900 & 31,180 & 9 \\ Bomb Train & 64 & 243 & 6 & ca-HepPh & 11,204 & 117,619 & 13 \\ Polbooks & 105 & 441 & 7 & Caida & 26,475 & 53,381 & 17 \\ Hamster & 921 & 4,032 & 8 & Twitter & 404,719 & 713,319 & 8 \\ Virgili & 1,133 & 5,451 & 8 & Delicious & 536,108 & 1,365,961 & 14 \\ ca-GrQc & 5,242 & 14,496 & 17 & FourSquare & 639,014 & 3,214,986 & 4 \\ as20000102 & 6,474 & 13,895 & 9 & YoutubeSnap & 1,134,890 & 2,987,624 & 20 \\ \hline \hline \end{tabular} \end{table} TABLE I: Some statistics of the experimented real-world networks. We denote the number of nodes and edges in the largest connected component by \(n\) and \(m\), respectively, and use \(Dim.\) to represent the diameter of a network. Fig. 6: Average information centrality of target nodes following edge removals, returned by different algorithms on four networks: BA (a), WS (b), Karate (c), and Dolphins (d). ### _Efficiency of Proposed Algorithms_ In the previous section, we observed that our algorithms consistently outperform other algorithms and produce near optimal solutions. Here, we focus on the run time analysis of these algorithms on different networks. For each network, 20 target nodes are selected randomly, for each of which, \(k=10\) edges are removed to minimize its information centrality using these algorithms. The average value of final information centrality for the targeted nodes and the average run time are reported in Table II for all four algorithms. As expected, the value of information centrality is pretty close for all three algorithms. However, ApproxiSC and FastICM are extremely more efficient than the other two algorithms, especially on larger networks. As explained before, Solver is slower than ApproxiSC and FastICM mainly due to its additional connectivity verification cost. FastICM can handle massive networks, such as FourSquare (with 639,014 nodes) and YoutubeSnap (with 1,134,890 nodes), in only a few hours while ExactSM and Solver fail to end before our cut-off time of 24 hours. ### _Impact of Error Parameters_ Recall that ApproxiSC and FastICM have the error parameters \(\epsilon\) and \(\alpha\), respectively. These error parameters balance efficiency and approximation guarantee. We examine their effect on two exemplary datasets, namely Hamster and Virgili. Intuitively speaking, increasing error parameters yields a looser approximation guarantee, which as a result may impact the quality of the selected edge set. Let \(\Delta\) be the difference between the final information centrality derived by ApproxiSC (analogously, FastICM) and ExactSM after edge removal. We report in Fig. 8 the resulting \(\Delta\) after removing 10 edges and the corresponding time consumed for selecting these edges. As expected, smaller \(\epsilon\) (analogously, \(\alpha\)) yield more accurate results, while larger values demonstrate greater efficiency. 
The results in Fig. 8 should also justify our default choices of \(\epsilon=0.005\) and \(\alpha=0.05\) in our experiments, since they provide an acceptable balance between the time cost and the approximation guarantee. (In these experiments, we have again used 10 randomly chosen target nodes.)

## 10 Conclusion and Future Work

Inspired by the imperative of possessing effective mechanisms to mitigate the potential deleterious effects incurred by a compromised/malicious node within a network, we have delved into the problem of moderating critical nodes. We investigated the setup where the goal is to minimize the information centrality of a target node by removing \(k\) edges while maintaining the network's connectivity. We proved the problem to be NP-complete and its objective function to be monotonically decreasing but non-supermodular. By advancing and developing several novel algorithmic techniques such as random walk-based Schur complement approximation, we provided two fast approximation algorithms and their theoretical analysis. Furthermore, our extensive experiments on various real-world and synthetic networks demonstrated that our proposed algorithms provide solutions very close to optimal in most cases. One of our algorithms, which has a nearly linear run time, can cover networks with over one million nodes on a single machine. Therefore, our algorithms not only permit a rigorous theoretical analysis, but also perform effectively and efficiently in practice.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Network} & \multicolumn{4}{c}{Information Centrality} & \multicolumn{4}{c}{Time (seconds)} \\ \cline{2-9} & FIM & ASC & ESM & SOL & FIM & ASC & ESM & SOL \\ \hline Polbooks & 2.667 & 2.661 & 2.603 & 2.621 & 3 & 6 & 8 & 20 \\ Hamster & 2.172 & 2.145 & 2.145 & 2.145 & 4 & 6 & 9 & 42 \\ Virgili & 2.704 & 2.704 & 2.686 & 2.687 & 4 & 9 & 16 & 73 \\ ca-GrQc & 1.614 & 1.600 & 1.598 & 1.600 & 70 & 83 & 517 & 371 \\ as20000102 & 1.349 & 1.330 & 1.336 & 1.339 & 144 & 194 & 3,175 & 298 \\ Erdős & 1.115 & 1.099 & 1.086 & 1.086 & 281 & 319 & 6,336 & 350 \\ Oregon & 1.612 & 1.612 & 1.586 & 1.603 & 814 & 912 & 19,357 & 1,568 \\ ca-HepPh & 2.621 & 2.610 & 2.605 & 2.610 & 412 & 621 & 7,570 & 1,075 \\ Caida & 1.379 & 1.364 & - & 1.374 & 2,047 & 2,580 & - & 4,789 \\ Twitter & 1.932 & 1.911 & - & - & 3,358 & 8,981 & - & - \\ Delicious & 2.012 & 1.931 & - & - & 7,287 & 25,823 & - & - \\ FourSquare & 1.491 & 1.401 & - & - & 19,461 & 68,981 & - & - \\ YoutubeSnap & 1.194 & 1.111 & - & - & 11,454 & 57,401 & - & - \\ \hline \hline \end{tabular} \end{table} TABLE II: The average final information centrality of 20 randomly chosen target nodes and running times for removing \(k=10\) edges using FastICM (FIM), ApproxiSC (ASC), ExactSM (ESM) and Solver (SOL) algorithms on several real-world networks.

Fig. 8: Impact of error parameters \(\epsilon\) and \(\alpha\) in the ApproxiSC and FastICM algorithms tested on two networks, Hamster (a) and Virgili (b).

Fig. 7: Average information centrality of target nodes following edge removals for different algorithms on four networks: Bomb Train (a), Virgili (b), ca-GrQc (c), and Erdős (d).

We aspire for this work to create the necessary grounds for further studies of moderating critical nodes in networks and to be the starting point for a long line of research on this topic. Below, we propose some potential avenues for future research to tackle the limitations of the present work.
* **Connectivity.** We considered the constraint of keeping the underlying network connected while removing edges. Motivated by various real-world applications, it would be important to investigate various notations of connectivity, e.g., \(t\)-connectivity (for \(t\geq 2\)), conductance, or algebraic definitions of connectivity. * **Edge Costs.** In our setup, the cost of removing all edges is the same. However, in the real world, removing some edges might be more costly than the others. Therefore, it would be interesting to investigate the problem when each edge has a cost assigned to it. * **Multiple Target Nodes.** A natural generalization of the moderation problem is to analysis the setup where one aims to minimize the overall information centrality of several target nodes at once. * **Weighted Networks.** In practice, edges have weights associated with them. For example, in a social network, an edge weight corresponds to the strength of the social tie between the corresponding two nodes and in an internet network, it could correspond to the bandwidth of the link between two routers. We believe to generalize our algorithms to cover the weighted setup, the introduction of novel ideas and advancement of existing techniques are required. * **Correlation with diffusion models.** In the future, we will try to find how information centrality correlates with specific diffusion models (cascades, thresholds, etc.)
Critical nodes in a network are highly vulnerable to malicious attacks and can trigger negative cascading events such as the spread of misinformation or disease. Effective moderation of such critical nodes is therefore essential for mitigating the potential damage caused by this kind of malicious spreading. Current moderation methods are computationally expensive and, moreover, ignore the fundamental metric of information centrality, which measures the spreading power of a node. Addressing this, we study the problem of minimizing the information centrality of a target node by removing \(k\) edges from the network while maintaining its connectivity. We prove that this problem is computationally challenging: it is NP-complete and its objective function is not supermodular. Nevertheless, we propose three approximate greedy algorithms that build on random walk-based Schur complement approximation and fast sampling.
2309.05843
Optimizing Audio Augmentations for Contrastive Learning of Health-Related Acoustic Signals
Health-related acoustic signals, such as cough and breathing sounds, are relevant for medical diagnosis and continuous health monitoring. Most existing machine learning approaches for health acoustics are trained and evaluated on specific tasks, limiting their generalizability across various healthcare applications. In this paper, we leverage a self-supervised learning framework, SimCLR with a Slowfast NFNet backbone, for contrastive learning of health acoustics. A crucial aspect of optimizing Slowfast NFNet for this application lies in identifying effective audio augmentations. We conduct an in-depth analysis of various audio augmentation strategies and demonstrate that an appropriate augmentation strategy enhances the performance of the Slowfast NFNet audio encoder across a diverse set of health acoustic tasks. Our findings reveal that when augmentations are combined, they can produce synergistic effects that exceed the benefits seen when each is applied individually.
Louis Blankemeier, Sebastien Baur, Wei-Hung Weng, Jake Garrison, Yossi Matias, Shruthi Prabhakara, Diego Ardila, Zaid Nabulsi
2023-09-11T22:03:34
http://arxiv.org/abs/2309.05843v1
# Optimizing Audio Augmentations for Contrastive Learning of Health-Related Acoustic Signals ###### Abstract Health-related acoustic signals, such as cough and breathing sounds, are relevant for medical diagnosis and continuous health monitoring. Most existing machine learning approaches for health acoustics are trained and evaluated on specific tasks, limiting their generalizability across various healthcare applications. In this paper, we leverage a self-supervised learning framework, SimCLR with a Slowfast NFNet backbone, for contrastive learning of health acoustics. A crucial aspect of optimizing Slowfast NFNet for this application lies in identifying effective audio augmentations. We conduct an in-depth analysis of various audio augmentation strategies and demonstrate that an appropriate augmentation strategy enhances the performance of the Slowfast NFNet audio encoder across a diverse set of health acoustic tasks. Our findings reveal that when augmentations are combined, they can produce synergistic effects that exceed the benefits seen when each is applied individually. **Keywords:** health acoustics, audio augmentation, contrastive learning ## 1 Introduction Non-speech, non-semantic sounds, like coughing and breathing, can provide information for doctors to detect various respiratory diseases, cardiovascular diseases and neurological diseases (Boschi et al., 2017; Zimmer et al., 2022). Advances in deep learning-based machine learning (ML) allow us to develop medical assistants and continuous health monitoring applications by learning effective acoustic data representations (Alqudahi et al., 2021). Current approaches for learning health acoustic representations are mostly trained and evaluated on specific tasks. For example, Botha et al. (2018); Larson et al. (2012); Tracey et al. (2011); Pahar et al. (2021) trained models to detect tuberculosis using cough sounds via supervised learning. However, it can be challenging to adopt these models directly for other health acoustic tasks. Retraining task specific health acoustic models requires manual data collection and labeling by clinical experts, which can be time consuming and costly. Researchers within the ML community have explored various self-supervised strategies to learn general purpose data representations that overcome the limitations of domain-specific representations (Balestriero et al., 2023). Among these approaches, contrastive learning has proven effective for generating robust representations across multiple data modalities, including images, videos, speech, audio, and periodic data (Chen et al., 2020; Jiang et al., 2020; Qian et al., 2021; Oord et al., 2018; Yang et al., 2022). Selecting appropriate data augmentations is crucial for performant contrasting learning algorithms (Chen et al., 2020) (see Related Works for details). Consequently, significant research has been conducted on the utility of various augmentations for images (Chen et al., 2020), videos (Qian et al., 2021), and speech/audio (Al-Tahan and Mohsenzadeh, 2021; Jiang et al., 2020). However, the unique characteristics of health-related acoustic signals, such as coughs and breathing sounds, which differ in pitch and tone from speech and music, raise questions about the applicability of existing contrastive learning and augmentation strategies in this specialized domain. 
To address this research gap, our study systematically explores eight distinct audio augmentation techniques and their combinations in the context of health acoustic representation learning. We employ the self-supervised contrastive learning framework, SimCLR (Chen et al., 2020), with a Slowfast NFNet backbone (Wang et al., 2022). After identifying the best combination of augmentations, we compare the performance of the resulting Slowfast NFNet against other state-of-the-art off-the-shelf audio encoders on 21 unique binary classification tasks across five datasets. This work offers two major contributions: (1) we identify augmentation parameters that work best when applied to health acoustics, and (2) we investigate the synergistic effects of combining audio augmentations for enhancing health acoustic representations using SimCLR. ## 2 Related Works In ML, data augmentation serves as a regularization technique to mitigate the risk of model overfitting (Zhang et al., 2021). Within the framework of contrastive learning, the objective is to learn data representations that minimize the distance between representations of semantically similar inputs and maximize the distance between representations of semantically dissimilar inputs. Data augmentations are critical for contrastive learning-based self-supervised learning (SSL), and eliminates the need for labeled data for representation learning. By applying a variety of augmentations to a single input, semantically consistent but distinct variations, commonly referred to as views, are generated (Von Kugelgen et al., 2021). The task then becomes pulling these related views closer together in the representational space, while concurrently pushing views derived from different, unrelated inputs farther apart, via a contrastive loss, such as InfoNCE in SimCLR (Chen et al., 2020). This approach establishes a form of invariance in the model, rendering it robust to the augmentations applied during the training process. Augmentations have been widely explored as part of contrastive learning-based SSL methods such as SimCLR, BYOL (Grill et al., 2020), MoCo (Chen et al., 2020), and SwAV (Caron et al., 2020). Data augmentations also enhance the performance of SSL methods broadly across different data modalities, including images (Chen et al., 2020), videos (Qian et al., 2021), audio (Al-Tahan and Mohsenzadeh, 2021; Niizumi et al., 2021), speech (Jiang et al., 2020), and 1-dimensional signals (e.g., human physiological signals) (Yang et al., 2022). In this study, we turn our attention toward a relatively underexplored domain: the application of data augmentations strategies for contrastive learning of health acoustic signals. The most closely related area of research to our focus on health acoustics is the research investigating augmentation strategies for speech and audio data. Early research by Ko et al. (2015) explored creating two augmented speech signals with speeds relative to the original of 0.9 and 1.1. This yielded performance improvements across four speech recognition tasks. Jansen et al. (2018) expanded upon this by introducing a triplet loss for audio representation learning, incorporating random noise, time/frequency translation, example mixing, and temporal proximity augmentations. Jiang et al. 
(2020) employed an adaptation of SimCLR for speech data, termed Speech SimCLR, where they applied a diverse set of augmentations: random pitch shift, speed perturbation, room reverberation and additive noise to the original waveform, as well as time and frequency masking to the spectrogram. Niizumi et al. (2021) developed a comprehensive audio augmentation module including pre-normalization, foreground acoustic event mixup, random resize cropping and post-normalization. Fonseca et al. (2021) investigated a multi-modal approach by adopting augmentations from both vision and audio domains, including random resized cropping, random time/frequency shifts, compression, SpecAugment (Park et al., 2019), Gaussian noise addition, and Gaussian blurring. They also used sound separation techniques for sound event detection to enable targeted data augmentations (Fonseca et al., 2021). Shi et al. (2022) explored the impact of noise injection as an augmentation strategy to bolster the robustness of speech models. CLAR identified six augmentation operations: pitch shift, noise injection in frequency domain, and fade in/out, time masking, time shift, time stretching in the temporal domain, and explored their utility for audio contrastive learning (Al-Tahan and Mohsenzadeh, 2021). In this study, we build upon these ideas to systematically investigate the optimal combination and sequence of augmentation strategies, with a specific focus on developing robust representations for health acoustics. ## 3 Methods The study is structured into three phases. The first phase consists of finding the best parameters for each augmentation that we consider for use with SimCLR. In the second phase, we investigate various combinations of augmentations, where we apply one or two successive augmentations to create each view of the input. Here, we use the augmentation parameters that we select in the first phase. In the third phase, we compare the results of our best performing model to other state-of-the-art audio encoder models on the validation set used for comparing augmentations. We choose to hold out the test sets due to ongoing model development and these results may thus be optimistic. This evaluation involves 21 unique downstream tasks across five datasets and we investigate the quality of embeddings generated from each audio encoder using linear probing (Kohn, 2015). Our study employs SimCLR with a 63 million parameter SlowFast NFNet-F0 as the neural network backbone (Chen et al., 2020; Wang et al., 2022). Audio AugmentationsWe investigate eight augmentations (Figure 1). These include the following time-domain augmentations: crop and pad, noising, Brownian tape speed (Weng et al., 2023), scaling, pitch shift, time stretch, and circular time shift. Additionally, we experiment with SpecAugment which is applied after the transformation of audio inputs into spectrograms (Park et al., 2019). A description of each augmentation strategy is provided in Appendix Table A1. Each of the augmentations offers a tunable parameter space to allow for varying degrees of transformational intensity. To identify the optimal hyperparameters for each specific augmentation, we first conduct an exhaustive grid search. After we determine the best augmentation parameters, we explore the potential synergistic effects from the sequential application of either one or two successive augmentations. Since we include 8 augmentations, experimenting with every permutation of one or two augmentations would result in 64 experiments. 
However, in this work, SpecAugment was only applied after the time domain augmentations which reduced the number of 2-step augmentations to 57. DatasetsFor this study, we curate a training dataset, YT-NS (YouTube Non-Semantic), consisting of two-second long audio clips extracted from one billion non-copyrighted YouTube videos, totalling about 255 million 2s clips or 142k hours. We apply a convolutional neural network-based health acoustic detector model, trained on two public health acoustic AudioSet derivatives, FSD50K and Flusense, as well as another health acoustic dataset. We use this model to filter two-second audio clips from these one billion videos for the following health acoustic signals: coughs, speech, laughing, throat-clearing, baby coughs, and breathing. Estimated numbers of each of these clips is provided in Appendix Table A2. The Slowfast NFNet encoder is trained solely using this dataset. For evaluation, we use five publicly available datasets, FSD50K (Fonseca et al., 2021), Flusense (Al Hossain et al., 2020), PSG (Korompili et al., 2021), CoughVID (Orlandic et al., 2021), and Coswara (Bhattacharya et al., 2023). We describe evaluation datasets in Appendix Table A3. Evaluation21 unique downstream binary classification tasks across five datasets are leveraged to evaluate the quality of health acoustic representations generated from the learned audio encoders, including 13 human acoustic event classifications, five sleep apnea-specific tasks, and three cough relevant tasks. The cough tasks include COVID detection, sex classification, and smoking status classification. Figure 1: Mel spectrograms generated from various augmentations applied to the same health acoustic sample. One two-second example from the CoughVID dataset (Orlandic et al., 2021) is acquired and modified by each augmentation method. For phases 1 and 2 of our study where we identify the best parameters for each augmentation, as well as the best combination of augmentations, we develop a composite score that aggregates performance across the various downstream tasks. The PSG, CoughVid, and Coswara datasets are segmented into two-second clips. For Flusense, we preprocess the data by segmenting variable length clips using the labeled timestamps. For FSD50K and Flusense, we adopt a lightweight evaluation strategy where we randomly sample a single two second long clip from each longer clip. We take the average area under the receiver operating characteristic curve (AUROC) across these tasks and use this composite measure to rank augmentation strategies. For phase 3, we segment the PSG data into 10 second clips, and for FSD50K and Flusense, we crop or zero pad each clip to 10 seconds. We adopt a sliding window approach for FSD50K, Flusense, and PSG, where embeddings are generated for two-second windows with a step size of one second. We apply mean pooling to the resulting embeddings to generate our final output embedding. For all phases, we use linear probing to evaluate the quality of the generated representations. We use logistic regression with cross-validated ridge penalty, which is trained to predict binary labels from the frozen precomputed embeddings (Kohn, 2015). We report AUROC for all tasks and use the DeLong method to compute the 95% confidence intervals (CIs) (DeLong et al., 1988). Baseline ModelsFor comparative evaluation, we consider several off-the-shelf audio encoders, each trained on semantic or non-semantic speech data. 
Specifically, our baseline models include TRILL (Shor et al., 2020), which is a publicly available ResNet50 architecture trained on an AudioSet subset that is enriched with speech labels. FRILL (Peplinski et al., 2020) is a light-weight MobileNet-based encoder distilled from TRILL. BigSSL-CAP12 (Shor et al., 2022) leverages a Conformer-based architecture, trained on YouTube and LibriLight. ## 4 Results Optimal augmentation parametersIn Appendix Table A1, we display the optimal parameters for each augmentation derived from the associated grid searches. We find that up to a certain threshold, generally more intense augmentation parameters yield better performance. Comparing augmentationsComparing the left and right panels of Figure 2 shows that many augmentations perform better in combination than individually. Our analysis indicates that the most effective single augmentation strategy is SpecAugment (left panel in Figure 2). The most effective 2-step augmentation strategy involves applying circular time shift, followed by time stretch, as depicted in Figure 2. Interestingly, circular time shift does not perform well on its own and each of these augmentations individually underperform SpecAugment. However, circular time shift and time stretch are synergistic when applied together. The right panel of Figure 2 shows that on average, time stretch is the most useful first augmentation, excluding SpecAugment which is always applied second or alone. SpecAugment is the most useful second augmentation on average. Comparing to baselinesAppendix Tables 4, 5 demonstrate performance of the best SimCLR model versus the baseline models on the validation set used for the comparison of augmentations. Overall, the performance of the SimCLR model is similar to BigSSL-CAP12, despite training on about 10x less hours of data and using a model that is nearly 10x smaller, and outperforms off-the-shelf audio encoders. Figure 2: Evaluation performance for comparing augmentation combinations. (Left) from single augmentations. (Right) two augmentations applied where rows represent the first augmentation and columns represent the second augmentation. ## 5 Discussion and Conclusion We investigated a comprehensive list of augmentations for use in the health acoustic domain. We demonstrated the synergistic benefit of the circular time shift and time stretch augmentations. Circular time shift and time-stretching may synergistically improve model generalizability by introducing a diverse range of temporal patterns for the same sound. There are few limitations worth noting. We decided to keep our test sets held out for ongoing model development, thus our comparisons to baselines may be optimistic. We also confined our analysis to a single Slowfast NFNet architecture. This leaves open the possibility that different architectures could yield varying results. Future research may focus on other augmentations, including frequency domain augmentations, as well as augmentations that better leverage health acoustic inductive biases. Additionally, incorporating labels during training (Khosla et al., 2020), such as health signal type, may further improve the learned representations. ## Acknowledgments We thank Yun Liu from Google Research for his critical feedback, Shao-Po Ma for his preliminary work on the PSG dataset, CoughVID and Project Coswara teams for making the datasets publicly available, and the Google Research team for software and hardware infrastructure support. 
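As a concrete illustration of the best-performing 2-step augmentation reported above (circular time shift followed by time stretch), the sketch below builds augmented views of a raw waveform with numpy and librosa. This is our own minimal reconstruction: the function names, the shift/stretch ranges, and the 16 kHz sampling rate are illustrative assumptions rather than the tuned grid-search parameters.

```python
import numpy as np
import librosa

def circular_time_shift(wave: np.ndarray, max_frac: float = 0.5) -> np.ndarray:
    """Rotate the waveform by a random offset; samples wrap around the ends."""
    shift = np.random.randint(0, int(max_frac * len(wave)) + 1)
    return np.roll(wave, shift)

def time_stretch(wave: np.ndarray, low: float = 0.8, high: float = 1.2) -> np.ndarray:
    """Speed the clip up or slow it down without changing pitch, then
    crop or zero-pad back to the original length."""
    rate = np.random.uniform(low, high)
    stretched = librosa.effects.time_stretch(wave, rate=rate)
    if len(stretched) >= len(wave):
        return stretched[: len(wave)]
    return np.pad(stretched, (0, len(wave) - len(stretched)))

def two_step_view(wave: np.ndarray) -> np.ndarray:
    """One augmented 'view' for contrastive training: shift, then stretch."""
    return time_stretch(circular_time_shift(wave))

# Two independently augmented views of the same two-second clip (16 kHz assumed).
clip = np.random.randn(2 * 16000).astype(np.float32)
view_a, view_b = two_step_view(clip), two_step_view(clip)
```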
PSG, CoughVID and Coswara are licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License and follow the Disclaimer of Warranties and Limitation of Liability in the license.
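The linear-probing evaluation described in the Methods section (a ridge-penalized logistic regression trained on frozen embeddings and scored with AUROC) can be reproduced generically with scikit-learn. The snippet below is a sketch under our own assumptions rather than the authors' evaluation code; the random arrays merely stand in for precomputed audio-encoder embeddings and task labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def linear_probe_auroc(train_emb, train_y, test_emb, test_y):
    """Fit an L2 (ridge) penalized logistic regression, with the penalty
    strength chosen by cross-validation, on frozen embeddings; report AUROC."""
    probe = make_pipeline(
        StandardScaler(),
        LogisticRegressionCV(Cs=10, cv=5, penalty="l2", max_iter=5000),
    )
    probe.fit(train_emb, train_y)
    scores = probe.predict_proba(test_emb)[:, 1]
    return roc_auc_score(test_y, scores)

# Dummy data standing in for frozen encoder embeddings and binary labels.
rng = np.random.default_rng(0)
train_emb, test_emb = rng.normal(size=(500, 256)), rng.normal(size=(200, 256))
train_y, test_y = rng.integers(0, 2, 500), rng.integers(0, 2, 200)
print(f"AUROC: {linear_probe_auroc(train_emb, train_y, test_emb, test_y):.3f}")
```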
Health-related acoustic signals, such as cough and breathing sounds, are important for medical diagnosis and continuous health monitoring. Most existing machine learning approaches for health acoustics are trained and evaluated on specific tasks, which limits their generalizability across various healthcare applications. In this paper, we use a self-supervised learning framework, SimCLR with a Slowfast NFNet backbone, for contrastive learning of health acoustics. A crucial element in optimizing Slowfast NFNet for this application is identifying effective audio augmentations. Through an investigation of various audio augmentation strategies, we identify an appropriate augmentation strategy that improves the performance of the Slowfast NFNet audio encoder. Our findings show that when augmentations are combined, they produce synergistic effects that exceed the benefits obtained when each is applied individually.
2309.04585
Asynchronous Distributed Optimization via ADMM with Efficient Communication
In this paper, we focus on an asynchronous distributed optimization problem. In our problem, each node is endowed with a convex local cost function, and is able to communicate with its neighbors over a directed communication network. Furthermore, we assume that the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We present a distributed algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. In our proposed algorithm, nodes exchange quantized valued messages and operate in an asynchronous fashion. More specifically, during every iteration of our algorithm each node (i) solves a local convex optimization problem (for the one of its primal variables), and (ii) utilizes a finite-time quantized averaging algorithm to obtain the value of the second primal variable (since the cost function for the second primal variable is not decomposable). We show that our algorithm converges to the optimal solution at a rate of $O(1/k)$ (where $k$ is the number of time steps) for the case where the local cost function of every node is convex and not-necessarily differentiable. Finally, we demonstrate the operational advantages of our algorithm against other algorithms from the literature.
Apostolos I. Rikos, Wei Jiang, Themistoklis Charalambous, Karl H. Johansson
2023-09-08T20:27:42
http://arxiv.org/abs/2309.04585v1
# Asynchronous Distributed Optimization via ADMM ###### Abstract In this paper, we focus on an asynchronous distributed optimization problem. In our problem, each node is endowed with a convex local cost function, and is able to communicate with its neighbors over a directed communication network. Furthermore, we assume that the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We present a distributed algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. In our proposed algorithm, nodes exchange quantized valued messages and operate in an asynchronous fashion. More specifically, during every iteration of our algorithm each node (i) solves a local convex optimization problem (for the one of its primal variables), and (ii) utilizes a finite-time quantized averaging algorithm to obtain the value of the second primal variable (since the cost function for the second primal variable is not decomposable). We show that our algorithm converges to the optimal solution at a rate of \(O(1/k)\) (where \(k\) is the number of time steps) for the case where the local cost function of every node is convex and not-necessarily differentiable. Finally, we demonstrate the operational advantages of our algorithm against other algorithms from the literature. ## I Introduction The problem of distributed optimization has received extensive attention in recent years. Due to the rise of large-scale machine learning [1], control [2], and other data-driven applications [3], there is a growing need to solve optimization problems that involve massive amounts of data. Solving these problems in a centralized way is proven to be infeasible since it is difficult or impossible to store and process large amounts of data on a single node. Distributed optimization is a method that distributes data across multiple nodes. Each node performs computations on its stored data and collaborates with others to solve the optimization problem collectively. This approach optimizes a global objective function by combining each node's local objective function and coordinating with the network. The advantage is reducing computational and storage requirements for individual nodes. However, frequent communication with neighboring nodes is necessary to update optimization variables. This can become a bottleneck with increasing nodes or data. To address this issue, recent attention from the scientific community focuses on developing optimization algorithms with efficient communication. This leads to enhancements on scalability and operational efficiency, while mitigating issues like network congestion, latency, and bandwidth limitations. **Existing Literature.** Most works in the literature assume that nodes can process and exchange real values. This may result in communication overhead, especially for algorithms requiring frequent and complex communication (see, e.g., [4, 5, 6, 7, 8, 9, 10]). In practical applications, nodes must exchange quantized messages to efficiently utilize network resources like energy and processing power. For this reason, recent research focuses on communication-efficient algorithms (e.g., [6, 7, 11, 12, 13, 14, 15]), but they often assume perfectly synchronized nodes or bidirectional communication, limiting their applicability. 
Addressing communication overhead remains a key challenge, necessitating the development of communication-efficient algorithms that can operate over directed networks asynchronously. Therefore, continued research in this area is crucial to overcoming this bottleneck and enhancing the performance of distributed optimization methods. **Main Contributions.** Existing algorithms in the literature often assume that nodes can exchange precise values of their optimization variables and operate synchronously. However, transmitting exact values (often irrational numbers) necessitates an infinite number of bits and becomes infeasible. Moreover, synchronizing nodes within a distributed network involves costly protocols, time-consuming to execute. In this paper, we present a distributed optimization algorithm, which aims to address these challenges. More specifically, we make the following contributions. **A.** We present a distributed optimization algorithm that leverages the advantages of the ADMM optimization strategy and operates over a directed communication graph. Our algorithm allows nodes to operate in an asynchronous fashion, and enables efficient communication as nodes communicate with quantized messages; see Algorithm 1. **B.** We prove that our algorithm converges to the optimal solution at a rate of \(O(1/k)\) even for non-differentiable and convex local cost functions (as it is the case for similar algorithms with real-valued states). This rate is justified in our simulations in which our algorithm exhibits com parable performance with real-valued communication algorithms while guaranteeing efficient (quantized) communication among nodes; see Section VI. Furthermore, we show that the optimal solution is calculated within an error bound that depends on the quantization level; see Theorem 1. ## II Notation and Preliminaries **Notation.** The sets of real, rational, integer and natural numbers are denoted by \(\mathds{R},\mathds{Q},\mathds{Z}\) and \(\mathds{I}\mathds{N}\), respectively. The symbol \(\mathds{Z}_{\geq 0}\) (\(\mathds{Z}_{>0}\)) denotes the set of nonnegative (positive) integer numbers. The symbol \(\mathds{R}_{\geq 0}\) (\(\mathds{R}_{>0}\)) denotes the set of nonnegative (positive) real numbers. The symbol \(\mathds{R}_{\geq 0}^{n}\) denotes the nonnegative orthant of the \(n\)-dimensional real space \(\mathds{R}^{n}\). Matrices are denoted with capital letters (e.g., \(A\)), and vectors with small letters (e.g., \(x\)). The transpose of matrix \(A\) and vector \(x\) are denoted as \(A^{\top}\), \(x^{\top}\), respectively. For any real number \(a\in\mathds{R}\), the floor \(\lfloor a\rfloor\) denotes the greatest integer less than or equal to \(a\) while the ceiling \(\lceil a\rceil\) denotes the least integer greater than or equal to \(a\). For any matrix \(A\in\mathds{R}^{n\times n}\), the \(a_{ij}\) denotes the entry in row \(i\) and column \(j\). By \(\mathds{1}\) and \(\mathds{I}\) we denote the all-ones vector and the identity matrix of appropriate dimensions, respectively. By \(\|\cdot\|\), we denote the Euclidean norm of a vector. **Graph Theory.** The communication network is captured by a directed graph (digraph) defined as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). This digraph consists of \(n\) (\(n\geq 2\)) agents communicating only with their immediate neighbors, and is static (i.e., it does not change over time). 
In \(\mathcal{G}\), the set of nodes is denoted as \(\mathcal{V}=\{v_{1},v_{2},...,v_{n}\}\), and the set of edges as \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\cup\{(v_{i},v_{i})\mid v_{i} \in\mathcal{V}\}\) (note that each agent has also a virtual self-edge). The cardinality of the sets of nodes, edges are denoted as \(|\mathcal{V}|=N\), \(|\mathcal{E}|=m\), respectively. A directed edge from node \(v_{i}\) to node \(v_{l}\) is denoted by \((v_{l},v_{i})\in\mathcal{E}\), and captures the fact that node \(v_{l}\) can receive information from node \(v_{i}\) at time step \(k\) (but not the other way around). The subset of nodes that can directly transmit information to node \(v_{i}\) is called the set of in-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{-}=\{v_{j}\in\mathcal{V}\mid(v_{i},v_{j})\in\mathcal{E}\}\). The subset of nodes that can directly receive information from node \(v_{i}\) is called the set of out-neighbors of \(v_{i}\) and is represented by \(\mathcal{N}_{i}^{+}=\{v_{l}\in\mathcal{V}\mid(v_{l},v_{i})\in\mathcal{E}\}\). The _in-degree_, and _out-degree_ of \(v_{j}\) are denoted by \(\mathcal{D}_{i}^{-}=|\mathcal{N}_{i}^{-}|\), \(\mathcal{D}_{i}^{+}=|\mathcal{N}_{i}^{+}|\), respectively. The diameter \(D\) of a digraph is the longest shortest path between any two nodes \(v_{l},v_{i}\in\mathcal{V}\). A directed _path_ from \(v_{i}\) to \(v_{l}\) of length \(t\) exists if we can find a sequence of agents \(i\equiv l_{0},l_{1},\ldots,l_{t}\equiv l\) such that \((l_{r+1},l_{r})\in\mathcal{E}\) for \(\tau=0,1,\ldots,t-1\). A digraph is _strongly connected_ if there exists a directed path from every node \(v_{i}\) to every node \(v_{l}\), for every \(v_{i},v_{l}\in\mathcal{V}\). **ADMM Algorithm.** The standard ADMM algorithm [16] is designed to solve the following problem: \[\min_{x\in\mathds{R}^{p},z\in\mathds{R}^{m}} f(x)+g(x),\] (1) s.t. \[Ax+Bz=c,\] where \(A\in\mathds{R}^{q\times p}\), \(B\in\mathds{R}^{q\times m}\) and \(c\in\mathds{R}^{q}\) (for \(q,p,m\in\mathds{N}\)). In order to solve (1), the augmented Lagrangian is: \[L_{\rho}(x,z,\lambda)=f(x)+g(x) +\lambda(Ax+Bz-c)\] \[+\frac{\rho}{2}\|Ax+Bz-c\|^{2}, \tag{2}\] where \(\lambda\in\mathds{R}\) is the Lagrange multiplier, and \(\rho\in\mathds{R}\) is the positive penalty parameter. The primary variables \(x\), \(z\) and the Lagrangian multiplier \(\lambda\) are initialized as \([x,z,\lambda]^{\top}=[x^{[0]},z^{[0]},\lambda^{[0]}]^{\top}\). Then, during every ADMM time step, the \(x\), \(z\) and \(\lambda\) are updated as: \[x^{[k+1]}= \operatorname*{arg\,min}_{x\in\mathds{R}^{p}}L_{\rho}(x,z^{[k]}, \lambda^{[k]}), \tag{3}\] \[z^{[k+1]}= \operatorname*{arg\,min}_{z\in\mathds{R}^{m}}L_{\rho}(x^{[k+1]},z,\lambda^{[k]}),\] (4) \[\lambda^{[k+1]}= \lambda^{[k]}+\rho(Ax^{[k+1]}+Bz^{[k+1]}-c), \tag{5}\] where \(\rho\) in (5) is the penalty parameter from (2). **Asymmetric Quantizers.** Quantization is a strategy that lessens the number of bits needed to represent information. This reduces the required communication bandwidth and increases power and computation efficiency. Quantization is mainly used to describe communication constraints and imperfect information exchanges between nodes [17]. The three main types of quantizers are (i) asymmetric, (ii) uniform, and (iii) logarithmic. In this paper, we rely on asymmetric quantizers to reduce the required communication bandwidth. Note that the results of this paper are transferable to other quantizer types (e.g., logarithmic or uniform). 
Asymmetric quantizers are defined as \[q_{\Delta}^{a}(\xi)=\Big{\lfloor}\frac{\xi}{\Delta}\Big{\rfloor}, \tag{6}\] where \(\Delta\in\mathds{Q}\) is the quantization level, \(\xi\in\mathds{R}\) is the value to be quantized, and \(q_{\Delta}^{a}(\xi)\in\mathds{Q}\) is the quantized version of \(\xi\) with quantization level \(\Delta\) (note that the superscript "\(a\)" indicates that the quantizer is asymmetric). ## III Problem Formulation **Problem Statement.** Let us consider a distributed network modeled as a digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with \(n=|\mathcal{V}|\) nodes. In our network \(\mathcal{G}\), we assume that the communication channels among nodes have limited bandwidth. Each node \(v_{i}\) is endowed with a scalar local cost function \(f_{i}(x):\mathds{R}^{p}\mapsto\mathds{R}\) only known to node \(v_{i}\). In this paper we aim to develop a distributed algorithm which allows nodes to cooperatively solve the following optimization problem \[\min_{x\in\mathds{R}^{p}} \sum_{i=1}^{n}f_{i}(x), \tag{7}\] where \(x\in\mathds{R}^{p}\) is the global optimization variable (or common decision variable). We will solve (7) via the distributed ADMM strategy. Furthermore, in our solution we guarantee efficient communication between nodes (due to communication channels of limited bandwidth in the network). **Modification of the Optimization Problem.** In order to solve (7) via the ADMM and guarantee efficient communication between nodes, we introduce (i) the variable \(x_{i}\) for every node \(v_{i}\), (ii) the constraint \(|x_{i}-x_{j}|\leq\epsilon\) for every \(v_{i},v_{j}\in\mathcal{V}\) (where \(\epsilon\in\mathds{R}\) is an error tolerance which is predefined), and (iii) the constraint that nodes communicate with quantized values. The second constraint is introduced to allow an asynchronous implementation of the distributed ADMM strategy, and the third constraint to guarantee efficient communication between nodes. Considering the aforementioned constraints (i), (ii) and (iii), (7) becomes: \[\min_{x_{i}} \sum_{i=1}^{n}f_{i}(x_{i}),i=1,...,n\] (8) s.t. \[|x_{i}-x_{j}|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V},\] (9) nodes communicate with quantized values. (10) Let us now define a closed nonempty convex set \(\mathcal{C}\) as \[\mathcal{C}=\left\{\begin{bmatrix}x_{1}^{\mathrm{ T}}&x_{2}^{\mathrm{ T}}&\ldots&x_{n}^{\mathrm{ T}}\end{bmatrix}^{\mathrm{ T}}\in\mathds{R}^{np}\,:\,\left\|x_{i}-x_{j}\right\|\leq\epsilon\right\}. \tag{11}\] Furthermore, denote \(X\coloneqq\begin{bmatrix}x_{1}^{\mathrm{ T}}&x_{2}^{\mathrm{ T}}&\ldots&x_{n}^{\mathrm{ T}}\end{bmatrix}^{\mathrm{ T}}\) and its copy variable \(z\in\mathds{R}^{np}\). This means that (9) and (11) become \[X=z,\,\forall z\in\mathcal{C}. \tag{12}\] Now let us define the indicator function \(g(z)\) of set \(\mathcal{C}\) as \[g(z)=\left\{\begin{array}{ll}0,&\mbox{if}\,z\in\mathcal{C},\\ \infty,&\mbox{otherwise}.\end{array}\right. \tag{13}\] Incorporating (12) and (13) into (8), we have that (8) becomes \[\min_{z,x_{i}} \left\{\sum_{i=1}^{n}f_{i}(x_{i})+g(z)\right\},i=1,\ldots,n\] (14) s.t. \[X-z=0,\,\forall z\in\mathcal{C},\] nodes communicate with quantized values. As a result, in this paper, we aim to design a distributed algorithm that solves (14) via the distributed ADMM strategy. ## IV Preliminaries on Distributed Coordination We now present a definition of asynchrony (borrowed from [8]) that defines the operation of nodes in the network. 
Furthermore, we present a distributed coordination algorithm that operates with quantized values and is necessary for our subsequent development. ### _Definition of Asynchronous Operation_ During their optimization operation, nodes aim to coordinate in an asynchronous fashion. Specifically, let us assume that the iterations for the optimization operation start at time step \(t(0)\in\mathds{R}_{+}\). Furthermore, we assume that one (or more) nodes transmit values to their out-neighbors at a set of time instances \(\mathcal{T}=\{t(1),t(2),t(3),\ldots\}\). During the nodes' asynchronous operation, a message that is received at time step \(t(\eta_{1})\) from node \(v_{i}\), is processed at time step \(t(\eta_{2})\) where \(\eta_{2}>\eta_{1}\). This means that the message received at time step \(t(\eta_{1})\) suffers from a processing delay of \(t(\eta_{2})-t(\eta_{1})\) time steps. An example of how processing delays affecting transmissions is shown in Fig. 1 (that is borrowed from [8]). Note here that the nodes states at time step \(t(\eta)\) are indexed by \(\eta\). This means that the state of node \(v_{i}\) at time step \(t(\eta)\) is denoted as \(x_{i}^{\eta}\in\mathds{R}^{p}\). We now present the following assumption which is necessary for the asynchronous operation of every node. **Assumption 1**: _The number of time steps required for a node \(v_{i}\) to process the information received from its in-neighbors is upper bounded by \(\mathcal{B}\in\mathds{N}\). Furthermore, the actual time (in seconds) required for a node \(v_{i}\) to process the information received from its in-neighbors is upper bounded by \(T\in\mathds{R}_{\geq 0}\)._ Assumption 1 states that there exists a finite number of steps \(\mathcal{B}\) before which all nodes have updated their states and proceed to perform transmissions to their neighboring nodes. The upper bound \(\mathcal{B}\) is translated to an upper bound of \(T\) in actual time (in seconds). This is mainly because it is not possible for nodes to count the number of time steps elapsed in the network (and understand when \(\mathcal{B}\) time steps have passed. The value \(T\) can be counted by each node individually. ### _Asynchronous \(\max\)/\(\min\) - Consensus_ In asynchronous max/\(\min\) consensus (see [18]), the update rule for every node \(v_{i}\in\mathcal{V}\) is: \[x_{i}^{[k+\theta_{i}^{[k]}]}=\max_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}} \{x_{j}^{[k+\theta_{i_{j}}^{[k]}]}\}, \tag{15}\] where \(\theta_{i}^{[k]}\) is the update instance of node \(v_{i}\), \(x_{j}^{[k+\theta_{ij}^{[k]}]}\) are the states of the in-neighbors \(v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}\) during the time instant of \(v_{i}\)'s update, \(\theta_{ij}^{[k]}\) are the asynchronous state updates of the in-neighbors of node \(v_{i}\) that occur between two consecutive updates of node \(v_{i}\)'s state. The asynchronous max/\(\min\) consensus in (15) converges to the maximum value among all nodes in a finite number of steps \(s^{\prime}\leq D\mathcal{B}\) (see [18]), where \(D\) is the diameter of the network, and \(\mathcal{B}\) is the upper bound on the number of time steps required for a node \(v_{j}\) to process the information received from its in-neighbors. ## V Distributed Asynchronous Optimization via ADMM with Efficient Communication In this section we present a distributed algorithm which solves problem (14). Before presenting the operation of the Fig. 
1: Example of how processing and transmission delays affect the operation of nodes \(v_{1}\), \(v_{2}\), \(v_{3}\). Blue dots indicate the iterations and blue arrows indicate the transmissions. Transmissions occur at time steps \(t_{i}(\eta)\), and \(t_{i}(\eta+1)-t_{i}(\eta)\) is the processing delay, where \(i\in\{1,2,3\}\), \(\eta\in\mathds{Z}_{\geq 0}\). The time difference from the blue dot to the blue arrow is the transmission delay [8]. proposed algorithm, we analyze the ADMM operation over the problem (14). In (14), let us denote \(F(X)\coloneqq\sum_{i=1}^{n}f_{i}(x_{i})\). This means that the Lagrangian function is equal to \[L(X,z,\lambda)=F(X)+g(z)+\lambda^{\mathrm{T}}(X-z), \tag{16}\] where \(\lambda\in\mathds{R}^{np}\) is the Lagrange multiplier. We now make the following assumptions to solve the problem (14). **Assumption 2**: _Every cost function \(f_{i}:\mathds{R}^{p}\to\mathds{R}\) is closed, proper and convex._ **Assumption 3**: _The Lagrangian \(L(X,z,\lambda)\) has a saddle point. This means that there exists \((X^{*},z^{*},\lambda^{*})\), for which_ \[L(X^{*},z^{*},\lambda)\leq L(X^{*},z^{*},\lambda^{*})\leq L(X,z,\lambda^{*}), \tag{17}\] _for all \(X\in\mathds{R}^{np}\), \(z\in\mathds{R}^{np}\), and \(\lambda\in\mathds{R}^{np}\)._ Assumption 2 means that the local cost function \(f_{i}\) of every node \(v_{i}\) can be non-differentiable (see [19]). Furthermore, Assumptions 2 and 3 mean that \(L(X,z,\lambda^{*})\) is convex in \((X,z)\) and \((X^{*},z^{*})\) is a solution to problem (14) (see [19, 20]). Note that this is also based on the definition of \(g(z)\) in (13). Note here that our results extend naturally to strongly convex cost functions, since strong convexity implies convexity. Let us now focus on the Lagrangian of the problem in (14). At time step \(k\), the augmented Lagrangian of (14) is \[L_{\rho}(X^{[k]},z^{[k]},\lambda^{[k]}) \tag{18}\] \[= \sum_{i=1}^{n}f_{i}(x_{i}^{[k]})+g(z^{[k]})+\lambda^{[k]^{\mathrm{ T}}}(X^{[k]}-z^{[k]})\] \[+ \frac{\rho}{2}\|X^{[k]}-z^{[k]}\|^{2}\] \[= \sum_{i=1}^{n}\Big{(}f_{i}(x_{i}^{[k]})+\lambda_{i}^{[k]^{\mathrm{ T}}}(x_{i}^{[k]}-z_{i}^{[k]})+\frac{\rho}{2}\|x_{i}^{[k]}-z_{i}^{[k]}\|^{2} \Big{)}\] \[+ g(z^{[k]}),\] where \(z_{i}\in\mathds{R}^{p}\) is the \(i^{th}\) element of vector \(z\). In (3)-(5) we ignore terms that are independent of the optimization variables such as \(x_{i},z\) for node \(v_{i}\). This means that (3)-(5) become: \[x_{i}^{[k+1]}= \operatorname*{argmin}_{x_{i}}f_{i}(x_{i})+\lambda_{i}^{[k]^{ \mathrm{T}}}x_{i}+\frac{\rho}{2}\|x_{i}-z_{i}^{[k]}\|^{2}, \tag{19}\] \[z^{[k+1]}= \operatorname*{argmin}_{z}g(z)+\lambda^{[k]^{\mathrm{T}}}(X^{[k+1 ]}-z)+\frac{\rho}{2}\|X^{[k+1]}-z\|^{2}\] \[= \operatorname*{argmin}_{z}g(z)+\frac{\rho}{2}\|X^{[k+1]}-z+\frac{1 }{\rho}\lambda^{[k]}\|^{2},\] (20) \[\lambda_{i}^{[k+1]}= \lambda_{i}^{[k]}+\rho(x_{i}^{[k+1]}-z_{i}^{[k+1]}). \tag{21}\] Note that for (20) we use the identity \(2a^{T}b+b^{2}=(a+b)^{2}-a^{2}\) for \(a=\lambda^{[k]}/\rho\) and \(b=X^{[k+1]}-z\). Equations (19), (21) can be executed independently by node \(v_{i}\) in a parallel fashion. Specifically, node \(v_{i}\) can solve (19) for \(x_{i}^{[k+1]}\) by a classical method (e.g., a proximity operator [19, Section 4]), and implement trivially (21) for \(\lambda_{i}^{[k+1]}\). In (13), \(g(z)\) is the indicator function of the closed nonempty convex set \(\mathcal{C}\). 
This means that (20) becomes \[z^{[k+1]}=\Pi_{\mathcal{C}}(X^{[k+1]}+\lambda^{[k]}/\rho), \tag{22}\] where \(\Pi_{\mathcal{C}}\) is the projection (in the Euclidean norm) onto \(\mathcal{C}\). It is important to note here that the elements of \(z\) (i.e., \(z_{1},z_{2},\ldots,z_{n}\)) should belong into the set \(\mathcal{C}\) in finite time. This is due to the definition of \(g(z)\) in (13). Specifically, if the elements of \(z\) do not belong in \(\mathcal{C}\) then \(g(z)=\infty\) (thus (20) cannot be executed). Therefore, we need to adjust the values of the elements of \(z\) so that they belong in the set \(\mathcal{C}\) in finite time (i.e., we need to set the elements of \(z\) such that \(\|z_{i}-z_{j}\|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V}\)). Note that if \(z_{i}-z_{j}=0,\forall v_{i},v_{j}\in\mathcal{V}\), then every node \(v_{i}\in\mathcal{V}\) has reached consensus. Specifically, we can have that in finite time the state \(z_{i}\) becomes \[z_{i}=\frac{1}{n}\sum_{l=1}^{n}z_{l}^{[0]},\forall v_{i}\in\mathcal{V}, \tag{23}\] where \(z_{l}^{[0]}=x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho\). Furthermore, \(\|z_{i}-z_{j}\|\leq\epsilon,\forall v_{i},v_{j}\in\mathcal{V}\) means that \[z_{i}\in[\frac{1}{n}\sum_{l=1}^{n}z_{l}^{[0]}-\frac{\epsilon}{2},\frac{1}{n} \sum_{l=1}^{n}z_{l}^{[0]}+\frac{\epsilon}{2}],\forall v_{i}\in\mathcal{V}, \tag{24}\] where \(z_{l}^{[0]}=x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho\). This means that for every node \(v_{i}\), \(z_{i}\) enters a circle with its center at \(\frac{1}{n}\sum_{l=1}^{n}(x_{l}^{[k+1]}+\lambda_{l}^{[k]}/\rho)\) and its radius as \(\epsilon/2\). Finally, from (14), we have that each node in the network needs to communicate with its neighbors in an efficient manner. For this reason, we aim to allow each node \(v_{i}\) coordinate with its neighboring nodes by exchanging quantized values in order to fulfil (24). ### _Distributed Optimization Algorithm_ We now present our distributed optimization algorithm. The algorithm is detailed below as Algorithm 1 and allows each node in the network to solve the problem presented in (14). The operation of the proposed algorithm is based on two parts. During these parts, each node \(v_{i}\) (i) calculates \(x_{i}^{[k+1]}\), \(z_{i}^{[k+1]}\), \(\lambda_{i}^{[k+1]}\) according to (19)-(21) (see Algorithm 1), and (ii) coordinates with other nodes in a communication efficient manner in order to calculate \(z_{i}^{[k+1]}\) that belongs in \(\mathcal{C}\) in (11) (see Algorithm 2). Note that Algorithm 2 is a finite time coordination algorithm with quantized communication and is executed as a step of Algorithm 1. Note that during Algorithm 1, nodes operate in an asynchronous fashion. Synchronous operation requires synchronization among nodes or the existence of a global clock so that all nodes to agree on their update time. In our setting, asynchronous operation arises when each node (i) starts calculating \(x_{i}^{[k+1]}\), \(z_{i}^{[k+1]}\), \(\lambda_{i}^{[k+1]}\) according to (19)-(21) in Algorithm 1, and (ii) calculates \(z_{i}^{[k+1]}\) that belongs in \(\mathcal{C}\) in (11) in Algorithm 2. This can be achieved by making the internal clocks of all nodes have similar pacing, which will allow them to execute the optimization step at roughly the same time [21]. Furthermore, making the internal clocks of all nodes have similar pacing does not mean that we have to synchronize the clocks of the nodes (or their time-zones). 
Note that this is a common procedure in most modern computers as the clock pacing specification is defined within the Advanced Configuration and Power Interface (ACPI) specification [22]. We now make the following assumption which is important for the operation of our algorithm. **Assumption 4**: _The diameter \(D\) (or an upper bound) is known to every node \(v_{i}\) in the network._ Assumption 4 is necessary so that each node \(v_{i}\) is able to determine whether calculation of \(z_{i}\) that belongs in \(\mathcal{C}\) in (11) has been achieved in a distributed manner. We now present the details of Algorithm 1. **Input:** Strongly connected \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), parameter \(\rho\), diameter \(D\), error tolerance \(\epsilon\in\mathbb{Q}\), upper bound on processing delays \(\mathcal{B}\). Assumptions 1, 2, 3, 4 hold. \(k_{\text{max}}\) (ADMM maximum number of iterations). **Initialization:** Each node \(v_{i}\in\mathcal{V}\) sets randomly \(x^{[0]},z^{[0]},\lambda^{[0]}\), and sets \(\Delta=\epsilon/3\). **Iteration:** For \(k=0,1,2,\ldots,k_{\text{max}}\), each node \(v_{i}\in\mathcal{V}\) does the following: 1. Calculate \(x_{i}^{[k+1]}\) via (19); 2. Calculate \(z_{i}^{[k+1]}\) = Algorithm \(2(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho,D,\Delta,\mathcal{B})\); 3. Calculate \(\lambda_{i}^{[k+1]}\) via (21). **Output:** Each node \(v_{i}\in\mathcal{V}\) calculates \(x_{i}^{*}\) which solves problem (14) in Section III. **Input:**\(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho,D,\Delta,\mathcal{B}\). **Initialization:** Each node \(v_{i}\in\mathcal{V}\) does the following: 1. Assigns probability \(b_{li}\) to each out-neigbor \(v_{l}\in\mathcal{N}_{i}^{+}\cup\{v_{i}\}\), as follows \[b_{li}=\left\{\begin{array}{ll}\frac{1}{1+\mathcal{D}_{i}^{i}},&\text{if $l=i$ or $v_{l}\in\mathcal{N}_{i}^{+}$},\\ 0,&\text{if $l\neq i$ and $v_{l}\notin\mathcal{N}_{i}^{+}$};\end{array}\right.\] 2. flag\({}_{i}=0\), \(\xi_{i}=2\), \(y_{i}=2\)\(q_{\Delta}^{a}(x_{i}^{[k+1]}+\lambda_{i}^{[k]}/\rho)\) (see (6)); **Iteration:** For \(\eta=1,2,\ldots\), each node \(v_{i}\in\mathcal{V}\), does: 1. **if**\(\eta\mod(D\mathcal{B})=1\)**then** sets \(M_{i}=\lceil y_{i}/\xi_{i}\rceil\), \(m_{i}=\lfloor y_{i}/\xi_{i}\rfloor\); 2. broadcasts \(M_{i}\), \(m_{i}\) to every \(v_{l}\in\mathcal{N}_{i}^{+}\); receives \(M_{j}\), \(m_{j}\) from every \(v_{j}\in\mathcal{N}_{i}^{-}\); sets \(M_{i}=\max_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}}M_{j}\), \(m_{i}=\min_{v_{j}\in\mathcal{N}_{i}^{-}\cup\{v_{i}\}}m_{j}\); 3. sets \(d_{i}=\xi_{i}\); 4. **while**\(d_{i}>1\)**do \(4.1)**\(c_{i}^{[n]}=\lfloor y_{i}\ /\ \xi_{i}\rfloor\); 5. sets \(y_{i}=y_{i}-c_{i}^{[n]}\), \(\xi_{i}=\xi_{i}-1\), and \(d_{i}=d_{i}-1\); 6. transmits \(c_{i}^{[\eta]}\) to randomly chosen out-neighbor \(v_{l}\in\mathcal{N}_{i}^{+}\cup\{v_{i}\}\) according to \(b_{li}\); 7. receives \(c_{j}^{[n]}\) from \(v_{j}\in\mathcal{N}_{i}^{-}\) and sets \[y_{i}=y_{i}+\sum_{j=1}^{n}\sum_{r=0}^{B}w_{\eta-r,ij}^{[r]}\,\] (25) \[\xi_{i}=\xi_{i}+\sum_{j=1}^{n}\sum_{r=0}^{B}w_{\eta-r,ij}^{[r]}\,\] (26) where \(w_{\eta-r,ij}^{[r]}=1\) when the processing time of node \(v_{i}\) is equal to \(r\) at time step \(\eta-r\), so that node \(v_{i}\) receives \(c_{i}^{[\eta]}\), \(1\) from \(v_{j}\) at time step \(\eta\) (otherwise \(w_{\eta-r,ij}^{[r]}=0\) and \(v_{i}\) receives no message at time step \(\eta\) from \(v_{j}\)); 8. **if**\(\eta\mod(D\mathcal{B})=0\)**and**\(M_{i}-m_{i}\leq 1\)**then** sets \(z_{i}^{[k+1]}=m_{i}\Delta\) and stops operation. 
**Output:**\(z_{i}^{[k+1]}\). **Algorithm 2** QuAsAvCo - Quantized Asynchronous Average Consensus **Remark 1**: _It is important to note here that during the initialization of Algorithm 1, the error tolerance \(\epsilon\) is chosen to be a rational number (i.e., \(\epsilon\in\mathbb{Q}\)). This is not a limitation for the ADMM optimization process in Algorithm 1. The real-valued \(\epsilon\) can be chosen such that it can be represented as a rational value. Furthermore, this choice facilitates the operation of Algorithm 2. Specifically, a rational value for \(\epsilon\) facilitates the choice of a suitable quantization level \(\Delta\) (since \(\Delta=\epsilon/3\)). During the execution of Algorithm 2 nodes quantize their states, thus an error \(e_{q_{1}}\leq\Delta\) is imposed to every state. Then, Algorithm 2 converges to the quantized average thus, the final states of the nodes have an error \(e_{q_{2}}\leq\Delta\). This means that after executing Algorithm 2, we have \(|z_{i}-z_{j}|\leq 2\Delta<\epsilon\), and thus we have \(z_{i}^{[k+1]}\in\mathcal{C}\) in (11), \(\forall v_{i}\in\mathcal{V}\). 
For this reason, any choice of \(\Delta\) for which \(\Delta<\epsilon/2\) is suitable for the operation of our algorithm for a given error tolerance \(\epsilon\)._ **Remark 2**: _In practical applications, nodes do not know the value of \(\mathcal{B}\). However, \(B\) time-steps (which is its upper bound) is guaranteed to be executed within \(T\) seconds (see Assumption 1). As noted previously, consistent pacing of each node's clock ensures that the check for convergence at each node will happen at roughly the same time (see [21]). Therefore, at every \(DT\) seconds, each node checks whether Algorithm 2 can be terminated._ ### _Convergence of Algorithm 1_ We now analyze the convergence time of Algorithm 1 via the following theorem. Our theorem is inspired from [8] but is adjusted to the quantized nature of Algorithm 1. However, due to space limitations we omit the proof (we will include it at an extended version of our paper). **Theorem 1**: _Let us consider a strongly connected digraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Each node \(v_{i}\in\mathcal{V}\), is endowed with a scalar local cost function \(f_{i}(x):\mathds{R}^{p}\mapsto\mathds{R}\), and Assumptions 1-4 hold. Furthermore, every node \(v_{i}\) has knowledge of a parameter \(\rho\), the network diameter \(D\), an error tolerance \(\epsilon\in\mathds{Q}\), and an upper bound on processing delays \(\mathcal{B}\). During the operation of Algorithm 1, let us consider the variables \(\{X^{[k]},z^{[k]},\lambda^{[k]}\}\), where \(X^{[k]}=[x_{1}^{[k]^{\intercal}},x_{2}^{[k]^{\intercal}},\ldots,x_{n}^{[k]^{ \intercal}}]^{\intercal}\) and \(\lambda^{[k]}=[\lambda_{1}^{[k]^{\intercal}},\lambda_{2}^{[k]^{\intercal}}, \ldots,\lambda_{n}^{[k]^{\intercal}}]^{\intercal}\); then, define \(\bar{X}^{[k]}=\frac{1}{k}\sum_{s=0}^{k-1}X^{[s+1]},\bar{z}^{[k]}=\frac{1}{k} \sum_{s=0}^{k-1}z^{[s+1]}\). During the operation of Algorithm 1 we have \[0 \leq L(\bar{X}^{[k]},\bar{z}^{[k]},\lambda^{*})-L(X^{*},z^{*}, \lambda^{*}) \tag{27}\] \[\leq\frac{1}{k}\left(\frac{1}{2\rho}\|\lambda^{*}-\lambda^{[0]}\| ^{2}+\frac{\rho}{2}\|X^{*}-z^{[0]}\|^{2}\right)+\mathcal{O}(2\Delta\sqrt{n}),\] for every time step \(k\), where \(\Delta\) is the quantization level for calculating \(z_{i}\in\mathcal{C}\) in (11) during the operation of Algorithm 2. It is important to note that in Theorem 1 we focus on the convergence of the optimization steps, i.e., the steps executed during the operation of Algorithm 1. Due to the operation of Algorithm 2 we have that in (27) an additional term \(\mathcal{O}(2\Delta\sqrt{n})\) appears. This term (as will be seen later in Section VI) affects the precision according to which the optimal solution is calculated. However, we can adjust Algorithm 2 to operate with a dynamically refined quantization level \(\Delta\). For example, we can initially set \(\Delta=\epsilon/3\) (where \(\epsilon\in\mathds{Q}\)). Then, execute Algorithm 2 during every time step \(k\) with quantization level \(\Delta^{\prime}=\frac{\Delta}{10(k+1)}\). Since we have \(\frac{\Delta}{10(k+1)}<\frac{\Delta}{10(k)}\) for every \(k\), then, Algorithm 2 will lead to a reduction of the error on the optimal solution that depends on the quantization level (i.e., the term \(\mathcal{O}(2\Delta\sqrt{n})\) in (27) will be reduced after every execution of Algorithm 2). However, please note that this analysis is outside of the scope of this paper and will be considered in an extended version. 
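To make the update order of Algorithm 1 concrete, the following is a minimal, centralized Python simulation of the iteration for scalar quadratic local costs \(f_i(x)=\frac{1}{2}a_i x^2 + b_i x\). It is our own sketch under simplifying assumptions: the x-update uses the closed form of (19) for quadratic costs, and the finite-time quantized averaging of Algorithm 2 is replaced by a direct quantized mean, so the snippet mimics the information flow of (19)-(21) but not the asynchronous message exchange.

```python
import numpy as np

def quantize(xi, delta):
    """Asymmetric quantizer of Eq. (6), mapped back to value units: floor(xi/Delta)*Delta."""
    return np.floor(xi / delta) * delta

def admm_round(x, z, lam, a, b, rho, delta):
    """One synchronized stand-in for an Algorithm 1 iteration with
    scalar quadratic local costs f_i(x) = 0.5*a_i*x^2 + b_i*x."""
    # x-update: closed form of Eq. (19) for quadratic f_i.
    x = (rho * z - b - lam) / (a + rho)
    # z-update: a quantized mean stands in for Algorithm 2's finite-time
    # quantized averaging of x_i + lambda_i / rho (cf. Eq. (23)).
    z_avg = np.mean(quantize(x + lam / rho, delta))
    z = np.full_like(z, z_avg)
    # Dual update, Eq. (21).
    lam = lam + rho * (x - z)
    return x, z, lam

n, rho, delta = 100, 1.0, 1e-3
rng = np.random.default_rng(1)
a, b = rng.uniform(1, 2, n), rng.uniform(-1, 1, n)
x, z, lam = np.zeros(n), np.zeros(n), np.zeros(n)
for _ in range(50):
    x, z, lam = admm_round(x, z, lam, a, b, rho, delta)
x_star = -b.sum() / a.sum()  # minimizer of the aggregate cost sum_i f_i(x)
print(f"max |x_i - x*| after 50 rounds: {np.max(np.abs(x - x_star)):.4f}")
```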
## VI Simulation Results In this section, we present simulation results in order to demonstrate the operation of Algorithm 1 and its advantages. Furthermore, we compare Algorithm 1 against existing algorithms and emphasize on the introduced improvements. In Fig. 2, we focus on a network comprised of \(100\) nodes modelled as a directed graph. Each node \(v_{i}\) is endowed with a scalar local cost function \(f_{i}(x)=0.5x^{\top}P_{i}x+q_{i}^{\top}x+r_{i}\). This cost function is quadratic and convex. Furthermore, for \(f_{i}(x)\) we have that (i) \(P_{i}\) was initialized as the square of a randomly generated symmetric matrix \(A_{i}\) (ensuring it is positive definite), (ii) \(q_{i}\) is initialized as the negation of the product of the transpose of \(A_{i}\) and a randomly generated vector \(b_{i}\) (i.e., it is a linear term), (iii) and \(r_{i}\) is initialized as half of the squared norm of the randomly generated vector \(b_{i}\) (i.e., it is a scalar constant). We execute Algorithm 1 and we show how the nodes' states converge to the optimal solution for \(\epsilon=0.03,0.003,0.0003\), and \(\Delta=0.01,0.001,0.0001\), respectively. We plot the error \(e^{[k]}\) defined as \[e^{[k]}=\frac{\sqrt{\sum_{j=1}^{n}(x_{j}^{[k]}-x^{*})^{2}}}{\sqrt{\sum_{j=1}^{ n}(x_{j}^{[0]}-x^{*})^{2}}}, \tag{28}\] where \(x^{*}\) is the optimal solution of the optimization problem in (14). Note that from Remark 1, we have that any \(\Delta<\epsilon/2\) is suitable for the operation of Algorithm 1 for a given \(\epsilon\). In Fig. 2, we execute Algorithm 1 for \(\Delta=\epsilon/3\). We can see that Algorithm 1 converges to the optimal solution for the three different values of \(\epsilon\). However, Algorithm 1 is able to approximate the optimal solution with precision that depends on the quantization level (i.e., during Algorithm 1, nodes are able to calculate a neighborhood of the optimal solution). Reducing the quantization level \(\Delta\) allows calculation of the optimal solution with higher precision. Furthermore, we can see that after calculating the optimal solution our algorithm exhibits an oscillatory behavior due to quantized communication. This means quantized communication introduces nonlinearities to the consensus calculation which in turn affect the values of other parameters such as \(x\) and \(z\), and \(\lambda\) (see iteration steps \(1\), \(2\), \(3\)), and for this reason we have this oscillatory behavior. Finally, we can see that Algorithm 1 exhibits comparable performance with [24] (which is plotted until optimization step \(14\)) until the neighborhood of the optimal solution is calculated. However, in [24] nodes are able to exchange real-valued messages. Specifically, in [24] nodes are required to form the Hankel matrix and perform additional computations when the matrix loses rank. This requires nodes to exchange the exact values of their states. Therefore, the main advantage of Algorithm 1 compared to [24], is that it exhibits comparable performance while guaranteeing efficient (quantized) communication among nodes. ## VII Conclusions and Future Directions In this paper, we presented an asynchronous distributed optimization algorithm which combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite time quantized averaging algorithm. 
We showed that our proposed algorithm is able to calculate the optimal solution while operating over directed communication networks in an asynchronous fashion, and guaranteeing efficient (quantized) communication between nodes. We analyzed the operation of our algorithm and showed that it converges to a neighborhood of the optimal solution (that depends on the quantization level) at a rate of \(O(1/k)\). Finally, we demonstrated the operation of our algorithm and compared it against other algorithms from the literature. In the future, we aim to enhance the operation of our algorithm to avoid the oscillatory behavior after calculating the optimal solution. Furthermore, we plan to develop strategies that allow calculation of the _exact_ optimal solution while guaranteeing efficient communication among nodes. Finally, we will focus on designing efficient communication strategies for non-convex distributed optimization problems.
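As a companion to the simulation setup of Section VI, the following minimal NumPy sketch reproduces the construction of the local quadratic costs and the error metric of (28); the dimension \(p\), the random seed, and the centralized computation of \(x^{*}\) are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 5  # number of nodes, dimension of x (p is our choice)

# Local quadratic costs f_i(x) = 0.5 x^T P_i x + q_i^T x + r_i, built as in Section VI:
# P_i = A_i^2 for a random symmetric A_i, q_i = -A_i^T b_i, r_i = 0.5 ||b_i||^2,
# so that f_i(x) = 0.5 ||A_i x - b_i||^2.
P, q = [], []
for _ in range(n):
    M = rng.standard_normal((p, p))
    A = 0.5 * (M + M.T)          # random symmetric matrix
    b = rng.standard_normal(p)
    P.append(A @ A)              # square of A (positive semidefinite)
    q.append(-A.T @ b)

# Centralized optimum of sum_i f_i, used only to evaluate the error e^[k] of (28).
x_star = np.linalg.solve(sum(P), -sum(q))

def error(x_nodes, x0_nodes):
    """Normalized error e^[k] of Eq. (28) over the nodes' current and initial states."""
    num = np.sqrt(sum(np.sum((x - x_star) ** 2) for x in x_nodes))
    den = np.sqrt(sum(np.sum((x - x_star) ** 2) for x in x0_nodes))
    return num / den
```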
In this paper, we focus on the asynchronous distributed optimization problem, in which each node is endowed with a convex local cost function and can communicate with its neighboring nodes. Furthermore, the communication channels between nodes have limited bandwidth, and each node suffers from processing delays. We proposed a distributed algorithm that combines the Alternating Direction Method of Multipliers (ADMM) strategy with a finite-time quantized averaging algorithm. In the proposed algorithm, nodes exchange quantized-valued messages and operate in an asynchronous fashion. Specifically, at every iteration of the algorithm, each node (i) solves a local convex optimization problem (for one of its primal variables), and (ii) uses a finite-time quantized averaging algorithm to obtain the value of the second primal variable.
2310.20550
CapsFusion: Rethinking Image-Text Data at Scale
Large multimodal models demonstrate remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our experiments reveal significant Scalability Deficiency and World Knowledge Loss issues in models trained with synthetic captions, which have been largely obscured by their initial benchmark success. Upon closer examination, we identify the root cause as the overly-simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, an advanced framework that leverages large language models to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance (e.g., 18.8 and 18.3 improvements in CIDEr score on COCO and NoCaps), sample efficiency (requiring 11-16 times less computation than baselines), world knowledge depth, and scalability. These effectiveness, efficiency and scalability advantages position CapsFusion as a promising candidate for future scaling of LMM training.
Qiying Yu, Quan Sun, Xiaosong Zhang, Yufeng Cui, Fan Zhang, Yue Cao, Xinlong Wang, Jingjing Liu
2023-10-31T15:31:39
http://arxiv.org/abs/2310.20550v3
# CapsFusion: Rethinking Image-Text Data at Scale ###### Abstract Large multimodal models demonstrate remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our experiments reveal significant Scalability Deficiency and World Knowledge Loss issues in models trained with synthetic captions, which have been largely obscured by their initial benchmark success. Upon closer examination, we identify the root cause as the overly-simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, an advanced framework that leverages large language models to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance (e.g., 18.8 and 18.3 improvements in CIDEr score on COCO and NoCaps), sample efficiency (requiring 11-16 times less computation than baselines), world knowledge depth, and scalability. These effectiveness, efficiency and scalability advantages position CapsFusion as a promising candidate for future scaling of LMM training. ## 1 Introduction Large Multimodal Models [3, 36, 50] (LMMs), which as versatile multimodal generalists bridge powerful pretrained large language models [51, 52] and vision encoders [43, 49], have garnered significant success in zero-shot multimodal tasks such as image captioning and image/video question answering. Although image-text pairs harvested directly from the web [44] contribute instrumentally to the success of current LMMs, such web-scale data tend to be noisy and sub-optimal for model training [27, 34]. Thus, clever strategies have been devised to harness synthetic captions generated by image captioning model [34], which has augmented model performance notably [5, 14, 15, 22, 35, 50] by adopting large-scale synthetic caption datasets such as LAION-COCO [1] and BLIP-LAION [34]. Although achieving promising performance on classic benchmarks such as COCO Caption [12], further evaluations on more recent benchmarks such as SEED-Bench [32] Figure 1: Training process of models trained on different captions. Figure 2: (a) Comparison of raw and synthetic captions for training. (b) Data processing of Conceptual Captions [46], where real-world information is substituted with generic concepts. reveal that training LMMs with synthetic captions alone is inadequate. We conduct a closer examination of the large-scale training process of LMMs and observe that model training on synthetic captions rapidly reaches a saturation point, beyond which the model performance may even degrade (as illustrated by the green lines in Fig. 1). While this severe _Scalability Deficiency_ may not be readily apparent on traditional benchmarks such as COCO caption (Fig. 1-a), it becomes notably pronounced (Fig. 1-b) on the new benchmark SEED-Bench, which supports a much more comprehensive assessment of LMMs than COCO. We conduct further analysis on the generated outputs from different models trained with captions of varying quality. Fig. 
3 illustrates system responses trained on Raw captions (M1), Synthetic captions (M2), and our captions (M3). These examples demonstrate that the outputs from M2, in particular, suffer from severe _World Knowledge Loss_, constituting only high-level concepts while missing all the details about well-known people, locations, events, etc. The generated sentences by M3 (trained on our captions) are more natural and semantically richer than those from M1 and M2. Through examining the differences between raw caption data and synthetic data used in training, we observe that the simplistic syntactic and semantic structures in synthetic captions (Fig. 2-a) may have potentially attributed to the _Scalability Deficiency_ and _World Knowledge Loss_ issues, which so far have been obscured by their initial benchmark success. The root cause is that currently used captioning models (_e.g._ BLIP [34] used in LAION-COCO [1]) for generating synthetic captions heavily rely on academic datasets such as COCO and Conceptual Captions [46] for training. These datasets replace specific details (_e.g._ people's names, locations, landmarks) with more generic _conceptual_ placeholders (_e.g._ 'person', 'city') in the data collection process (Fig. 2-b). Although this eases the training of captioning models, it inevitably results in the loss of a substantial reservoir of valuable real-world information in the trained model, which learns an overly-simplified language structure with basic semantics. Consequently, LMMs trained on the synthetically simplified datasets generated by these captioning models suffer from a deficiency in language complexity and knowledge depth. Therefore, to train a scalable LMM with abundant real-world knowledge, it is crucial to develop an effective strategy to better synthesize caption data while distilling real-world knowledge from raw web-based image-text pairs. There have been some recent attempts to leverage both raw and synthetic captions straightforwardly, by simply mixing them with a fixed hand-tuned ratio [16, 19, 39]. In this work, we propose CapsFusion, a more advanced pipeline that leverages large language models (LLMs) to enhance the quality of large-scale image-text data. CapsFusion first uses a captioning model [34] (following [1, 34]) to generate synthetic captions for images. Then, it utilizes Chat-GPT [45] that follows instructions to organically integrate raw and synthetic captions, by extracting real-world knowledge from the structure-flawed raw captions while merging with structured but syntactically simplified synthetic captions. Our evaluations show that ChatGPT excels in this task, but is non-scalable due to restrictive access of its API. To overcome this limitation, we use the caption output generated by ChatGPT as training data to finetune an open-sourced LLaMA [52]. Evaluation of this finetuned, task-specific LLM demonstrates that it performs _on par with_ ChatGPT and consistently produces high-quality consolidated captions, while easy to scale up. The trained model is then employed for large-scale caption fusion (examples are presented in Fig. 4, which clearly demonstrate the advantages of CapsFusion). Extensive experiments show that CapsFusion captions Figure 3: Outputs of models trained with different caption datasets. Models trained on raw and CapsFusion captions (M1 and 3) possess strong world knowledge (in blue ), while the model trained on synthetic captions (M2) can only generate generic concepts (in red). 
demonstrate remarkable all-around superiority, as a better substitute for both synthetic and raw captions in the training of LMMs. In terms of _model performance_, CapsFusion captions clearly outperform synthetic captions by substantial margins, with an improvement of 18.8, 18.3, 19.7, and 15.6 in CIDEr score on COCO, NoCaps, TextCaps, and Flickr30K datasets, respectively. This compelling advantage extends to _sample efficiency_ as well. Refined captions from CapsFusion require 11-16 times less computation to achieve high performance similar to synthetic captions. Furthermore, our investigation unveils that CapsFusion captions surpass raw captions by a considerable margin when evaluated on _world knowledge_. Also importantly, CapsFusion captions demonstrate greater _scalability_, meaning that model performance continues to improve with an increased volume of training samples. This scalability advantage, critical for the training of large-scale models, positions CapsFusion as a promising candidate for further scaling efforts in LMM training. ## 2 Related Work Image-text Data EnhancementLaCLIP [16] utilizes LLM to rewrite raw captions, whose performance can be limited due to severe hallucination, because of limited visual information and low-quality raw captions. [19, 39] investigate how to filter and then mix raw and synthetic captions to induce a better CLIP model [43]. Our concurrent work VeCLIP [31] proposes to use LLM to combine information from raw and synthetic captions. The difference is that they directly use an existing LLM for inference, while we finetune a state-of-the-art open-source LLM with training data generated by ChatGPT. In addition, they have no explicit instructions such as extracting world knowledge present in raw captions and referring sentence structure of synthetic captions, which we use to help LLMs make informed decisions during the caption fusion process. All these studies focus on training CLIP models. We instead investigate LMMs and derive insights from a new perspective, such as mixing raw and synthetic captions [16, 31, 39] induces no improvement than separate captions. Large Multimodal ModelsWith the success of large language models [8, 52] (LLMs), recent studies explore building large multimodal models [9, 10, 11, 13, 20, 23, 26, 33, 40, 55, 59, 60, 61, 64, 65, 66, 67, 68] (LMMs) on LLMs with pretrained vision encoders [43, 49, 54]. Most existing works commonly use the prediction of the next text token as the ob Figure 4: Examples of 1 raw captions (from LAION-2B), 2 synthetic captions (from LAION-COCO, generated by BLIP), and their corresponding 3 CapsFusion captions. 
Knowledge from raw captions (highlighted) is retained in the CapsFusion captions. jective [14, 25, 35, 36]. Another type of LMM learns to predict both image and text tokens [15, 21, 50, 62], endowing models with the more versatile ability to process both text and image generation tasks, while maintaining image-to-text performance comparable to LMMs trained with only text token supervision. ## 3 CapsFusion Large Multimodal Models [3, 28, 35, 48, 56, 65, 70] serve as powerful generalists for diverse multimodal tasks. A typical LMM generalist unifies only image-to-text tasks (_e.g_. image captioning and visual question answering). Recent studies such as Emu [50] further enhance the capabilities of the multimodal generalist by enabling it to perform both image-to-text and text-to-image tasks in a zero-shot manner [22, 29, 30, 57, 69]. Learning Objective of LMMThe LMM generalist ability originates from a GPT-style auto-regressive training objective [42], wherein the model learns to predict the next token in a sequence. As a result of this training paradigm, during inference the model exhibits a remarkable capability to generate appropriate completions for a wide range of tasks. Image-text pairs are the most commonly used multimodal pretraining data for learning vision-language alignment. Specifically, given a dataset \(\mathcal{D}\) of image-text pairs \((I,T)\), where \(I\) denotes the image and \(T\) denotes the caption as a sequence of text tokens \(T=\{t_{1},t_{2},\dots,t_{n}\}\), the typical training objective is to maximize the conditional likelihood of the text tokens \(T\) given \(I\) in an auto-regressive manner: \[\max_{\theta}\frac{1}{|\mathcal{D}|}\sum_{(I,T)\in\mathcal{D}}\sum_{i=1}^{n}\log P(t_{i}|t_{1},\dots,t_{i-1},I;\theta) \tag{1}\] Under this training objective, the presence of noisy captions can lead the model to generate extraneous words or visual signals. Conversely, if the captions are overly simplistic in nature, the model may learn a simplified output style, resulting in a loss of language/image complexity. Therefore, high-quality image-text pairs are urgently needed to power new-generation LMMs. Caption GenerationGiven raw image-text pairs, CapsFusion first generates synthetic captions using image captioning models following [1, 34]. In our preliminary experimental analysis (Figs. 1 to 3), we find that raw captions contain a wealth of real-world knowledge but are noisy, while synthetic captions have clean structures but lack in-depth real-world knowledge and exhibit severe scalability issues.
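As an illustration of the objective in Eq. (1) (not the authors' released code), the per-batch loss reduces to a token-level cross-entropy over the caption tokens, with logits produced by a model conditioned on the image and the preceding tokens; the tensor shapes and names below are our own assumptions.

```python
import torch
import torch.nn.functional as F

def caption_nll(logits: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood corresponding to Eq. (1) for one batch.

    logits:      (batch, seq_len, vocab) next-token predictions from an LMM
                 conditioned on the image and the previous caption tokens.
    text_tokens: (batch, seq_len) ground-truth caption token ids.
    Minimizing this quantity maximizes sum_i log P(t_i | t_<i, I; theta).
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (batch * seq_len, vocab)
        text_tokens.reshape(-1),              # (batch * seq_len,)
    )
```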
Building upon these observations, our objective is to develop a scalable framework to organically integrate information from both raw and synthetic captions, by training a model that learns to absorb the essence of both to create a comprehensive refined image-text dataset. Caption Fusion via ChatGPTChatGPT exhibits impressive ability in following human instructions to accomplish tasks. In CapsFusion, we use ChatGPT to fuse raw and synthetic captions given a prompt. Each prompt comprises three components: the task instruction, a raw caption, and a synthetic caption. The task instruction, serving as the Figure 5: Illustration of the scalable CapsFusion pipeline for generating high-quality large-scale image-text data. guidance, is structured in three key elements: the task description, caption property, and the desired output specifications. Specifically, we first include a task description that conveys the following objective to ChatGPT: _Please merge the information from two provided sentences._ This main task description serves as the primary reference point for ChatGPT. Furthermore, we provide the distinct properties of the two captions involved, with the following contextual guidance: _Raw captions offer detailed real-world information, yet it suffers from flaws in sentence structure and grammar. Synthetic captions exhibit impeccable sentence structure but often lack in-depth real-world details and may contain false information._ This nuanced description helps ChatGPT make informed decisions during the fusion process. Finally, we outline our expectations for the output captions with the following directive: _Ensure a well-structured sentence while retaining the detailed real-world information provided in the raw caption._ This guideline succinctly encapsulates the desired characteristics of the generated captions. In the course of our experimentation, we observe that in a portion of samples, ChatGPT resorts to a straightforward concatenation of the raw and synthetic captions for fusion. To address this, we explicitly instruct ChatGPT to _avoid simply concatenating two sentences_, a directive we have found highly effective in mitigating this issue. The full instruction template is presented in Sec. 7. During human evaluation, ChatGPT is shown to be exceptionally effective at this caption fusion task. Examples are provided in the fourth row of Fig. 6. We acquired 1 million fused captions using the gpt-3.5-turbo API (running for 3 days). Refinement Model with Fused CaptionAlthough ChatGPT is effective, time and computational costs are prohibitive. For scaling, we opt to employ LLaMA-2 [52], a state-of-the-art open-source LLM. We finetune the 13B version of LLaMA-2 specifically for the task of caption fusion, using triplets obtained from ChatGPT. These triplets consist of raw and synthetic captions as inputs, with CapsFusion captions as the target outputs. Training hyperparameters can be found in Sec. 8. The finetuned model, referred to as CapsFus-LLaMA, is rigorously evaluated through human evaluation on 100 validation cases. The evaluation results are presented in Tab. 2, revealing that the performance Figure 6: Comparison among CapsFusion-LLaMA, ChatGPT, and LaCLIP. CapsFusion-LLaMA performs on par with ChatGPT on the caption fusion task, while LaCLIP suffers severe hallucination because only raw text is considered (hallucinations are highlighted in red in image 1 and 2). 
LaCLIP also fails when the raw caption is too noisy, while CapsFusion-LLaMA and ChatGPT can extract useful information from noise (image 3). of the finetuned CapsFus-LLaMA performs on par with ChatGPT, with 80 out of 100 samples performing equally or better. LaCLIP [16] also leverages LLM for enhancing image-text captions, but simply asks LLM to rewrite raw captions. Qualitative comparisons among LaCLIP, CapsFusion, and ChatGPT are illustrated in Figure 6. Notably, LaCLIP tends to hallucinate information not present in the associated image, due to the absence of detailed visual information represented in the raw captions. On the other hand, CapsFus-LLaMA exhibits outputs similar to ChatGPT and delivers exceptional performance. Large-scale Caption FusionThe trained CapsFus-LLaMA, being as effective as ChatGPT, now possesses the ability to organically fuse and harness raw and synthetic captions in a manner that is both scalable and highly effective. We randomly select a subset containing 127,897,754 image-text pairs from LAION-COCO [1], which contains both raw captions from the web and synthetic captions generated by BLIP [34]. Subsequently, we apply CapsFus-LLaMA to organically integrate the captions of these image-text pairs, employing the same prompt as ChatGPT. This process costs about 12 days using 128 A100-40G GPUs. After filtering with heuristic rules, we retain a total of 120,724,312 image-text pairs, which we term as the CapsFus-120M dataset. CapsFus-120M DatasetTab. 3 provides a comparison of CapsFus-120M with existing image-text datasets. We compute the number of unique trigrams and the average length of these captions (word instead of token as unit) in each dataset. Notably, CapsFus-120M exhibits the highest count of unique trigrams and the longest average sentence length, underscoring superb diversity within its captions. In contrast, synthetic captions (LAION-COCO) exhibit the shortest average sentence length and a considerably lower number of trigrams, signifying a notable lack of language complexity. ## 4 Experiments We present a comprehensive analysis of different caption datasets. Extensive experiments show that CapsFusion exhibits all-around superiority over existing image-text pair datasets, in terms of effectiveness, efficiency, world knowledge depth, and scalability. \begin{table} \begin{tabular}{l l c c c c c c c c} \hline \hline \multirow{2}{*}{Scale} & \multirow{2}{*}{Captions} & \multicolumn{2}{c}{COCO} & \multicolumn{2}{c}{NoCaps} & \multicolumn{2}{c}{TextCaps} & \multicolumn{2}{c}{Flickr30K} \\ \cline{3-10} & & SPICE & CIDEr & SPICE & CIDEr & SPICE & CIDEr & SPICE & CIDEr \\ \hline \multirow{3}{*}{10M} & Raw [44] & 15.5 & 75.1 & 9.0 & 64.0 & 10.5 & 46.4 & 13.6 & 54.4 \\ & Synthetic [1] & 19.8 & 102.5 & 11.7 & 84.2 & 12.7 & 42.3 & 15.0 & 63.9 \\ & Language Rewrites [16] & 14.6 & 71.6 & 8.6 & 59.0 & 9.3 & 38.3 & 11.6 & 49.0 \\ & Mixing (Raw \& Syn). [16, 39] & 17.9 & 90.5 & 10.6 & 76.7 & 12.4 & 51.7 & 15.1 & 64.0 \\ & Mixing (Raw \& LR) [16] & 15.0 & 72.6 & 9.0 & 61.1 & 10.3 & 44.6 & 12.2 & 51.7 \\ & CapsFusion & **20.7**(+-0.9) & **107.7**(-0.52) & **12.6**(+0.49) & **92.4**(-0.2) & **13.9**(+-1.2) & **56.3**(-0.46) & **15.9**(-0.8) & **68.4**(+-0.4) \\ \hline \multirow{3}{*}{50M} & Raw [44] & 16.4 & 81.0 & 9.7 & 68.4 & 11.7 & 55.2 & 14.3 & 60.3 \\ & Synthetic [1] & 19.2 & 100.9 & 11.5 & 82.5 & 13.2 & 46.7 & 14.3 & 60.2 \\ & Mixing (Raw \& Syn). 
[16, 39] & 18.5 & 93.3 & 10.9 & 79.7 & 12.7 & 55.5 & 15.1 & 64.6 \\ & CapsFusion & **21.3**(+-1.1) & **112.4**(+11.5) & **13.6**(+-2.1) & **99.2**(+15.7) & **14.9**(+1.2) & **62.7**(+1.2) & **16.9**(+1.5) & **74.5**(+-0.9) \\ \hline \multirow{3}{*}{100M} & Raw [44] & 17.1 & 85.5 & 10.1 & 72.8 & 12.3 & 59.6 & 14.6 & 62.2 \\ & Synthetic [1] & 18.5 & 96.9 & 11.0 & 81.6 & 13.1 & 46.5 & 13.7 & 57.4 \\ \cline{1-1} & Mixing (Raw \& Syn). [16, 39] & 18.0 & 95.0 & 10.5 & 77.9 & 12.3 & 55.1 & 15.0 & 66.5 \\ \cline{1-1} & CapsFusion & **21.7**(+-3.2) & **115.7**(+18.9) & **13.5**(+-2.5) & **99.9**(+18.3) & **15.2**(+-2.1) & **66.2**(+11.9) & **16.8**(+-1.8) & **73.0**(+-0.4) \\ \hline \hline \end{tabular} \end{table} Table 1: Zero-shot evaluation of models trained with different caption datasets on a broad range of image captioning benchmarks. \begin{table} \begin{tabular}{l c c} \hline \hline Datasets & \# Unique Trigrams & Avg. Length \\ \hline LAION-2B & 5.51 \(\times 10^{7}\) & 10.95 \\ LAION-COCO & 1.00 \(\times 10^{7}\) & 8.99 \\ La-CLIP & 5.46 \(\times 10^{7}\) & 14.63 \\ CapsFus-120M & 7.13 \(\times 10^{7}\) & 22.74 \\ \hline \hline \end{tabular} \end{table} Table 3: Statistics of different caption datasets (on a randomly selected 10 million subset of CapsFus-120M images). ### Setup For a fair comparison, we compare CapsFusion with other caption datasets under the same set of images from LAION-COCO [1], isolating caption quality as the only varying factor influencing model performance. Experiments are conducted across three scales: 10, 50 and 100 million image-text pairs. Model ArchitectureWe adopt the most prevalent LMM architecture, which consists of three components: an LLM, a vision encoder, and a vision-language bridging module. We use LLaMA-2-7B [52] and EVA-01-CLIP-g [17, 49] to initialize the LLM and vision encoder modules, respectively. For the bridging module, we follow Emu [50] to use a randomly initialized Causal Transformer to bridge the vision and language modalities. This module transforms bi-directional image representations from the vision encoder into a causal sequence that aligns better to the nature of LLMs, which excel at modeling causal sequences in an autoregressive fashion. The LLM and vision encoder are frozen during training to save computation cost following [35], and only the bridging module is tuned. Training ScheduleThe training schedule is set as the same for all compared captions. For each evaluation scale, we train the model for 1 epoch. This practice follows Datacomp [19], a benchmark for evaluating image-text pair datasets on CLIP training. The peak learning rate is 3e-4, with the initial 2,000 (100M) / 1,000 (50M) / 500 (10M) steps as warm-up, after which the learning rate decreases to 3e-5 with a cosine learning rate decay schedule. Batch size is set to 8192 for all scales. Detailed training hyperparameters can be found in Sec. 8. The 100M scale training costs 40 hours with 16 A800-80G GPUs. BaselinesWe establish two baselines using raw captions from LAION-2B [44] and synthetic captions from LAION-COCO [1]. Additionally, two state-of-the-art methods for improving image-text pairs in CLIP training are evaluated: language rewrites (LaCLIP [16]) and random mixing [16, 39]. For LaCLIP [16], we adopt their in-context strategy and employ LLaMA-2-7B to rewrite 10 million captions for comparison, which takes 30 hours with 8 A100-40G GPUs. For random mixing [16, 39], we set the mixing ratio of two types of captions as 1:1 [16] and do not tune this ratio as in [39]. 
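For reference, a minimal sketch of the learning-rate schedule described in the training setup (peak 3e-4, warm-up, then cosine decay to 3e-5) is given below; the function name and the linear shape of the warm-up are our assumptions, since the text only specifies the peak, the warm-up length, and the final value.

```python
import math

def lr_at_step(step: int, total_steps: int, warmup_steps: int,
               peak_lr: float = 3e-4, min_lr: float = 3e-5) -> float:
    """Warm-up followed by cosine decay, as in the training schedule.

    warmup_steps is 2,000 / 1,000 / 500 for the 100M / 50M / 10M scales.
    """
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps          # linear warm-up (assumed)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```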
EvaluationWe comprehensively assess the performance of LMMs across a wide range of evaluation benchmarks. These benchmarks encompass both traditional benchmarks and recently introduced assessments, including COCO [12], NoCaps [2], TextCaps [47], Flickr30k [41], and SEEDBench [32]. For image captioning tasks, we employ SPICE [4] and CIDEr [53] metrics. For the comprehensive SEED-Bench in the form of multiple-choice questions, we evaluate LMMs using accuracy. ### Model Performance The performances of models trained with different captions on COCO, NoCaps, TextCaps, and Flickr30K benchmarks are presented in Tab. 1. We observe that CapsFusion outperforms all baseline captions in all settings by a large margin, across all datasets evaluated. For example, on the 100M scale, CapsFusion surpasses the best baseline by a substantial margin, achieving 18.8 and 18.3 CIDEr score improvements on COCO and NoCaps, respectively. Rewriting Captions Fails at Image Captioning.On the 10M scale, our examination reveals that Language Rewrites captions [16], generated through the process of rewriting raw captions, fail to achieve decent performance. This can be attributed to the severe hallucination issue we observed in the rewrites captions (Fig. 6), which introduces extraneous text that is irrelevant to the content depicted in the accompanying images. The underlying cause of the hallucination phenomenon can be traced back to the input data, which consists solely of noisy raw captions, providing a suboptimal starting point for the rewriting process. Mixing Captions does not Bring Consistent Gains.Another notable observation is that mixing captions cannot Figure 7: Comparison of scalability and sample efficiency across different datasets. yield better performance. For instance, on the 10M-scale over COCO benchmark, mixing raw and LR captions (72.62 CIDEr and 15.01 SPICE scores) achieves a median performance between Raw (75.13 CIDEr and 15.48 SPICE) and LR (71.61 CIDEr, 14.6 SPICE) captions. This finding is contrarian to the observation in CLIP training [16, 31, 39], where mixing raw and generated captions has proven to be a strong strategy for enhancing CLIP performance, with raw captions being identified as an indispensable component [16, 31]. In contrast, our experiments demonstrate that when training LMMs, the exclusive use of a single caption type (CapsFusion) can outperform both raw and synthetic captions. Synthetic Captions About at Small Scale.A noteworthy observation is that synthetic caption demonstrates exceptional results on the 10M dataset (102.5 COCO CIDEr), while exhibiting inferior performance (96.93 COCO CIDEr) on the larger-scale 100M dataset. This aligns with our earlier observation of the _scalability deficiency_ issue in synthetic captions, a potential threat to the effective training of LMMs. Even at smaller scales, it is worth noting that the effectiveness of synthetic captions consistently falls behind that of CapsFusion across all datasets. ### Sample Efficiency In addition to comparing performance across different dataset scales, we probe deeper into training sample efficiency. In Tab. 1, we find that with only 10M image-text pairs, CapsFusion captions outperform other captions with much larger scale (50M and 100M), demonstrating exceptional sample efficiency. We visualize the updates of evaluation metrics on NoCaps, TextCaps, Flickr30K, and COCO benchmarks when the number of seen training samples increases from 0 to 100 million image-text pairs, presented in Fig. 7. 
The horizontal grey dashed lines approximately represent the best-saturated performance of baseline captions when trained with 100 million image-text pairs. The vertical dashed line reveals the number of samples employed by CapsFusion to achieve a similar level of performance as the best-performing baseline captions. It is worth noting that CapsFusion attains the same level of performance as the best baseline captions with only 6M, 9M, 6M, and 8M samples for NoCaps, TextCaps, Flickr30K, and COCO captions, respectively. This achievement underscores CapsFusion's ability of 11-16 times speedup and demonstrates its superior sample efficiency. ### Scalability Analysis Scalability stands as a crucial attribute in large model training. We further inspect the scalability of image-text pairs commonly employed in model training. Our investigation reveals that synthetic captions, among all the caption types considered, exhibit the worst scalability. This can be observed from Fig. 7 (a), (b), and (d), wherein the blue lines exhibit early saturation with a mere 30 million image-text pairs. Subsequently, their performance gradually deteriorates. In contrast, raw caption (orange lines) displays commendable scalability, with its performance showing a consistent upward trajectory as more training samples are involved. However, the inherent high noise level in raw caption hampers its ability to achieve strong performance. CapsFusion caption (red lines) exhibits remarkable scalability on all datasets, outperforming both synthetic and raw captions by a substantial margin throughout the entire scale. Our investigation reveals that synthetic captions have severe scalability limitations and are typically saturate with only 30 million pairs, after which _more computation imposes an adverse impact on model performance_. However, current synthetic caption datasets are typically much larger in scale (600M in LAION-COCO). We hope our findings raise concerns about the efficiency issue in training LMMs with such massive synthetic caption datasets. ### Further Evaluation on SEED-Bench Recently, there are new comprehensive benchmarks proposed for more thorough evaluations of LMMs on granular functionalities [6, 7, 18, 37, 58, 63]. We evaluate our proposed model on a representative benchmark, SEED-Bench [32], over its 9 image-text tasks (dataset details can be found in Sec. 9.). Results are presented in Tab. 4. We find CapsFusion outperforms raw and synthetic captions in 7 out of 9 evaluated tasks, which underscores the remarkable capabilities of CapsFusion in instance counting, instance interaction, scene understanding and other multi-modal functionalities. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline Captions & Scene U & Inst Iden & Inst Loc & Inst Attr & Inst cnt & Spatial Rel & Inst Inter & Vis Reason & Text Rec & Total \\ \hline Raw [44] & 57.9 & 51.2 & 39.8 & 47.7 & 44.6 & 35.3 & 47.4 & **48.6** & **34.1** & 48.7 \\ Synthetic [1] & 52.7 & 48.9 & 36.7 & 42.2 & 35.7 & 34.5 & 48.4 & 35.0 & 12.9 & 43.2 \\ CapsFusion & **58.8** & **52.7** & **41.0** & **48.0** & **46.3** & **35.9** & **57.7** & 47.1 & 20.0 & **49.8** \\ \hline \hline \end{tabular} \end{table} Table 4: Zero-shot evaluation of models trained with different caption datasets on SEED-Bench. ### Qualitative Evaluation on World Knowledge In Fig. 3 and Fig. 9 (Appendix), we provide a qualitative evaluation on the outputs generated by models trained with different datasets. 
The first row shows the input image with the text prompt; the lower three rows show the outputs from models trained on raw, synthetic, and CapsFusion captions. We observe that models trained on raw and CapsFusion captions exhibit rich real-world knowledge: they are able to identify celebrities (Fig. 3 image 1 and 2), recognize famous artworks (Fig. 9 image 2), attribute literary works to their authors (Fig. 3 image 2), and pinpoint the location where the specific event in the image occurred (Fig. 3 image 3). Models trained on synthetic captions lose such capabilities entirely. ### Effects when Unfreezing the LLM Prior experiments freeze the LLM part, which saves resources and maintains the LLM's proficiency to a certain degree. Here we investigate the impact of employing different training captions when unfreezing the LLM. We conduct experiments at the 10M scale, and results are summarized in Tab. 5. Notably, we observe a significant decline in the performance of synthetic captions on the COCO Captions dataset. This stark drop indicates a potential deterioration in the LLM's capabilities when it is trained on the simplified language of synthetic captions. Furthermore, we assess the language performance of the LLM component on the MMLU [24] benchmark. Compared to the original LLM model (LLaMA-2-7B), models trained on synthetic captions experience the most performance degradation, while models trained on raw and CapsFusion captions exhibit relatively less degradation in terms of language performance. ## 5 Conclusion In this work, we identify severe _Scalability Deficiency_ and _World Knowledge Loss_ issues in LMMs trained with synthetic captions. On the other hand, raw web-based image-text pairs possess rich world knowledge but are too noisy to achieve decent performance. We thus propose CapsFusion, an advanced framework to generate high-quality captions in a scalable and effective manner. The resulting CapsFus-120M dataset exhibits all-around superiority over existing image-text datasets in terms of model performance, efficiency, world knowledge depth, and scalability. These advantages position CapsFusion as a promising framework that can generate large-scale high-quality image-text data for future scalable LMM training.
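To make the caption-fusion interface of Section 3 concrete, the following minimal sketch assembles a fusion prompt from a raw and a synthetic caption; the wording paraphrases the instruction fragments quoted in the text (the full template is given in Sec. 7 of the paper), so the exact strings and the function name are our own assumptions.

```python
def build_fusion_prompt(raw_caption: str, synthetic_caption: str) -> str:
    """Assemble the three-part fusion prompt: task description, caption
    properties, and desired output specification (paraphrased, not the
    paper's exact template)."""
    instruction = (
        "Please merge the information from the two provided sentences. "
        "The raw caption offers detailed real-world information but may have "
        "flawed sentence structure and grammar; the synthetic caption has a "
        "clean structure but often lacks in-depth real-world details and may "
        "contain false information. Ensure a well-structured sentence that "
        "retains the detailed real-world information of the raw caption, and "
        "avoid simply concatenating the two sentences."
    )
    return (
        f"{instruction}\n"
        f"Raw caption: {raw_caption}\n"
        f"Synthetic caption: {synthetic_caption}\n"
        f"Fused caption:"
    )
```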
Large multimodal models demonstrate a remarkable generalist ability to perform diverse multimodal tasks in a zero-shot manner. Large-scale web-based image-text pairs contribute fundamentally to this success, but suffer from excessive noise. Recent studies use alternative captions synthesized by captioning models and have achieved notable benchmark performance. However, our study reveals significant scalability deficiency and world knowledge loss issues in models trained with synthetic captions. Examining these issues more closely, we identify the root cause as the overly simplified language structure and lack of knowledge details in existing synthetic captions. To provide higher-quality and more scalable multimodal pretraining data, we propose CapsFusion, a new framework that leverages large language models to consolidate and refine information from both web-based image-text pairs and synthetic captions. Extensive experiments show that CapsFusion captions exhibit all-round superiority over existing captions in terms of model performance, sample efficiency, world knowledge depth, and scalability.
2310.20236
Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning
Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time, and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T, and E2D) hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.
Fei Cheng, Masayuki Asahara, Ichiro Kobayashi, Sadao Kurohashi
2023-10-31T07:41:24
http://arxiv.org/abs/2310.20236v1
Dynamically Updating Event Representations for Temporal Relation Classification with Multi-category Learning ###### Abstract Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e. event, time and document creation time (DCT). It leads to two crucial limits: 1) Two TLINKs involving a common mention do not share information. 2) Existing models with independent classifiers for each TLINK category (E2E, E2T and E2D) 1 hinder from using the whole data. This paper presents an event centric model that allows to manage dynamic event representations across multiple TLINKs. Our model deals with three TLINK categories with multi-task learning to leverage the full size of data. The experimental results show that our proposal outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data. Footnote 1: Time-to-Time (T2T) is not included in this paper, as we focus on event centric representations. ## 1 Introduction Reasoning over temporal relations relevant to an event mentioned in the document can help us understand when the event begins, how long it lasts, how frequent it is, and etc. Starting with the TimeBank (Pustejovsky et al., 2003) corpus, a series of temporal competitions (TempEval-1,2,3) (Verhagen et al., 2009, 2010; UzZaman et al., 2012) are attracting growing research efforts. Temporal relation classification (TRC) is the task to predict a temporal relation (_after_, _before_, _includes_, etc.) of a TLINK from a source mention to a target mention. Less effort has been paid to explore the sharing information across 'local' pairs and TLINK categories. In recent years, a variety of dense annotation schemas are proposed to overcome the'sparse' annotation in the original Timebank. A typical one is the Timebank-Dense (TD) corpus (Chambers et al., 2014), which performs a compulsory dense annotation with the complete graph of TLINKs for the mentions located in two neighbouring sentences. Such dense annotation increases the chance of pairs sharing common events and demands of managing 'global' event representations across pairs among TLINK categories. However, globally managing event representations of a whole document takes an extremely heavy load for the dense corpora. Timebank-Dense contains around 10,000 TLINKs in only 36 documents and is 7 times denser than the original Timebank. Thus, we propose a simplified scenario called Source Event Centric TLINK (SECT) chain. For each event \(e_{i}\) in a document, we group all TLINKs containing the common source event \(e_{i}\) into the \(e_{i}\) centric TLINK chain and align them with the chronological order of the target mentions appearing in the document. We assume that our system is capable of learning dynamic representations of the centric event \(e_{i}\) along the SECT chain via a 'global' recurrent neural network (RNN). \(DCT\): 1998-02-27 _An intense **manhunt**_(\(e_{\mathbf{1}}\))** conducted by the FBI and the bureau of alcohol, tobacco and firearms **continues**_(\(e_{\mathbf{2}}\))** for Rudolph in the wilderness of western north Carolina. And **this week** (**\(t_{\mathbf{1}}\)**)_, FBI director Louie Freeh assigned more agents to the **search**_(\(e_{\mathbf{3}}\))_. We demonstrate our proposal with the above adjacent-sentence excerpt in Timebank-Dense. '\((e_{s},e_{t})\)' denotes a directed TLINK from the source \(e_{s}\) to target \(e_{t}\) in this paper. 
Considering the _'manhunt**_(\(e_{1}\))' centric chain: \(\{(e_{1},DCT),(e_{1},e_{2}),(e_{1},t_{1}),(e_{1},e_{3})\}\)2, _'manhunt'_ holds a _'includes'_ relation to _'continues'_. We assume that dynamically updating the representation of '_manhunt_' in the early step '\((e_{1},e_{2})\)' will benefit the prediction for the later step \((e_{1},e_{3})\) to '_search_'. '_manhunt_' is supposed to hold the same '_includes_' relation to '_search_', as the search should be included in the continuing manhunt. Our model further exploits a multi-task learning framework to leverage all three categories of TLINKs in the SECT chain scope. A common BERT Devlin et al. (2019) encoder layer is applied to retrieve token embeddings. The global RNN layer manages the dynamic event and TLINK presentations in the chain. Finally, our system feeds the TLINK representations into their corresponding category-specific (E2D, E2T and E2E) classifiers to calculate a combined loss. The contribution of this work is listed as follows: 1) We present a novel source event centric model to dynamically manage event representations across TLINKs. 2) Our model exploits a multi-task learning framework with two common layers trained by a combined category-specific loss to overcome the data isolation among TLINK categories. The experimental results suggest the effectiveness of our proposal on two datasets. All the codes of our model and two baselines is released. 3 Footnote 3: [https://github.com/racerandom/NeuralTime](https://github.com/racerandom/NeuralTime) ## 2 Related Work ### Temporal Relation Classification Most existing temporal relation classification approaches focus on extracting various features from the textual sentence in the local pair-wise setting. Inspired by the success of neural networks in various NLP tasks, Cheng and Miyao (2017); Meng et al. (2017); Vashishtha et al. (2019); Han et al. (2019, 2019) propose a series of neural networks to achieve accuracy with less feature engineering. However, these neural models still drop in the pair-wise setting. Meng and Rumshisky (2018) propose a global context layer (GCL) to store/read the solved TLINK history upon a pre-trained pair-wise classifier. However, they find slow converge when training the GCL and pair-wise classifier simultaneously. Minor improvement is observed compared to their pair-wise classifier. Our model is distinguished from their work in three focuses: 1) We constrains the model in a reasonable scope, i.e. SECT chain. 2) We manages dynamic event representations, while their model stores/reads pair history 3) Our model integrates category-specific classifiers by multi-task learning, while they use the categories as the features in one single classifier. ### Multi-task Transfer Learning For the past three years, several successful transfer learning models (ELMO, GPT and BERT) Peters et al. (2018); Radford et al. (2018); Devlin et al. (2019) have been proposed, which significantly improved the state-of-the-art on a wide range of NLP tasks. Liu et al. (2019) propose a single-task batch multi-task learning approach over a common BERT to leverage a large mount of cross-task data in the fine-tuning stage. In this work, our model deals with various categories of TLINKs (E2E, E2T and E2D) in a batch of SECT chains to calculate the combined loss with the category-specific classifiers. ### Non-English Temporal Corpora Less attention has been paid for non-English temporal corpora. Until 2014, Asahara et al. 
started the first corpus-based study, BCCWJ-Timebank (BT), on Japanese temporal information annotation. We explore the feasibility of our model on this Japanese dataset. ## 3 Overview of Proposed Model Figure 1 demonstrates the overview of our Source Event Centric (SEC) model with the previous \(e_{1}\) centric chain example \(\{(e_{1},DCT),(e_{1},e_{2}),(e_{1},t_{1}),(e_{1},e_{3})\}\) in § 1. Figure 1: The overview of the proposed model. ### BERT Sentence Encoder We apply a pre-trained BERT for retrieving token embeddings of input sentences. For a multiple-token mention, we treat the element-wise sum of token embeddings as the mention embedding. ### Source Event Centric RNN After the BERT layer processing, the system collects all the mention embeddings appearing in the chain: \(\{R_{e_{1}},R_{DCT},R_{e_{2}},R_{t_{1}},R_{e_{3}}\}\)4. Footnote 4: As DCT is not explicitly mentioned in documents, we set \(R_{DCT}\) as a trainable embedding. Our model employs a 'global' two-layer gated recurrent unit (GRU) that runs left-to-right to simulate the chronological order of the SECT chain and updates the centric \(e_{1}\) embedding. The original \(e_{1}\) embedding \(R_{e_{1}}\) is sent into the GRU as the initial hidden state. At the \(i\)-th TLINK step, the system inputs the target mention embedding to update the \(i\)-th \(e_{1}\) embedding \(R_{e_{1}}^{i}\), which is used to generate the \((i+1)\)-th step TLINK embedding \(T^{i+1}\). As shown in Figure 1, the \(3\)-rd TLINK embedding \(T_{(e_{1},t_{1})}^{3}\) is the concatenation of the \(2\)-nd step \(R_{e_{1}}^{2}\) and the target embedding \(R_{t_{1}}\), as follows: \[R_{e_{1}}^{2}=\max(R_{e_{1}},\mathrm{GRU}(R_{e_{2}},h_{1})) \tag{1}\] \[T_{(e_{1},t_{1})}^{3}=[R_{e_{1}}^{2};R_{t_{1}}] \tag{2}\] The element-wise \(\max\) keeps the initial \(R_{e_{1}}\) as an anchor, preventing the quality of the updated hidden states from degrading after long sequential updating. ### Multi-category Learning After obtaining all the TLINK embeddings \(\{T_{(e_{1},DCT)}^{1},T_{(e_{1},e_{2})}^{2},T_{(e_{1},t_{1})}^{3},T_{(e_{1},e_{3})}^{4}\}\) in the SECT chain via the previous two common layers, the system feeds them into the corresponding category-specific classifiers. Each classifier consists of one fully-connected linear layer followed by a Softmax layer. The system calculates the combined loss as follows to perform multi-category learning. \[L=L_{E2E}+L_{E2T}+L_{E2D} \tag{3}\] ## 4 Experiments and Results We conduct experiments applying the SEC model to both the English TD and Japanese BT corpora. Juman++ [16]5 is adopted to perform morphological analysis of the Japanese text. The TD annotation adopts a 6-relation set (_after_, _before_, _simultaneous_, _includes_, _is_included_ and _vague_). We follow the 'train/dev/test' data split6 of the previous work. For BT, we follow a merged 6-relation set as in [20]. We perform document-level 5-fold cross-validation. In each split, we randomly select 15% of the documents as the dev set from the training set. The TLINK statistics of the two corpora are listed in Table 1. Footnote 5: [https://github.com/ku-nlp/jumanpp](https://github.com/ku-nlp/jumanpp) Footnote 6: www.usna.edu/Users/cs/nchamber/caevo Footnote 7: github.com/huggingface/transformers We adopt the English and Japanese pre-trained 'base' BERT8 and empirically set the RNN hidden size equal to the BERT hidden size, with 4 SECT chains per batch, 20 epochs, and AdamW (lr=5e-5). The other hyper-parameters are selected based on the dev micro-F1. All reported results are 5-run averages.
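As an illustration of the chain update in Eqs. (1)-(2) and the combined loss in Eq. (3), the following minimal PyTorch sketch walks a two-layer GRU over one SECT chain; it is not the released code, and the way the single centric embedding initializes the two-layer hidden state is our own assumption.

```python
import torch

class SECChain(torch.nn.Module):
    """Source Event Centric update of Eqs. (1)-(2): a left-to-right GRU walks
    over the target-mention embeddings of one SECT chain, and after each step
    the centric-event embedding is refreshed with an element-wise max against
    its original BERT embedding (the anchor)."""

    def __init__(self, hidden: int):
        super().__init__()
        self.gru = torch.nn.GRU(hidden, hidden, num_layers=2, batch_first=True)

    def forward(self, centric, targets):
        # centric: (hidden,) original embedding of the source event.
        # targets: list of (hidden,) target-mention embeddings in chronological order.
        h = centric.repeat(2, 1).unsqueeze(1)            # init hidden for both layers (assumed)
        tlinks, current = [], centric
        for tgt in targets:
            tlinks.append(torch.cat([current, tgt]))     # T^{i} = [R_e^{i-1}; R_target]
            out, h = self.gru(tgt.view(1, 1, -1), h)
            current = torch.max(centric, out.view(-1))   # Eq. (1): anchored element-wise max
        return tlinks  # routed to the E2E / E2T / E2D classifiers; losses summed as in Eq. (3)
```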
As there are no directly comparable transfer learning approaches, we build two BERT baselines as follows (fine-tuning for 5 epochs with a batch size of 16): * **Local-BERT**: The concatenation of the two mention embeddings serves as the TLINK embedding and is fed into the independent category-specific classifier. * **Multi-BERT**: The multi-category setting of Local-BERT, as in [17]. Each time, the system pops out a single-category batch, encodes it via the common BERT, and feeds it to the category-specific classifier. \begin{table} \begin{tabular}{l|c c c c|c} \hline \hline **Corpus** & **E2D** & **E2T** & **E2E** & **MAT** & **SECT** \\ \hline English & 1,494 & 2,001 & 6,088 & - & 5.5 \\ Japanese & 2,873 & 1,469 & 1,862 & 776 & 2.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of TLINKs in the English and Japanese corpora. ‘SECT’ denotes the average TLINK number per SECT chain. ‘MAT’ is defined in § 4.3. Figure 2: Dev performance (micro-F1) of three training strategies on two datasets. 'Local-BERT' and 'Multi-BERT' serve as the baselines in the ablation test for the proposed 'SEC' model. 'Local-BERT' is the 'SEC' model with both the global RNN and multi-category learning removed. 'Multi-BERT' is the 'SEC' model with the global RNN removed. ### Asynchronous Training Strategy Fine-tuning BERT while simultaneously training the SEC RNN is difficult. Standard fine-tuning requires only 3 to 5 epochs, which indicates that the pre-trained model tends to overfit quickly. However, the SEC RNN is randomly initialized and requires more training epochs. We therefore compare three training strategies: * **no freeze** of the BERT sentence encoder * **freeze** of the BERT sentence encoder * **freeze after \(k\) epochs** Figure 2 shows the validation micro-F1 over all TLINKs against the training epochs for the above asynchronous training strategies. **no freeze** confirms our concern: the curve undulates after the initial 3 epochs. **freeze** yields a stable learning phase but starts from the lowest performance. **freeze after \(k\) epochs** balances stability and high F1. Therefore, we adopt the third strategy for all the following experiments. The number \(k\) is selected from \(\{3,4,5\}\) based on the validation scores. ### Main Timebank-Dense Results Table 2 shows the experimental results on the English TD corpus. 'CATENA' (Mirza and Tonelli, 2016) is a feature-based model combined with dense word embeddings. 'SDP-RNN' (Cheng and Miyao, 2017) is a dependency-tree-enhanced RNN model. 'GCL' (Meng and Rumshisky, 2018) is the global context layer model introduced in § 2.1. 'Fine-grained TRC' (Vashishtha et al., 2019) is an ELMo-based fine-grained TRC model with only the E2E results reported. It is not surprising that the proposed model substantially outperforms state-of-the-art systems, as the existing SOTA did not exploit BERT yet. Therefore, we offer the ablation test with 'Local-BERT' (w/o multi-category learning and the global SEC RNN) and 'Multi-BERT' (w/o the global SEC RNN) to investigate the benefits of our two contributions. The 'SEC' model obtains +3.2, +6.8, and +5.2 F1 improvements compared to 'Local-BERT', which suggests the effectiveness of our two main proposals. The 'SEC' model further outperforms 'Multi-BERT' by a 3.6 F1 gain on the majority category E2E, 1.0 on E2T, and 0.7 on E2D, which indicates the impact of the global SEC RNN. A main finding is that E2E obtains higher gains from 'global' contexts, compared to E2T and E2D.
It matches the intuition that events are more globally contextualized and time expressions are usually more self-represented (e.g. normalized time values). E2D mainly requires contextual information from the single sentences by the BERT encoder. E2T takes less advantage of BERT, while multi-category training with E2E, E2D can significantly improves its performance. ### Results on Non-English Data Table 3 shows the results in the Japanese corpus. Different from the TD annotation schema, BT specifies two E2E categories for fitting the Japanese language: 1) E2E: between two consecutive events, 2) MAT: between two consecutive matrix verb events. The state-of-the-art system on BT is the feature \begin{table} \begin{tabular}{l c c c} \hline \hline **Models** & **E2D** & **E2T** & **E2E** \\ \hline Majority Vote & 32.3 & 40.6 & 47.7 \\ \hline _local Models_ & & & \\ CATENA (2016) & 53.4 & 46.8 & 51.9 \\ SDP-RNN (2017) & 54.6 & 47.1 & 52.9 \\ Fine-grained TRC (2019) & - & - & 56.6 \\ Local-BERT & 62.7 & 49.4 & 59.8 \\ \hline _local + multi-category Models_ & & & \\ Multi-BERT & 65.2 & 54.8 & 61.4 \\ \hline _global + multi-category Models_ & & & \\ GCL (2018) & 48.9 & 48.7 & 57.0 \\ **SEC (proposed)** & **65.9** & **55.8** & **65.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Temporal relation classification results (micro F1) on the English Timebank-Dense. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Models** & **E2D** & **E2T** & **E2E** & **MAT** \\ \hline Majority Vote & 68.3 & 50.4 & 43.2 & 39.3 \\ \hline _local Models_ & & & & \\ Yoshikawa (2014) & 75.6 & 55.7 & 59.9 & 50.0 \\ Local-BERT & 80.7 & 58.9 & 61.2 & 54.1 \\ \hline _local + multi-category Models_ & & & \\ Multi-BERT & 81.4 & **61.0** & 63.3 & 61.6 \\ \hline _global + multi-category Models_ & & & \\ **SEC (proposed)** & **81.6** & 60.7 & **64.5** & **64.6** \\ \hline \hline \end{tabular} \end{table} Table 3: Temporal relation classification results (micro F1) on the Japanese BCCWJ-Timebank. based approach [32]. The comparisons are similar to the English data. Our 'SEC' obtains the substantial improvements compared to their work and two BERT baselines. An interesting observation is that MAT TLINKs are usually inter-sentence located at the end of SECT chains, as Japanese is a 'SOV' language. The results indicate that long distance MAT suffers from the low-quality representations in the 'local' setting and benefits from 'global' representation more. ## 5 Conclusion This paper presents a novel transfer learning based model to boost the performance of temporal information extraction task especially for densely annotated dataset. Our model can dynamically update event representations across multiple TLINKs in a Source Event Centric chain scope. Our model exploits a multi-category learning framework to leverage the total data of three TLINK categories. The empirical results show that our proposal outperforms the state-of-the-art systems and the ablation tests suggest the effectiveness of two main proposals. The Non-English experiments support the feasibility of our system on the Japanese data.
Temporal relation classification is a pair-wise task for identifying the relation of a temporal link (TLINK) between two mentions, i.e., an event, a time, or the document creation time (DCT). This leads to two crucial limitations: 1) two TLINKs involving a common mention do not share information; 2) existing models with independent classifiers for each TLINK category (E2E, E2T, and E2D) are prevented from using the whole data. This paper presents an event-centric model that makes it possible to manage dynamic event representations across multiple TLINKs. Our model handles the three TLINK categories with multi-task learning to leverage the full size of the data. The experimental results show that the proposed model outperforms state-of-the-art models and two transfer learning baselines on both the English and Japanese data.
2309.12481
HANS, are you clever? Clever Hans Effect Analysis of Neural Systems
Instruction-tuned Large Language Models (It-LLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models' abilities. However, earlier works are demonstrating the presence of inherent "order bias" in It-LLMs, posing challenges to the appropriate evaluation. In this paper, we investigate It-LLMs' resilience abilities towards a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and brings into discussion reasoning abilities. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias by obtaining more robust models.
Leonardo Ranaldi, Fabio Massimo Zanzotto
2023-09-21T20:52:18
http://arxiv.org/abs/2309.12481v2
# HANS, are you clever? Clever Hans Effect Analysis of Neural Systems ###### Abstract Instruction-tuned Large Language Models (ItLLMs) have been exhibiting outstanding abilities to reason around cognitive states, intentions, and reactions of all people involved, letting humans guide and comprehend day-to-day social interactions effectively. In fact, several multiple-choice questions (MCQ) benchmarks have been proposed to construct solid assessments of the models' abilities. However, earlier works are demonstrating the presence of inherent "order bias" in It-LLMs, posing challenges to the appropriate evaluation. In this paper, we investigate It-LLMs' resilience abilities towards a series of probing tests using four MCQ benchmarks. Introducing adversarial examples, we show a significant performance gap, mainly when varying the order of the choices, which reveals a selection bias and brings into discussion reasoning abilities. Following a correlation between first positions and model choices due to positional bias, we hypothesized the presence of structural heuristics in the decision-making process of the It-LLMs, strengthened by including significant examples in few-shot scenarios. Finally, by using the Chain-of-Thought (CoT) technique, we elicit the model to reason and mitigate the bias by obtaining more robust models. ## 1 Introduction The intensifying dispute on AI abilities has led to the evolution of robust evaluation methods to assess the actual limits of It-LLMs. Recently, many anecdotal examples have been used to suggest that It-LLMs such as GPTs (OpenAI, 2023) and Llama (Touvron et al., 2023) are proficient at understanding that people have ideas, thoughts, emotions, and preferences, which is referred to the Neural Theory of Mind (N-ToM) (Sap et al., 2022). Although these abilities have been observed, earlier works advance conflicting conclusions showing that many solved tasks rely on memorization and superficial heuristics, as well-known as Clever Hans Effect. In fact, it seems that It-LLMs are very sensitive to the arrangement of components in prompts (Zhu et al., 2023), as it directly affects the evaluation of their ability to understand and reason about specific tasks (Wang et al., 2023; Lu et al., 2023). In light of these findings, our research question arises: Do It-LLMs really have N-ToM abilities, or is it a Clever Hans Effect? In this paper, we propose a systematic evaluation using several benchmarks with the multiple-choice questions (MCQ) format to investigate the interplay between N-ToM and Clever Hans Effect. In order to probe the abilities of It-LLMs, we introduce different adversarial strategies by varying the order and altering the content of choices both in zero- and few-shot scenarios. We conduct different experiments using different versions of Llama (Touvron et al., 2023, 20), Vicuna (Chiang et al., 2023), and Falcon (Almazrouei et al., 2023) on four different MCQ benchmarks. Hence, by using PIQA (Bisk et al., 2019), OpenBookQA (Mihaylov et al., 2018), CommonsenseQA (Talmor et al., 2019), Social IQA (Sap et al., 2019) we demonstrate that It-LLMs have particular N-ToM abilities, but they are not robust. More specifically, behind in-depth analyses in a zero-shot scenario, we discover a substantial sensitivity gap on average between the original and adversarial benchmarks. Following, we tested different settings in a few-shot scenario, where we observed that introducing examples in the input prompt led to marginal improvements in the robustness of the It-LLMs. 
These results led us to hypothesize that the considerable sensitivity to prompting emerges from the It-LLMs' positional bias, in that they tend to favor specific structures. Clever Hans heuristics therefore emerge, as the choice is not made through reasoning ability. Nevertheless, the integration of demonstrations within the input prompts has manifested as a salient mechanism, markedly enhancing the predictive accuracy of It-LLMs. In addition, the Chain-of-Thought paradigm yields two advantages: it fortifies both the robustness and the interpretative stability of the models while concurrently attenuating the positional bias. These methodological augmentations suggest emergent N-ToM abilities, indicating a more profound and contextually attuned linguistic grasp. Our findings can be summarized as follows: * It-LLMs, while lacking robust N-ToM abilities, often resort to structural heuristics; * When instructed appropriately via few-shot demonstrations, the stability of It-LLMs improves considerably; * Adopting a step-by-step methodology elicits richer reasoning abilities within It-LLMs, resulting in more consistent results. Via these studies, we contribute to a deeper understanding of how the order of options influences the decision-making process of It-LLMs in multiple-choice questions and offer practical solutions to increase robustness and reliability in such tasks. ## 2 Empirical Investigation & Analysis Intending to empirically assess the divide between the Neural Theory of Mind abilities and the Clever Hans traps into which Instruction-tuned Large Language Models (It-LLMs) could fall, we propose a series of experiments in which we use the four question-answering benchmarks presented in Section 2.1 and the adversarial setups introduced in Section 2.2. ### Speculative Benchmark An essential component of the Theory of Mind (ToM) is the ability to reason about the intentions and reactions of participants in social interactions. To measure it in LLMs, i.e., the Neural ToM (N-ToM), with empirical methods, Sap et al. (2022) used Social IQa (Sap et al., 2019). In our work, we extend the study by also considering PIQA (Bisk et al., 2019), OpenBookQA (Mihaylov et al., 2018), and CommonsenseQA (Talmor et al., 2019). Table 1 shows one example for each dataset. The common factor in these datasets is the question-answering format, as they are all multiple-choice questions (MCQ). This format makes it easier to edit the prompt and observe the output. In particular, the selected datasets deal with the following topics: Figure 1: We proposed three different prompts: the original prompt consisting of the Question and the Choices, and two adversarial prompts consisting of the Question and a different order of the Choices (the example is taken from OpenBookQA). OpenBookQA is a resource that contains questions requiring multi-step reasoning, common knowledge, and rich text comprehension. It is modeled after open-book exams for evaluating human understanding of a topic. CommonsenseQA is one of the best-known multiple-choice question-answering datasets dealing with different types of general common-sense knowledge. Physical Interaction Question Answering (PIQA) is a resource consisting of a series of everyday situations with a pair of typical or atypical solutions. The choice of the most appropriate solution is binary. Social Interaction Question Answering (Social IQa) is a benchmark focusing on reasoning about people's actions and their social implications. 
The actions in Social IQa cover various social situations, with candidate answers that are plausible and not plausible. Hence, we select benchmarks that share the same MCQ structure but differ in the number of choices, ranging from the five choices of CommonsenseQA to the four of OpenBookQA, the three of Social IQa, and finally the two of PIQA. This choice allowed us to conduct the different types of analysis introduced later. ### Adversarial Shuffling The It-LLMs' impressive knowledge and desirable N-ToM abilities can be empirically assessed through a series of benchmarks. However, these abilities should persist in the presence of alterations such as a change in the order of the choices in an MCQ. To probe robustness, we introduce probing experiments that change the position of the target choice. In particular, we propose two different versions: in the first, we insert the target choice as the first option, and in the second, we insert it as the last option. We define these as "First Target" and "Last Target", as shown in the blue and red blocks in Figure 1. ## 3 Experiments In order to investigate the open question of social intelligence and Theory of Mind in modern NLP models from an empirical viewpoint, we extended the evaluations of Sap et al. (2022) to a series of Speculative Benchmarks (Section 2.1) altered with appropriately constructed Adversarial Shuffling (Section 2.2) prompts. Then, to assess the factual abilities of the Instruction-tuned Large Language Models (It-LLMs), we set up several baseline models (Section 3.1), which we probed with different approaches (Section 3.2). Hence, we performed a series of systematic evaluations to observe the impact of the proposed methods. ### Instruction-tuned LLMs In this paper, in order to produce an empirical analysis of the objective abilities of different Large Language Models (LLMs), we use four Instruction-tuned LLMs (It-LLMs). Their power stems from a novel tuning approach called instruction-tuning: these It-LLMs are LLMs fine-tuned on instruction-following demonstrations (Ouyang et al., 2022) and, like an important part of the currently in-vogue LLMs, have a decoder-only architecture at their base. \begin{table} \begin{tabular}{l|c} **Dataset** & **Example** \\ \hline \hline OpenBookQA (Mihaylov et al., 2018) & _When birds migrate south for the winter, they do it because_ \\ & **A) they are genetically called to.** B) their children ask them to. \\ & **C) it is important to their happiness.** D) they decide to each. \\ \hline Social IQa (Sap et al., 2019) & _Taylor gave help to a friend who was having trouble keeping up with their bills._ \\ & _What will their friend want to do next?_ A) Help the friend find a higher \\ & paying job. **B) Thank Taylor for the generosity.** C) pay some of their late employees. \\ \hline PIQA (Bisk et al., 2019) & _How do you attach toilet paper to a glass jar?_ **A) Press a piece of double-sided** \\ & **tape to the glass jar and then press the toilet paper onto the tape.** \\ & B) Spread mayonnaise all over the jar with your palms and then roll the jar in toilet paper. \\ \hline CommonsenseQA (Talmor et al., 2019) & _Aside from water and nourishment what does your dog need?_ \\ & A) bone. B) charm. C) petted. \\ & **D) lots of attention.** E) walked. \\ \hline \end{tabular} \end{table} Table 1: Examples of the datasets used in this paper. 
\begin{table} \begin{tabular}{l|c} **Model** & **Backbone** \\ \hline \hline Alpaca-13b (Taori et al., 2023) & Llama \\ Vicuna-13b (Chiang et al., 2023) & Llama \\ Instruct-Falcon 7b (Almazrouei et al., 2023) & Falcon \\ Llama2-chat 13b (Touvron et al., 2023b) & Llama2 \\ \hline \end{tabular} \end{table} Table 2: Models used in our work, found on huggingface.co. We used all the default configurations proposed in the repositories for each model. Therefore, we experiment with models from different LLM families of similar sizes to avoid introducing critical differences. In particular, we use Alpaca-Lora, fine-tuned on Stanford instruction-following demonstrations (Taori et al., 2023) with Llama-13b (Touvron et al., 2023a) as its backbone; Llama-2-chat-13b, fine-tuned on custom data (Touvron et al., 2023b); Vicuna-13b (Chiang et al., 2023), fine-tuned on ShareGPT data; and Falcon-7b-instruct (Almazrouei et al., 2023), fine-tuned on RefinedWeb data (Penedo et al., 2023). For simplicity of notation in the following experiments, the models will be named as follows: Alpaca (Alpaca-Lora), Falcon (Falcon-7b-instruct), Vicuna (Vicuna-13b), Llama2 (Llama-2-chat-13b). These selected models, summarized in Table 2, are all accessible open-source on the Hugging Face platform (Table 4). ### Experimental Setup & Evaluation It-LLMs seem to have interesting abilities, as introduced in Section 5. However, they also seem to be sensitive to the input they receive: they produce satisfactory answers only if they are properly prompted. To investigate whether their abilities are attributable to coincidental correlations or to inherited N-ToM abilities, we standardized the probing techniques to conduct systematic analyses that yield robust empirical results. Multiple-Choice Prompting We structure the prompts as follows: "Choose the answer to the question only from options [A, B, C, and D].", followed on a new line by "Question: {question}", then, again separated by a newline character, "Choices: {options}", and finally "Answer:". Zero- & Few-shot Prompting Furthermore, we conducted the experiments in zero-shot and one-shot scenarios. In the first case, the prompt consists of the introduction of the task, the question, and the possible choices (see Figure 1). In the second case, a prompt like the previous one was constructed, in which an example with the corresponding target was inserted (see Figure 5). Chain-of-Thought Prompting Finally, in order to elicit the reasoning abilities of the proposed models, we adopted the Chain-of-Thought (CoT) approach (Wei et al., 2023) by appending the formula "Let's think step by step" after "Answer:" in the input query (see Figure 5). Evaluation The most commonly used evaluation methods for MCQ tasks are language-model probing, where the option with the highest probability is chosen (Brown et al., 2020), and multiple-choice probing, where models are asked to respond in free text. In the former case, the evaluation is done with a function that takes the maximum value, while in the latter case it relies on string matching. The second method is widely used in recent evaluations because it applies to models such as GPT-x (GPT-3.5 and GPT-4) (OpenAI, 2023) that do not expose probabilities. We could use both methods in our experiments, but we selected the second for a comparable and scalable pipeline: we performed string matching between the generated outputs and the target choice. 
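For concreteness, the sketch below shows one way the probing pipeline described in this section could be assembled in Python: building the multiple-choice prompt with the wording given above, moving the gold answer to the first or last position for the "First Target" and "Last Target" variants, optionally appending the CoT formula, and scoring a generation by string matching. The helper names and the letter-extraction heuristic are illustrative assumptions and are not taken from the authors' code.

```python
import re

LETTERS = ["A", "B", "C", "D", "E"]

def reorder_choices(choices, target_idx, mode="original"):
    """Return (choices, new_target_idx) with the gold answer moved to the requested
    position; 'original' keeps the benchmark order ('first' / 'last' adversarial variants)."""
    choices = list(choices)
    gold = choices.pop(target_idx)
    if mode == "first":
        return [gold] + choices, 0
    if mode == "last":
        return choices + [gold], len(choices)
    choices.insert(target_idx, gold)
    return choices, target_idx

def build_prompt(question, choices, cot=False):
    """Zero-shot multiple-choice prompt in the style described above."""
    letters = LETTERS[: len(choices)]
    options = "\n".join(f"{l}) {c}" for l, c in zip(letters, choices))
    prompt = (
        f"Choose the answer to the question only from options {letters}.\n"
        f"Question: {question}\n"
        f"Choices:\n{options}\n"
        "Answer:"
    )
    if cot:
        prompt += " Let's think step by step."
    return prompt

def is_correct(generated, choices, target_idx):
    """String-matching evaluation: accept the gold letter 'X)' or the gold answer text."""
    gold_letter = LETTERS[target_idx]
    letters_found = re.findall(r"\b([A-E])\)", generated)
    if letters_found:
        return letters_found[0] == gold_letter
    return choices[target_idx].lower() in generated.lower()

# Example: "Last Target" variant of an OpenBookQA-style item
choices, gold = reorder_choices(
    ["they are genetically called to.", "their children ask them to.",
     "it is important to their happiness.", "they decide to each."], 0, mode="last")
print(build_prompt("When birds migrate south for the winter, they do it because", choices))
print(is_correct("D) they are genetically called to.", choices, gold))
```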
## 4 Results Looking for evidence that a Theory of Mind (ToM) has been inherited by Neural Minds is like looking for a drop in the ocean. The results in Table 3 show the fluctuations in the performances obtained from Instruction-tuned Large Language Models (It-LLMs) on more straightforward patterns (Section 4.1). \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{3}{c}{**OpenBookQA**} & \multicolumn{3}{c}{**Social IQA**} & \multicolumn{3}{c}{**CommonsenseQA**} & \multicolumn{3}{c}{**PIQA**} \\ \cline{2-13} & **Origin** & **First** & **Last** & **Origin** & **First** & **Last** & **Origin** & **First** & **Last** & **Origin** & **First** & **Last** \\ \hline Alpaca & 36.2 & +11.7 & -9.2 & 48.2 & +8.5 & -18.6 & 55.2 & +8.4 & -11.7 & 62.7 & +2.3 & -1.8 \\ Falcon & 54.8 & +3.2 & -13.6 & 57.5 & +3.6 & -14.5 & 60.2 & +5.3 & -7.8 & 68.6 & +1.7 & -0.9 \\ Vicuna & 58.1 & +3.9 & -8.6 & 60.3 & +3.1 & -6.4 & 66.4 & +6.3 & -6.4 & 74.2 & +1.9 & -1.2 \\ Llama2 & 61.2 & +3.6 & -5.8 & 65.6 & +4.3 & -5.2 & 80.5 & +2.3 & -4.6 & 82.5 & +1.6 & -1.2 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy on the benchmarks introduced in Section 2.1 with the original order of the choices ('Origin') and with the target choice shifted to the first ('First') or last ('Last') position. The specific position of the target choice causes drastic fluctuations in performance. However, although the evident gaps seem to be order-dependent, the performances obtained in the few-shot scenario are encouraging (Section 4.2). These data presaged a strong inclination toward Clever Hans effects. Therefore, we analyzed the impact of eliciting the reasoning of It-LLMs using prompting techniques (Section 4.3), which showed strong improvements. Fine-grained analysis revealed critical issues about the robustness of It-LLMs and their tendency toward Clever Hans effects; however, elicitation to reasoning produced thrilling results that opened the way for new hypotheses about the N-ToM abilities inherited by It-LLMs. ### Does the Order Matter? The order of the input prompts seems to have a considerable impact on the choices of the It-LLMs. In fact, as shown in Table 3, there are significant imbalances in accuracy as the position of the target option changes (see the differences in the 'First' and 'Last' columns). This positional bias manifests more in zero-shot scenarios, as also shown in Robinson et al. (2023); Zheng et al. (2023). Furthermore, the gaps differ between the benchmarks; e.g., in PIQA there are no significant differences, as it has only two possible choices. In addition to highlighting the presence of an order bias, this phenomenon provides factual evidence that the models are prone to adopt shallow heuristics when faced with several choices. For this reason, we analyzed in Section 4.4 whether the performances on the original benchmarks are partly supported by the instances with the first choice, i.e., option 'A)', as the original target. ### Could Few-shot Prompting be a solution? Although the It-LLMs are affected by order bias, they should also be sensitive to the structure of the prompt. Hence, we conduct experiments in a few-shot scenario, specifically one-shot. As introduced in Section 3.2, we constructed the prompt by providing a random instance–target pair from the benchmark under evaluation, as exemplified in Figure 5. 
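Under the same assumptions as the earlier sketch, and reusing its `build_prompt` and `LETTERS` helpers, the one-shot prompt just described could be assembled by prepending a randomly drawn demonstration together with its gold answer; the dictionary schema of the benchmark pool is hypothetical.

```python
import random

def build_one_shot_prompt(question, choices, pool):
    """One-shot variant: prepend a random (instance, target) demonstration drawn from
    the benchmark pool, then append the actual query built as in the zero-shot case.
    Each pool entry is assumed to be a dict with 'question', 'choices', 'target_idx'."""
    demo = random.choice(pool)
    demo_answer = f" {LETTERS[demo['target_idx']]}) {demo['choices'][demo['target_idx']]}"
    demo_block = build_prompt(demo["question"], demo["choices"]) + demo_answer + "\n\n"
    return demo_block + build_prompt(question, choices)
```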
As shown in Figure 2, constructing input prompts with question-answer demonstrations helped reduce the order bias, predominantly for the adversarial versions of the benchmarks considered (see the red columns in Figure 2). However, although the results were encouraging, providing examples in a few-shot scenario is not an optimal strategy for two reasons: firstly, it is not possible to analyze the models' own knowledge and abilities; secondly, providing examples very close to the question the model is supposed to answer could cause the model to fall into Clever Hans effects (Shapira et al., 2023). Figure 2: Evaluation results on the proposed benchmarks. First means that the target is the first choice. Last means that the target is the last choice. ### N-ToM Abilities or Prompting Techniques? Stimulating the generative abilities of It-LLMs could be the key. Figure 2 shows that the performance of the models prompted with Chain-of-Thought is more stable and significantly better. In particular, Llama2 and Vicuna benefited the most from this technique. Hence, while constructing prompts with strategically placed choices facilitates shallow heuristics, and providing examples produces Clever Hans effects, elicitation to step-by-step reasoning prompts the It-LLMs to consider the whole question together with its choices. Moreover, the selection among the various choices seems more robust, as the model appears less uncertain. However, this strategy does not always seem to have positive effects: Alpaca (Alpaca-Lora) and Falcon do not benefit to the same extent as the other two models. ### Ablation Study Deepening our analysis, we observed the presence of a bias in the order of choices. Indeed, as discussed in Section 4.1, there is a strong bias towards the first choice, i.e., 'A)'. Therefore, we examined whether this bias supports the performances on the original benchmarks. We then reproduced all the experiments after eliminating the instances whose target is the first choice. In this experiment, we did not consider PIQA, as it only has two choices and the results would therefore be uninformative. Our experiment in Figure 3 reveals a gap between the performances obtained with and without the 'simple' instances. This result shows that the performance on the evaluation benchmarks is indeed affected by positional bias. These gaps are not dramatic enough to invalidate all experiments, but they must be considered, as they could distort many evaluations. ## 5 Related Work ### Evaluation of Large Language Models Increasing confidence in It-LLMs requires a fundamental empirical assessment component. Traditional evaluation methods assess the ability to respond to instructions by calculating metrics such as BLEU, ROUGE, or BERTScore to compare the generated response with a reference response. However, these metrics fail to adequately measure the alignment of the generated response with human intent (He et al., 2023). Although human evaluation is considered the most accurate measure of model performance, it is expensive and time-consuming to perform at scale. Therefore, researchers have begun using It-LLMs to evaluate generative models' ability to follow human instructions (Zheng et al., 2023; Lu et al., 2023). Zheng et al. (2023) used GPT-4 (OpenAI, 2023) as an arbiter to compare the answers of two models. Figure 3: Accuracy on original benchmarks vs. corrupted benchmarks. The latter stem from the original ones by removing instances where the target choice is the first among the multiple options. However, Wang et al. 
(2023c,b) demonstrated several weaknesses in this method, giving rise to a proliferation of skepticism that has been reinforced by a series of works highlighting sensitivity to prompting (Lu et al., 2023) and instability in response generation (Wang et al., 2023b; Zhu et al., 2023). ### Question-answering Benchmark In parallel with the multiple validation techniques, numerous question-answering benchmarks have arisen, consisting of multiple subtasks characterized by multiple-choice questions. These benchmarks have been introduced as a method to assess reasoning skills (Artetxe et al., 2019; Lewis et al., 2020; Hendrycks et al., 2021; Suzgun et al., 2022) and factual abilities (Elsahar et al., 2018; Petroni et al., 2019). Despite the difficulties present in these tasks, great strides have been made, with language models achieving human-like performance on various benchmarks (OpenAI, 2023; Savelka et al., 2023; Licevin et al., 2023). However, using these tasks to effectively probe reasoning and other knowledge presents substantial challenges that deserve further investigation. ### Clever Hans Effect & Neural Theory of Mind The psychological analysis of Large Language Models seems to be an emerging field (Hewitt et al., 2023; Meng et al., 2023; Lamparth and Reuel, 2023). In recent years, studies on the emergent abilities of Large Language Models have proposed numerous theories (Wei et al., 2022; Kasneci et al., 2023). Some of these have been empirically proven, while others have remained only hypotheses and conjectures that are difficult to prove. Numerous studies have shown that LLMs can inherit certain Theories of Mind (ToM) from learning, defining these as N-ToM abilities (Le et al., 2019; Sap et al., 2019). However, numerous works have refuted these theories by attributing the observed behavior to the Clever Hans Effect (Shapira et al., 2023). The latter phenomenon has manifested in multiple forms on numerous well-known benchmarks (Webson and Pavlick, 2022; Carlini et al., 2023). In our contribution, we analyzed whether several open-source It-LLMs are able to defend themselves against the traps of the Clever Hans Effect by proposing a series of experiments. Through extensive analysis, we discovered that It-LLMs are prone to adopt superficial heuristics when their decisions are facilitated. On the other side of the coin, they can apply robust mechanisms when prompted to reason. This opens up attractive scenarios for the promising Chain-of-Thought techniques (Wei et al., 2023). ## 6 Future Works In future work, we plan to extend our experimentation to different models. On the one hand, we will extend our work to additional models, including GPT-3.5 and GPT-4 (OpenAI, 2023); on the other hand, we will study the impact and robustness of varying the parameters of the backbone model. These studies will allow us to observe the problem from more angles and investigate the consequences of shallow heuristics. We will also investigate Chain-of-Thought abilities in few-shot contexts to elicit models to reason about richer prompts. Last but not least, we will analyze the impact of further bias injection into the well-known benchmarks to observe whether the abilities of It-LLMs can overcome challenging scenarios. ## 7 Conclusion Instruction-tuned Large Language Models (It-LLMs) have been demonstrating interesting abilities in real-world understanding. Empirically assessing these abilities is a challenging task. 
In our contribution, we propose systematic evaluations through multiple-choice question (MCQ) benchmarks. Our study revealed an inherent order bias in these models. Through adversarial testing, we observed a significant discrepancy in performance, particularly when altering the sequence of options, underlining a prevailing selection bias that challenges the reasoning abilities of the It-LLMs. We identified a link between positional preferences and model selections, which led us to theorize the existence of structural heuristics guiding the decision-making process; this notion was further strengthened by incorporating relevant examples in few-shot contexts. Using Chain-of-Thought approaches allowed us to make the models introspect their decisions, thus reducing the observed bias and resulting in more reliable and robust It-LLMs. Our results revealed some limitations regarding robustness in zero-shot scenarios but simultaneously showed that the CoT approach enhances stability. Our future research will focus on proposing genuinely unseen benchmarks in order to evaluate the models' real abilities without looking through a distorted lens.
指示調整された大規模言語モデル(It-LLM)は、関係者の認知状態、意図、反応について推論する優れた能力を示しており、人間が日常的な社会的相互作用を効果的に導き、理解できるようにしています。実際、モデルの能力を確実に評価するために、複数選択式質問(MCQ)のベンチマークがいくつか提案されています。しかし、先行研究では、It-LLMに内在する「順序バイアス」の存在が示されており、適切な評価に課題をもたらしています。本論文では、4つのMCQベンチマークを用いた一連のプロービングテストに対するIt-LLMの頑健性を調査します。敵対的な例を導入することで、特に選択肢の順序を変化させた場合に大きな性能差が生じることを示し、これは選択バイアスを明らかにし、推論能力に疑問を投げかけます。位置バイアスによる先頭の選択肢とモデルの選択の相関関係を踏まえ、It-LLMの意思決定プロセスに構造的ヒューリスティクスが存在するという仮説を立て、few-shotシナリオに有意な例を含めることでこの仮説は強化されました。最後に、Chain-of-Thought(CoT)手法を用いてモデルに推論を促すことでバイアスを軽減し、より頑健なモデルを得ました。
2305.19601
Monitoring the evolution of dimensional accuracy and product properties in property-controlled forming processes
As recent trends in manufacturing engineering disciplines show a clear development in the sustainable as well as economically efficient design of forming processes, monitoring techniques have been gaining in relevance. In terms of monitoring of product properties, most processes are currently open-loop controlled, entailing that the microstructure evolution, which determines the final product properties, is not considered. However, a closed-loop control that can adjust and manipulate the process actuators according to the required product properties of the component will lead to a considerable increase in efficiency of the processes regarding resources and will decrease postproduction of the component. For most forming processes, one set of component dimensions will result in a certain set of product properties. However, to successfully establish closed-loop property controls for the processes, a systematic understanding of the reciprocity of the dimensions after forming and final product properties must be established. This work investigates the evolution of dimensional accuracy as well as product properties for a series of forming processes that utilize different degrees of freedom for process control.
Sophie Charlotte Stebner, Juri Martschin, Bahman Arian, Stefan Dietrich, Martin Feistle, Sebastian Hütter, Rémi Lafarge, Robert Laue, Xinyang Li, Christopher Schulte, Daniel Spies, Ferdinand Thein, Frank Wendler, Malte Wrobel, Julian Rozo Vasquez, Michael Dölz, Sebastian Münstermann
2023-05-31T07:12:16
http://arxiv.org/abs/2305.19601v1
Monitoring the evolution of dimensional accuracy and product properties in property-controlled forming processes ###### Abstract As recent trends in manufacturing engineering disciplines show a clear development in the sustainable as well as economically efficient design of forming processes, monitoring techniques have been gaining in relevance. In terms of monitoring of product properties, most processes are currently open-loop controlled, entailing that the microstructure evolution, which determines the final product properties, is not considered. However, a closed-loop control that can adjust and manipulate the process actuators according to the required product properties of the component will lead to a considerable increase in efficiency of the processes regarding resources and will decrease postproduction of the component. For most forming processes, one set of component dimensions will result in a certain set of product properties. However, to successfully establish closed-loop property controls for the processes, a systematic understanding of the reciprocity of the dimensions after forming and final product properties must be established. This work investigates the evolution of dimensional accuracy as well as product properties for a series of forming processes that utilize different degrees of freedom for process control. microstructure evolution, microstructure, process, product property, component performance, soft sensor, monitoring, non-destructive testing, closed-loop control ## 1 Introduction An increased effort on monitoring metal forming processes is observed recently (Awasthi et al., 2021; Biehl et al., 2010; Jayakumar et al., 2005; Kumar and Das, 2022; Ralph and Martin, 2020). As product tolerances and production standards become ever stricter within the context of sustainability as well as resource efficiency, forming processes not only need to be open-loop but rather closed-loop controllable (Awasthi et al., 2021; Bambach et al., 2022). To a large extent, forming processes already rely on closed loop controls for the inline adjustment of the processes, however, control of the components product properties is primarily open-loop (Hao and Duncan, 2011). This particularly means that the microstructure evolution (e.g. texture, dislocation density) during the forming process, which dictates the product properties (e.g. strength and stiffness) of the component, is currently largely disregarded. This causes various disadvantages. A forming process, that does not take property fluctuations in the workpiece into account, will eventually lead to time intensive postproduction of the component (meaning downstream process steps that only arise do to the mechanical properties not meeting the requirements) and/or considerable amounts of scrap (Alwood et al., 2016; Polyblank et al., 2014). Implementing closed-loop controls into the processes that get feedback on the microstructure evolution (thus product properties) will, hence, enable the forming processes to be designed more efficiently as well as sustainably. For a schematic overview of the problem as well as the proposed solution to establish a closed-loop control based on product properties, see Figure 1. However, this endeavor harbors several challenges. 
First and foremost, it must be established whether and how sensors and actuators (in this case meaning all parameters in the process responsible in influencing the system, including forming tools but not limited to them) can be positioned in the processes, so that a microstructure evolution, hence product properties, may be detected and optimally manipulated. In most forming processes, actuators will have a highly non-linear effect on the product properties such as the dimensional accuracy and mechanical properties, making the model definition a complicated task, prone to many uncertainties (Alwood et al., 2009; Allwood et al., 2016). Regarding the position of the sensors, it is often difficult to measure the target values directly in the points of interest, meaning spatial and time-dependent analytical relationships between the process parameters and the product properties must be derived (Alwood et al., 2016). Furthermore, the sensor selection is limited to non- and semi-destructive testing technologies. To overcome these challenges, one primary and initial step must be taken - establishing a systematic understanding of the interaction of the target values, i.e. the product properties and dimensional accuracy. This is especially challenging, yet essential for introducing a property-based closed-loop control in forming processes. This is mainly due to the fact that usually forming processes can form defined dimensions of a workpiece that are strictly tied to a set of product properties. Thus, it must be investigated if and how the evolution of properties can be decoupled from the evolution of dimensional accuracy. To detect the microstructure evolution (consequently the product properties) during the forming process, a suitable sensor and/ or measurement technique must be identified. As the workpieces will need to be ready for application non- or at least semi-destructive sensors will need to be used. Non-destructive testing refers to the examination of materials and/ or components to determine their fitness for services without the degradation of the wanted properties. Non-destructive testing methods can be subdivided broadly into five categories - ultrasonic, electric, penetrant, electrical as well as radiographic (Halmshaw, 1991; Hinsley, 1959; Hull and John, 1988). Semi-destructive testing methods refer to methods causing small cavities in the component, however having low impact on it. Methods referred to as semi-destructive include penetration methods such as ultrasonic contact impedance (UCI-) hardness tests or pull-off methods (Jaskowska-Lemanska and Przesmycka, 2020; Jaskowska-Lemanska and Sagan, 2019). Depending upon the tested materials and materials characteristics, a suitable non-destructive or semi-destructive testing method must be established for use. Figure 1: Schematic depiction of proposed closed-loop control based on product properties adapted from (Polyblank et al., 2014) Furthermore, the utilized sensors' signals must be able to detect the microstructure evolution that needs to be correlated to the to be controlled product properties. In this study, processes are investigated, where particularly the development of microstructural phases influencing hardness, strain hardening, strength, and residual stresses of steel components during forming represent product properties of interest. 
Hence, especially electromagnetic sensors, such as magnetic Barkhausen noise (MBN) and eddy current testing (ECT), offer a potentially useful inline measurement technique as they are particularly susceptible to a microstructure evolution. These experimental techniques are briefly discussed in the following: * The testing method of (MBN) can be described as follows: When an alternating magnetic field is applied to a ferromagnetic material (as is the case for most steels), the magnetization of the ferromagnet will occur discontinuously as well as abruptly. These discontinuous changes occur due to the spontaneous movement of so-called domain walls in the ferromagnet as they overcome microstructural hindrances, so-called pinning sites, such as inclusions, dislocations, grain boundaries etc. These jumps lead to electrical noises, that are then detected by the measurement equipment. Variations in the pinning site intensity versus the actual domain wall motion is then the foundation of the MBN method for characterizing the microstructure evolution, as this will ultimately lead to effects on the MBN characteristics (Ghanei et al., 2014; Perez Benitez et al., 2020; Wang et al., 2012). Current MBN testing equipment include but are not limited to the 3MA-II sensor technology developed by Fraunhofer Institute for Nondestructive Testing (Fraunhofer Institute for Nondestructive Testing IZFP), the Rollscan-Barkhausen noise signal analyzers from developer Stresstech (Stresstech Oy) as well as the QASS \(\upmu\)magnetic testing equipment from QASS developers (QASS GmbH). * During ECT a coil produces an alternating magnetic field. When a conductive material is subjected to this magnetic field, so called eddy currents flowing through the material in closed loops will be produced. These eddy currents then entail a secondary electromagnetic field which counterposes the primary electromagnetic field from the coil, hence causing measurable changes in the electromagnetic field of the coil. If a tested material contains discontinuities in the microstructure, the flow pattern of the eddy current is disturbed which in turn also influences the electromagnetic field of the coil (Ghanei et al., 2014; Halmshaw, 1991; Hinsley, 1959; Hull and John, 1988). Hence, ECT also offers a suitable method for the detection of the microstructure evolution during forming. Current ECT equipment includes but is not limited to the EddyCation Professional system developed at University of Magdeburg (Methodisch-Diagnostisches Zentrum Werkstoffpufung e.V) and Eddy-Current Vector sensor (working title) developed at TU Chemnitz. As it is now established, which inline measuring equipment may be used, it is necessary to identify a method that allows the description of the relationship between the now detectable process-induced evolution of the measurand and the product property that shall be monitored. So-called soft sensors pose as such a method. The term soft sensor is a fusion of the words "software" and "sensor". They typically consist of an arrangement of at least one sensor or measurand respectively in combination with a mathematical modeling approach. This combination is then used to establish a relationship between the measurand and the sought variable (Becker and Krause, 2010; Fortuna et al., 2007; Jiang et al., 2021; Kadlec et al., 2009). 
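To make the soft-sensor idea tangible, the following Python sketch shows a purely data-driven ("black box") soft sensor of the kind described above: a regression model is calibrated offline against a handful of reference measurements and then maps inline electromagnetic measurands to the sought product property. The feature set, the calibration values, and the choice of a Gaussian-process regressor are illustrative assumptions and do not correspond to any of the specific sensor systems discussed later.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical calibration data: each row = [relative permeability, magnetic anisotropy,
# surface temperature in deg C]; targets = strain hardening (flow-stress increase in MPa)
# obtained from destructive reference tests during commissioning.
X_cal = np.array([[120.0, 0.05, 25.0],
                  [115.0, 0.12, 27.0],
                  [104.0, 0.22, 31.0],
                  [ 95.0, 0.31, 35.0]])
y_cal = np.array([40.0, 95.0, 160.0, 230.0])

# Data-driven soft sensor: measurand -> sought product property.
soft_sensor = make_pipeline(StandardScaler(), GaussianProcessRegressor())
soft_sensor.fit(X_cal, y_cal)

def estimate_strain_hardening(permeability, anisotropy, temperature):
    """Inline estimate used as the feedback signal for a property control loop."""
    return float(soft_sensor.predict([[permeability, anisotropy, temperature]])[0])

print(estimate_strain_hardening(110.0, 0.18, 29.0))
```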
Soft sensors have found their application in several industries and continue to gain popularity in all kinds of disciplines, such as robotics (Goh et al., 2022; Jaffer and Young, 2009; Zou et al., 2021), medicine and pharmaceuticals (Arpaia et al., 2021; Dahiya et al., 2023; Gyurkes et al., 2022; Hwang et al., 2022; Qiu et al., 2022) as well as various engineering disciplines as petrochemical process design (Morais et al., 2019), autonomous driving (Sekhar et al., 2022) or even estimation of cement fineness (Stanisic et al., 2015). Dependent upon the to be monitored parameters, the industrial process and thus the resulting system, a suitable modelling approach must be identified. Generally, soft sensors can be differentiated broadly into two branches - model-driven and data-driven soft sensors. Model-driven soft sensors, also called white box models, are usually based on the underlying physical and/ or chemical framework of the process, meaning they have complete phenomenological knowledge of the process (Fortuna et al., 2007; Jiang et al., 2021; Kadlec et al., 2009). However, when the a-priori knowledge of the process, thus system is insufficient, or the modelling is disproportionately complicated, data-driven soft sensors, also called black box models, become relevant. They rely purely on empirical observations of the system. In between these two categories, many combinations can be formulated, taking advantage of these two extrema. Generally speaking, soft sensor design is made up of three stages: (1) data and information collection, (2) design and (3) implementation (Fortuna et al., 2007; Jiang et al., 2021; Kadlec et al., 2009). Thus, the quality of the data must be established, meaning the positioning of the sensors and actuators must be chosen in such a way, that the effects of missing data, outliers etc. are minimized. Secondly, the model type has to be identified. The model type is dependent upon the systems nature and can be summarized by three parameters particularly: the linearity, the dynamics and last but not least the degree of intelligence of the model. Lastly, the selected model needs to be implemented (Fortuna et al., 2007; Jiang et al., 2021; Kadlec et al., 2009) In the authors' case, particularly for the first two design steps, it must be determined how the measurand evolution and thus the product properties are influenced by the forming processes. Meaning, for the studied forming processes, how the formed geometry that is usually tied to a defined set of product properties may be decoupled from the evolution of relevant properties, as there is a lack in systematic understanding of decoupling strategies in different forming processes. Creating this understanding, it will ultimately allow the manipulation of the product properties during forming using different degrees of freedom (DOF), however still leading to the targeted dimensional accuracy of the workpiece. Hence in **Chapter 2**, this paper will introduce and discuss options for the decoupling of dimensional accuracy and product property evolution during the forming processes. In **Chapter 3**, an overview on the anticipated decoupling strategies for selected forming processes will be given. **Chapter 4** consists of the detailed case studies of each selected forming process and their respective decoupling strategy. Finally, **Chapter 5** will conclude this study. ## 2 Strategies for decoupling of geometry and property evolution during forming Product properties determine the application of the workpiece. 
Hence, it is important to first and foremost know the product properties of a component and, furthermore, to be able to influence them according to the required characteristics. During forming, the semi-product undergoes plastic deformation, leading to a permanent change not only in shape but also in the product properties. To understand the product property evolution during forming, the microstructure evolution that occurs must be investigated, as the microstructure dictates the product properties (Hornbogen and Warlimont, 2001). This holds true in particular for the strain hardening, which is mainly based on the increased dislocation density resulting from the forced plastic deformation. During forming, microstructural parameters as the density and distribution of dislocations, phase distribution as well as grain shapes and orientations, as well as damage in the material are affected, which in turn account for the final product properties as the strength, strain hardening behavior, hardness, and so on. This microstructure evolution during forming can be influenced, namely, by the temperature, the strain rate, and the state of stress. The forming temperature, for example, has significant influences on the yield strength of the material. Generally, it can be said that the yield strength decreases with increasing forming temperature. For body-centered cubic metals, the strain hardening behavior of the material is not strongly influenced by an increase in temperature. However, in face-centered cubic metals it is due to the impeded movement of the dislocations with a decrease in temperature (Bleck, 2011; Doege and Behrens, 2010; Hornbogen and Warlimont, 2001). An increase in strain rate leads to an increase in yield strength and tensile strength, whereas uniform elongation decreases. This holds true for strain rates below \(\approx\) 10 - 2 1/s. Whenever, this critical value is exceeded, the heat exchange between the workpiece and surrounding is not sufficient. This leads to a heating of the component, which in turn again leads to a decrease in yield and tensile strength (Bleck, 2011; Doege and Behrens, 2010; Hornbogen and Warlimont, 2001). Furthermore, as a result of the forming process, the stress state, state of damage as well as the texture in the component changes. In terms of stress state, hydrostatic stresses must be mentioned as they do not particularly influence the level of plasticity, though may be used to suppress or foster damage evolution. However, they do lead to a change in microstructure in distorting the lattice, hence, favoring diffusion (Gao et al., 2011). This shows that the macroscopic product properties are dictated by microscopic occurrences that are strongly dependent upon external factors. Hence, the reciprocity of process, microstructure and product properties needs to be fully understood to not only open-loop control processes but rather to closed-loop control them. Several studies regarding the microstructure evolution of steels during various processes have already been conducted. Just to name a few, the authors in (Bontcheva and Petzov, 2003) introduce a mechanical model for the numerical simulation of die forging, within which the microstructure evolution is coupled to thermo-mechanical phenomena. Reimann et al. (Reimann et al., 2019) use machine learning algorithms to numerically model the mechanical response of various microstructures subjected to loads based on representative volume elements and crystal plasticity modeling. Yang et al. 
(Yang et al., 2010) propose a model simulating the microstructure evolution during induction heating for the austenitization of steels based on cellular automata. The authors in (Salehiyan et al., 2020) study the microstructure evolution during deformation of a quenching and partitioning steel (QP980), and Vajragupta et al. (Vajragupta et al., 2020) present a micromechanical model for the mechanical behavior of a DP600 steel during sheet metal forming. Investigations utilizing these approaches to systematically influence the interaction of the macroscopic product properties and the forming dimensions for the implementation of closed-loop process controls are scarce to non-existent. It is precisely in this research gap that this study aims to contribute. Ultimately, this newly established understanding will not only allow the interaction between processes, microstructures and product properties to be described. Furthermore, it allows the extension onto the component performance, meaning the loading capacity of a component in the load case relevant for the design relative to the weight of the component. ## 3 Theory and Modelling of Selected Forming Processes In the previous two chapters, it has been established that the described phenomena occur on different scales and need to be fully understood. However, starting on the smallest scale, detailed descriptions of the microstructure evolution, which determines the product properties during forming and is used for the implementation of inline closed-loop property controls, are still to be proposed. This is where this study aims to make an impact. By giving a detailed description, for selected processes (see **Chapter 4**), of methods to influence the interaction of dimensional accuracy and product properties, a new engineering foundation for closed-loop property controls is laid. The processes outlined are: 1. Skin-pass rolling: Skin-pass rolling is a process used to set the desired surface finish of strips. It uses textured rolls to imprint surface asperities on strips with minimal thickness reduction. Conventional methods use a single roll pass and adjust the roughness transfer ratio by changing process parameters such as the thickness reduction. However, this limits flexibility and the range of achievable product properties for subsequent processes such as painting and sheet metal forming. In this process, the roughness parameters of the strip, \(R_{a}\) and \(RP_{c}\), are the surface properties of interest, as they influence the friction coefficient, the paint adhesion as well as the paint appearance in the following processes. These surface properties are independently controlled by utilizing two roll stands and the corresponding strip tensions and are measured by integrating a contactless roughness sensor. 2. Flow-forming: Flow-forming is an incremental forming process used to produce rotationally symmetrical hollow components with high dimensional accuracy, for example, in drive technology or in the aerospace industry. The process is characterized by high cost-effectiveness compared to metal-cutting processes. In addition, the components have a very high surface quality and improved mechanical properties due to the process-related high strain hardening. 3. Open-die forging process: According to DIN 8583, the open-die forging process belongs to the free-forming processes (Deutsches Institut fur Normung e.V.). In this process, the workpiece can be moved and formed freely between two open dies. 
This method of open-die forging, therefore, offers an ideal basis for locally influencing the microstructure development during the forming process. This is intended to improve the creep and fatigue properties. For the microstructure transformation, the semi-finished product must be heated locally. Therefore, an appropriate tool has to be designed that allows conductive heating during the open-die forging process. Targeted control can be used to decouple the development of dimensional stability and product properties by controlling the microstructure evolution. 4. Thermomechanical Tangential Profile Ring Rolling: Tangential profile ring rolling is a highly efficient process for manufacturing ring-shaped parts such as bearings. By applying specific process parameters with regard to temperature and rate of deformation, it is possible to affect the microstructure, strength, and surface hardness of the final product during the rolling process, without requiring secondary heat treatment. Due to the complex interactions of initial material state, material physics and machine, a closed-loop control is needed to achieve the desired final state. 5. Punch-hole-rolling: The punch-hole-rolling process is an incremental sheet forming process creating a rimmed hole with a cylindrical interior surface in the sheet plane. This shape can be applied for example as a fixture for bearings in machine parts of logistic systems or automotive drivetrain components. To keep a maximum of flexibility with different bearing types and sizes the processing steps of punching and rolling must be tuned to reach required geometric tolerances, microstructures and retained strength and ductility of the part. A sufficient balance and control of these properties is essential to guarantee reproducible and durable metal sheet products from the punch-hole-rolling process. 6. Reverse Flow-forming: Reverse flow-forming is an incremental process that offers flexibility and efficiency. Through an intended wall thickness reduction, it allows the production of tubular parts with excellent shape, dimensional accuracy, and surface qualities that meet the stringent requirements of industries such as aerospace. Applications include drive shafts for jet engines and helicopters. A unique aspect of reverse flow-forming is that the material flows in the opposite direction of the roller tool movement. This process is difficult to predict in terms of stress and strain distribution and is affected by many factors such that adjusting local workpiece properties, as microstructure profile, remains a challenge. 7. Freeform bending: Freeform bending with movable die is a process allowing the bending of complex geometries without having to change the bending tool. The process finds its application in the automotive and aerospace industry as well as in energy and chemical production. All these industries, however, are facing problems with post processing steps of the freeform bent parts. Thus, product parameters of interest are residual stresses, strength as well as the induced plasticity during bending as these particularly dictate further processing steps as well as service application of the component. 8. Press hardening: Multi-stage hot sheet metal forming allows the forming of complex shaped press-hardened components at high stroke rates. 
Initially, rectangular blanks are heated to the austenitization temperature \(T_{\text{y}}\) by means of induction heating, whereafter, a temperature profile is set using resistance heating and cooling by compressed air from flat fan nozzles. In the first forming stage, the blank is stretch-formed into a hat shaped profile followed by die bending, where one sidewall of the hat shaped profile is bent. However, product properties that are influenced by thermal and mechanical influences, such as hardness and sheet thickness, are difficult to predict, thus limiting the application of the process. As types of forming processes are manifold, they can further be categorized regarding their forming stages, whether they are hot formed or cold formed and what kind of material is used. The authors propose the following categorization, see **Figure 2**, for the forming processes based on which general insights for an implementation of a closed-loop property control may be gleaned in **Chapter 5**. The numbers in the pictograms correspond to the numbering above. Figure 2: Categorization of forming processes according to forming stages, temperature and Materials investigated ## 4 Case Studies and Results ### Skin pass rolling (Single-stage, Cold, Ferromagnetic Material) Skin-pass rolling is a rolling process that is typically used to achieve a specific surface finish on strips after they have been subjected to conventional cold rolling. This process uses rolls with textured surfaces to imprint a certain roughness on the strips, resulting in a minor reduction of their thickness. The textured surfaces on the rolls are created using various techniques such as electrical-discharge texturing (EDT), shot-blast texturing (SBT), or laser texturing (LT), and can be either stochastically or deterministically distributed. The surface roughness of the resulting sheets, which is characterized by parameters such as the average roughness \(R_{\text{a}}\) and peak number \(RP_{\text{c}}\), plays a crucial role in determining the product properties of the sheets, including their tribological properties in the subsequent forming process (Emmens) paint adhesion, and paint appearance after painting (Scheers et al., 1998). The effectiveness of the surface texturing process is determined by the roughness transfer ratio, which compares the outgoing roughness on the strips to the roughness on the rolls. Conventionally, only one rolling pass is employed in skin-pass rolling. To achieve the desired outgoing roughness on strip surfaces, the roughness transfer ratio is to be adjusted by changing the process parameters such as thickness reduction. To save the expense of manufacturing new textured rolls and reduce labor costs for roll changing, the roughness of the work rolls cannot be influenced. Therefore, in conventional skin-pass rolling, thickness reduction is always coupled with the final surface roughness of strips, which reduces the flexibility of the process. Moreover, the \(R_{\text{a}}\) and \(RP_{\text{c}}\) cannot be controlled separately, thus, the achievable range of product properties is restricted as well. In this study, a novel approach for the skin-pass rolling process is presented. The system features a two-stand rolling configuration, comprising a pair of untextured and textured (EDT) rolls respectively. The strip tensions for these two stands are regulated by three dancer rollers positioned before, between, and after the rolling stands, as illustrated in **Figure 3**. 
In this system, the maximum rolling force is limited to 50 kN; the tension adjustment ranges of the three dancers are [50, 600] N, [50, 1500] N, and [50, 1500] N, respectively. Additionally, rotary encoders and thickness sensors are employed to monitor the position and thickness of the strip, and a contactless roughness sensor is installed at the end to facilitate real-time optimization of the surface texturing model (\(R_{\text{a}}\) and \(RP_{\text{c}}\)). By manipulating the strip tensions (Schulte et al., 2021b) and controlling the distribution of the total height reductions over the two passes, this system offers greater control over the final product properties of the strip. Furthermore, the use of the textured (EDT) rolls and strip tension control, allows to optimize the surface finish while keeping the thickness reduction to a minimum. To model the rolling process and the associated geometry change and resulting properties, a total of three models were developed or implemented, capable of predicting the resulting strip thickness (Wehr et al., 2020), roll stand deflection (Schulte et al., 2021a), and strip roughness (Schulte et al., 2021b) at the exit of each individual rolling stand. By incorporating both roll stands and adjusting the strip tension, the total height reduction can be divided into two passes utilizing distinct roll surfaces. This approach not only enhances control over the resulting strip roughness, but also enables manipulation of the stress profile in the roll gaps, thereby affecting the neutral point and contact stress level. As a result, the imprinting in the roll gap can be independently controlled for both strip geometry and surface finish. Figure 3: Schematic depiction of skin pass rolling. A superordinate adaptive control system is implemented to determine the optimal strip tensions and desired strip thickness for the first stand based on the desired outgoing strip thickness and roughness. To achieve this, a nonlinear model of the rolling mill is used to create an optimal control problem (OCP), which balances the deviation of the controlled variables and the change of the manipulated variables. To solve this OCP in real-time, it is formulated as a sequential quadratic problem and iteratively solved using the qpOASES solver. The calculated references for strip tensions are then applied to the rolling mill, while the desired thicknesses for the two rolling stands are supplied to a subordinate strip thickness control scheme. The decoupling of the strip thickness \(h_{2}\) and strip roughness \(Ra_{2}\) could be demonstrated in the following experiment, see **Figure 4**(Schulte et al., 2023). Here, the total height reduction was divided between two rolling stands \(s_{1}\) and \(s_{2}\) in order to control the desired roughness \(Ra_{2,des}\). ### Flow-Forming (Single-stage, Cold, Ferromagnetic Material) Incremental forming processes (e.g. metal spinning, shear forming or flow-forming) are used to produce mostly rotationally symmetrical components from flat blanks or tubes. During the forming, the semi-finished product rotates together with a mandrel and is formed by the roller feed (**Figure 5**). The control of such processes is usually path based, i.e. it is not possible to react to disturbance variables such as variations in the geometry of the semi-finished product, variations in material properties, or process disturbances (friction, tool wear). 
The product property strain hardening is a positive result of the forming process, but it is itself only considered to a limited extent in the process design. The aim of this project is to take strain hardening into account in the design of incrementally manufactured components and to use it as a controlled variable during forming. For this purpose, strain hardening needs to be measured during the process. As there is no sensor that can measure the strain hardening during the forming process, a new sensor system has to be developed. This consists of several coils for determining the magnetic properties and an additional temperature sensor. A soft sensor calculates the strain hardening based on the magnetic properties using a material-dependent map. This is used as the basis for controlling the roller feed to achieve the desired strain hardening in the component. An advantage of the property-controlled flow-forming is that the forming machine itself can be used as the actuator, which means the approach can be applied on almost any conventional flow-forming machine. In addition, the control system improves the process result independently of external influences. Figure 4: Experimental results of an independent control of strip thickness (\(h_{2}\)) and roughness \(Ra_{2}\), using only the two roll stands, compare (Schulte et al., 2023). The new approach allows components with graded strength to be produced, which can be used for crash components. It also makes it possible to achieve the optimum ratio between process time and strain hardening of the component during process design. In general, flow-forming produces a high plastic strain at a low roller feed rate and a lower plastic strain at a high roller feed rate. This is due to the different flow proportions resulting from the different volumes displaced per mandrel revolution. This affects, for example, the pile-up volume of the material in front of the roller. In the case of a large pile-up volume, the stretching and subsequent compression of the material as it is formed under the roller to its final diameter results in greater plastic strain and thus greater strain hardening. The produced dimensional accuracy (e.g. final tube diameter) is therefore independent of the roller feed, as the final geometry is mainly determined by the gap between the roller and the mandrel. The roller feed can therefore be used as a DOF to control strain hardening, thus decoupling the evolution of dimensional accuracy and product property. In this way, the property-controlled process allows the production of parts with identical geometry but defined work hardening variations. To adjust the roller feed rate during the process, a real-time measurement of the plastic strain is required, independent of geometric variations like the working distance and wall thickness. The determination of the plastic strain, and the resulting calculation of the strain hardening via the flow curve within the soft sensor, is based on the coevolution of the mechanical and magnetic properties of ferromagnetic materials under plastic strain. With increasing plastic strain and the resulting increase in defects, the mechanical embrittlement coincides with a magnetic embrittlement, i.e. a reduction in the ability to adapt to external magnetic fields. In addition to the effect of magnetic embrittlement, the stress induced in the material changes the ability to magnetize in different directions, depending on the direction of the induced stress, via the Villari effect (Joh et al., 2013). 
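To make the last two paragraphs concrete, the following sketch pairs an illustrative flow curve with a simple proportional adjustment of the roller feed rate: the soft sensor's plastic-strain estimate is converted into a strain-hardening value via the flow curve, and the feed rate is lowered when more hardening is needed, exploiting the inverse feed-rate/plastic-strain relationship described above. The Hollomon-type flow-curve form, its parameters, the controller gain, and the feed-rate limits are assumptions for illustration, not values from the project.

```python
def flow_stress(plastic_strain, K=900.0, n=0.18):
    """Hollomon-type flow curve (illustrative parameters): maps the estimated
    plastic strain to flow stress [MPa], i.e. the strain-hardening state of the part."""
    return K * plastic_strain ** n

def adjust_feed_rate(feed_rate, strain_est, strain_target, gain=0.4,
                     feed_min=0.1, feed_max=2.0):
    """One step of a simple proportional controller for the roller feed rate [mm/rev].
    The sign follows the relationship described above: a LOWER feed rate produces
    MORE plastic strain, so the feed is reduced when the estimate is below target."""
    error = strain_target - strain_est
    new_feed = feed_rate - gain * error
    return min(max(new_feed, feed_min), feed_max)

# Example: the soft sensor estimates 0.25 plastic strain, the target is 0.40,
# so the controller reduces the feed rate to increase strain hardening.
print(flow_stress(0.25), adjust_feed_rate(0.8, 0.25, 0.40))
```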
In this regard, a multi-sensor system (see **Figure 5**) has been designed to characterize the magnetic embrittlement in the form of the relative magnetic permeability and the directional variations in the form of the magnetic anisotropy in a single measurement. The multi-sensor system also has to quantify and compensate for geometric influences such as the working distance, tilting of the sensor, and the curvature of the workpiece. For this reason, the central excitation coil is used for the method of inductance spectroscopy to independently quantify the working distance between the sensor and workpiece and the magnetic permeability to target magnetic embrittlement (Lu et al., 2018). The pickup coils use the excitation field of the sensor coil to quantify directional variations of the magnetic flux density on the circumference of the excitation coil, allowing the separation of the magnetic anisotropy and effects due to the tilting of the sensor (Wendler et al., 2021). The directional properties of the magnetic anisotropy are used to evaluate contributions from the Villari effect and mechanical stress. Figure 5: Approach of the property-controlled flow-forming. The sensor system contains a soft sensor with an internal material model that uses the information on the magnetic anisotropy and the changes of magnetic permeability to estimate the plastic strain as feedback for the control loop of the forming process (Lause et al., 2021). ### Open-die forging process (Multi-stage, Hot, Paramagnetic Material) Isothermal forging heats the tools to the workpiece temperature. This makes near-net-shaping feasible for materials with poor machinability. Advances in isothermal forging have recently made it possible to use turbine blades made of intermetallic titanium aluminides (TiAl) in serial production in commercial jet engines. However, this technology currently still faces many limitations. The cast stock material consists of coarse lamellar colonies (**Figure 6 (a)**) and has highly anisotropic forming properties. For this reason, the preform has to be oversized. This is necessary to ensure that the degree of deformation is high enough to transform the cast structure. With current processes and tooling technology, the material utilization of turbine blades is often less than 10%. However, the closed-die forging process does not allow for a localized control input into the workpiece. This research project aims to develop a multi-stroke, property-controlled open-die forging process in which the heating of the workpiece area to be formed takes place locally. In contrast to swaging, the open-die forging process allows the use of additional DOF for the closed-loop control of the property-determining globularization kinetics in the process. Such closed-loop control is needed because the globularization kinetics of a TiAl alloy are subject to both material- and process-related disturbance variables. The knowledge and results obtained can be transferred to other materials and forging processes. The prediction of the microstructure development in the workpiece is based on level set formulations and neural networks, which act as a soft sensor. The tests are carried out on a servo screw press of the type SHP-400 from Nidec SYS GmbH, Grafenau, Germany (Feistle et al.). The servo technology in conjunction with a machine control system adapted by Fraunhofer IGCV makes it possible to actively influence the course of the stroke curve \(k\) and thus the ram stroke speed \(v\) during the stroke. 
This represents a significant DOF in process control, since the prevailing strain rates \(\dot{\varepsilon}\) in the material and the globularization kinetics can be influenced. The globularization kinetics are also influenced by the temperature fields present in the process. Consequently, the process temperature \(T\), and thus indirectly the power \(P\) of the source of the conductive heating unit, is considered as another DOF. The globularization of the material structure (**Figure 6 (b)**) depends on the locally prevailing degree of deformation \(\varepsilon\). By integrating the feeder into the test setup (**Figure 7 (a)**) in combination with the variable mold closing height per stroke, the degree of deformation and the globularization kinetics can be actively influenced by additional DOF (**Figure 7 (b)**). Figure 6: (a) Initial Lamellar Microstructure; (b) Remodeled Globular Microstructure Figure 7: (a) Schematic Draft of the Open-Die Forging Tool with Integrated Actuators and Sensors; (b) Degrees of Freedom of the forming process The variable mold closing height per stroke makes it possible to define the DOF \(h\) of the height of the material to be formed, which is reflected in the stroke curve \(k\). The number \(n\) of blow sequences results from the desired residual material height \(s\) and the height \(h\) to be formed in the individual stroke as a function of the material start height \(s_{0}\). The positioning of the specimen at the beginning of a stroke sequence is described by the DOF \(x_{0}\). Between the strokes of a blow sequence, the specimen can be displaced by the feeder in the \(x_{1}\) direction. The displacement is described by the DOF \(l\). The amount of displacement per stroke sequence is kept constant. The number of displacements in the \(x_{1}\) direction represents the DOF \(i\). The geometry of the active elements of the tool can be considered as a further DOF, since it influences the degree of deformation as well as the material flow and thus the globularization kinetics in the material region to be formed. In this research project, the geometry of the active elements is not varied. The drive of the machine and the feeder are the initial actuators of the considered open-die forging process. The influence of the described DOF on the recrystallized material volume is visualized in **Figure 8** on the basis of a half-model of an FE simulation. For better comparability, the recrystallized areas are plotted against the sample geometry (bottom row of images, **Figure 8**). The residual material height \(s\) of the formed area after the forging process is identical in both simulations. The material and damage model as well as the recrystallization of the microstructure according to the globularization kinetics were considered by a subroutine (Bambach et al., 2016; Imran et al; Imran et al., 2020; Imran and Bambach, 2018) in the forming simulation. Simulation validation was performed by comparing the globularized material volume from simulation and experiment. In the forging process, the forging temperature is considered a central process parameter, but it is rarely varied during the process, so the power source is considered a subordinate actuator. In particular, the DOF of the impact sequence and the DOF of the forging temperature or the power of the heating unit allow the material properties to be actively influenced. 
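As a simple illustration of how the DOF listed above combine into a stroke plan, the following sketch derives the number of blow sequences and the feeder positions from \(s_{0}\), \(s\), \(h\), \(x_{0}\), \(l\) and \(i\). The relation \(n=\lceil(s_{0}-s)/h\rceil\) and the positioning scheme are simplifying assumptions for illustration only, not the exact process plan of this project.

```python
# Minimal sketch of a blow-sequence plan derived from the DOF named above.
# The relation n = ceil((s0 - s) / h) and the positioning scheme are simplifying
# assumptions for illustration, not the project's exact process plan.
import math

def blow_sequence_plan(s0, s, h, x0, l, i):
    """Return per-stroke tuples (sequence index, displacement step, feeder position, target height)."""
    n = math.ceil((s0 - s) / h)            # number of blow sequences to reach residual height s
    plan = []
    for seq in range(n):
        height_target = max(s0 - (seq + 1) * h, s)   # remaining material height after this sequence
        for step in range(i + 1):                    # i displacements in the x1 direction per sequence
            x1_position = x0 + step * l
            plan.append((seq, step, x1_position, height_target))
    return plan

for stroke in blow_sequence_plan(s0=30.0, s=12.0, h=5.0, x0=0.0, l=8.0, i=2):
    print(stroke)
```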
The boundary condition residual material height \(s\) and the degree of freedom of the geometry of the active tool elements primarily influence the geometry of the component to be produced. To control the manufacturing process, process data must be collected. In this research project, the previously discussed DOF are used to characterize the globularized microstructure during forming. The relevant process parameters are recorded by specially selected sensors. The displacement of the active element involved in the forming process is measured by a displacement sensor. The recorded displacement signal can be used to derive the stroke curve and, in particular, the movement speed \(v\) of the active element, the prevailing global strain rate \(\dot{\varepsilon}\) and the residual component height \(s\). The integrated line scanner allows the component geometry to be recorded along the \(x_{1}\) axis of the test specimen in the area of the forming die. By combining the characteristics of the displacement sensor and the line scanner, the deformation \(\varepsilon\) can be determined. The recorded characteristics of the pyrometer are used to control the power \(P\) of the heating unit. The three sensors discussed can be considered primary sensors for controlling an open-die forging process. Additional information is provided by the load cell and the infrared camera integrated in the test setup. The data from the load cell is used for process monitoring to prevent overstressing of the system or the forming tool. The integration of the infrared camera makes it possible to evaluate a wider range of materials in terms of the prevailing temperature distribution and the historical temperature profile. The coupling of actuators, sensors, machine control, and the implemented controller is shown in **Figure 9**. The figure also shows the integration of the soft sensor for predicting the globularization kinetics. A detailed description of the implementation and the interfaces has already been shown in (Feistle et al.). Figure 8: Influence of the DOF \(x_{0}\), \(h\), \(l\), \(i\) at constant process temperature and velocities on the globularized material volume The soft sensor is based on a Conditional Variational Autoencoder, which is an extension of a Variational Autoencoder. Through training, the generative model learns a latent representation of the input data (Kingma and Welling, 2013). A Variational Autoencoder consists of an encoder network and a decoder network. The encoder network maps input data to a distribution in latent space, and the decoder network maps points in latent space back to data space. The goal of a Variational Autoencoder is to maximize the lower bound of the log-likelihood of the data. The Conditional Variational Autoencoder conditions the encoder and decoder on additional information that the Variational Autoencoder does not. The conditioning is typically done by concatenating the information to the input of the encoder and decoder. This allows the Conditional Variational Autoencoder to generate samples with specific attributes, making it useful for tasks such as image generation and information retrieval (Sohn et al.). The neural network training data is based on FE forging simulations performed. The network can be used to estimate not only the globalization kinetics, but also the deformation of the test specimen and the prevailing degree of deformation \(\varepsilon\). 
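A compact sketch of a Conditional Variational Autoencoder in the form just described (encoder and decoder conditioned by concatenation, reparameterization, and a reconstruction-plus-KL loss) is given below. The layer sizes, the flattened field dimension and the process-condition vector are illustrative placeholders rather than the trained soft sensor.

```python
# Compact sketch of a Conditional Variational Autoencoder (CVAE) as described above.
# Layer sizes, the flattened field size and the process-condition vector are illustrative
# placeholders, not the trained soft sensor of the project.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=64 * 64, c_dim=6, z_dim=16, hidden=256):
        super().__init__()
        # Encoder maps the concatenated (input field, condition) to a latent Gaussian.
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        # Decoder maps (latent sample, condition) back to the data space.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim), nn.Sigmoid(),
        )

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior (negative ELBO).
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Illustrative usage: x is a flattened recrystallization field, c a vector of process DOF.
model = CVAE()
x = torch.rand(8, 64 * 64)   # batch of previous-stroke recrystallization fields, scaled to [0, 1]
c = torch.rand(8, 6)         # batch of process conditions (e.g. h, l, i, x0, T, v), scaled to [0, 1]
x_hat, mu, logvar = model(x, c)
loss = cvae_loss(x_hat, x, mu, logvar)
loss.backward()
print(f"negative ELBO for the batch: {loss.item():.1f}")
```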
The recrystallized material volume in the half-model, shown in **Figure 10 (a)**, of a forging sequence performed is superimposed in the next step with the input vector of the process data of the upcoming forging cycle and thus serves as an input variable for the Conditional Variational Autoencoder. **Figure 10 (b)** shows the simulated reference state of the recrystallized material volume of the final forging process. **Figure 10 (c)** shows the estimated distribution of the recrystallized volume by applying the Conditional Variational Autoencoder. A comparison between the FE model and the Conditional Variational Autoencoder shows that the deviation is in the single-digit percentage range. The mathematical modeling of the underlying dynamics is done via a level set \(\phi(t,x)\). This level set describes the region of the already transformed microstructure with the desired globularization \(X\). The evolution of the level set is described by a Hamilton-Jacobi equation (**Formula 1**) \[\frac{\partial\phi}{\partial t}+H(x,\nabla\phi)=0,\quad\phi(0,x)=\phi_{0}(x),\quad\phi(t,x_{0})=f(t,x_{0})\] By a suitable reformulation, the deviation from a desired reference state can be described by a system of hyperbolic partial differential equations. For this system, a control rule was determined in (Herty and Thein, 2022), which minimizes a disturbance exponentially in time. Figure 10: (a) Input Image of the Degree of Recrystallisation of the Previous Stroke; (b) Calculated Degree of Recrystallisation by FE-Simulation Mode 1; (c) Determined Degree of Recrystallisation by Conditional Variational Autoencoder Figure 9: Process Setup and Control-Loop in Open-die Forging for Local Control of Globularization Kinetics and Geometry ### Thermomechanical Tangential Profile Ring Rolling (Cyclical, Hot, Ferromagnetic Material) Ring rolling is a forming process for the fast and inexpensive production of ring-shaped parts. In particular, tangential profile ring rolling (TPRR) enables the production of bearing rings or similar parts with near net-shape geometry and dimensions and advantageous material properties due to a high degree of strain hardening at a high cadence. Further control of material properties can target specific phase compositions, grain sizes and/or additional boundary layer properties. To achieve this goal, the required thermomechanical treatment must be carried out during the forming operation, without disturbing the final geometric shape (Brosius et al.). This can only be achieved by decoupling the influence of various actuators with regard to their effect on the net shape and on the microstructure evolution and then employing a closed-loop predictive control system in the process. In particular, as the part's shape is generated by the tool geometry, the final geometry results from the positioning of the rolls at the end of the process. The path toward this final alignment gives an almost arbitrary DOF that can be used as a method to influence the material properties. Both the rolling force and the amount of forming per part rotation can be controlled separately within a fairly large process window and, even without additional actuators, already lead to different microstructural results due to partial dynamic recrystallization (Lafarge et al., 2023). An even larger influence can be reached by also controlling the forming temperature during the process. 
To avoid unnecessary complexity, and as no advantages could be gained from a re-heating of the part, this control is only done via active cooling of the part with compressed air. The overall forming process can then be controlled by cascaded closed-loop controls, each responsible for a different aspect of the process. This design also offers a lower complexity, as the property control loop can use separately controlled physical conditions such as a temperature value as its actuating variable instead of physical actuator inputs. This was validated for the cooling rate control loop. For this model-predictive closed-loop control, multiple sensors are required to evaluate the current process conditions. Apart from internal information such as the current advance, direct measurement of the force and surface temperature is performed. Due to variations of the material properties, the microstructure additionally needs to be evaluated in a non-destructive way during the forming process. In this application, this is done by an ECT system monitoring the outside of the rotating ring. This setup requires an additional step, as only near-surface information can be gained that way. This information is integrated into a state model that is constantly re-aligned to the measurement obtained by ECT sensing. This process model is based on a linearized characteristic model that is derived from a large number of FEM simulations of various process conditions, including those that cannot be (safely) reached in practical experiments. For the simulation of geometry and microstructural properties, the use of a pseudo plain strain FE model enables the determination of strain and temperature. This data can then be used for the computation of microstructure and hardness. For experimental validation of these concepts, a UPWS 31,5.2 ring rolling machine (built in 1986) from VEB Werkzeugmaschinenfabrik Bad Düben (Ficker et al., 2005) was heavily modified. This machine has been altered in several ways to allow for individual control of rolling force, rolling rate and an additional cooling by several compressed air nozzles allowing for air flow rates between 0 and 500 l/min (see **Figure 11(a)** for a schematic view of the machine kinematics). All of these can be controlled as functions of time. The forming tools consist of a main roll which pushes the workpiece towards a free-rolling mandrel. Main roll and mandrel together contain the desired ring profile. Additionally, sensors were added to measure tool displacement, rolling force, ring growth, ring surface temperature and microstructure evolution using a high-temperature ECT system. In this example, ring blanks were manufactured out of 16MnCr5 (1.7131, AISI 5115) tube material which was then austenitized at 900\({}^{\circ}\)C for 30 minutes. Rings were then transferred to the rolling machine and formed using various parameter sets. It is well-known that the final ring diameter is influenced by the main roll advance per rotation (Rolf and Perter, 1984). Additionally, **Figure 11(b)** shows that without further control, the diameter is also dependent on the cooling employed, chiefly because the flow stress is directly influenced by temperature (Lafarge et al.). The influence of those parameters on the achievable hardness is shown in **Figure 11(c)**. It can be seen that the desired product properties can be achieved over a range of parameters that leads to varying dimensional accuracy, but crucially these are not directly correlated. 
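The following minimal sketch illustrates, under strongly simplifying assumptions, how a linearized characteristic state model such as the one mentioned above could be re-aligned to the near-surface ECT measurements using a standard linear Kalman-filter predict/update step. The state definition, the matrices and the noise levels are illustrative and do not correspond to the identified process model.

```python
# Minimal sketch of re-aligning a linearized characteristic state model with ECT measurements,
# written as a standard linear Kalman-filter predict/update step. The state definition, the
# matrices A, B, H and the noise covariances are illustrative assumptions only.
import numpy as np

# State x = [surface temperature, near-surface hardness proxy]; input u = [cooling-air flow rate].
A = np.array([[0.95, 0.00],
              [0.02, 0.98]])          # linearized process dynamics per ring rotation (illustrative)
B = np.array([[-0.05],
              [0.00]])                # effect of the cooling-air actuator on the state (illustrative)
H = np.array([[0.0, 1.0]])            # ECT sensing observes only the near-surface hardness proxy
Q = np.diag([0.5, 0.1])               # process noise covariance
R = np.array([[0.2]])                 # measurement noise covariance

def kalman_step(x, P, u, z):
    # Predict with the linearized model, then correct with the latest ECT measurement z.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.array([850.0, 10.0]), np.eye(2) * 5.0
for z_meas, u_cool in [(10.5, 200.0), (11.0, 250.0), (11.8, 300.0)]:
    x, P = kalman_step(x, P, np.array([u_cool]), np.array([z_meas]))
print("re-aligned state estimate:", np.round(x, 2))
```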
Since following a specific cooling temperature profile is key to achieving the desired microstructural changes (Hutter et al., 2021), it is clear that it is indeed not possible to simply follow a prescribed forming profile, even with precise control. Instead, to decouple the generation of dimensionally accurate parts from the product properties, model-predictive control (Lafarge et al., 2021) of the actuators as proposed above is required. ### Punch-Hole-Rolling (Cyclical, Cold, Metastable Paramagnetic Material) The punch-hole-rolling process (developed by the Institute for Production Engineering and Forming Machines (PtU) in cooperation with the Institute for Applied Materials (IAM-WK)) is an incremental forming process, which deforms sheet metal, however, shows strong similarities to bulk forming operations. In contrast to ordinary sheet forming processes, the sheet thickness is changed significantly, creating a collar on both sides of the sheet starting from a preformed hole \(r_{0}\). The process can be used, for example, as a bearing seat for electric motors and offers the advantage that the bearing seat sits within the sheet plane and thus offers improved stability in the housing. A key product property is hardness, which improves the wear properties of the tribologically loaded bearing seat. The collar is formed by a roller which is inserted into the hole at the start of the process and is then moved in a spiral pattern, simultaneously increasing the diameter of the hole and the height of the collar in dependence of the desired final radius rf. The variable geometrical product properties are the diameter of the hole and the height of the collar. The variable process parameters are the radius of the starting hole \(r_{0}\), the radial position of the roller r(t), its radial feed rate \(f\)(t), and its angular velocity \(\omega\)(t), see **Figure 12 (a).** In order to be able to realize a closed-loop control of the forming progress in a way that allows the mutual variation of dimensional accuracy of the collar as well as the product property (e.g. hardness), a MBN sensor is used to indirectly measure the hardness in a non-destructive, rapid way. This measurement technique is currently implemented in the forming tool with a magnetization frequency of 125 Hz and a magnetization voltage of 9 V for in operando contactless measurements (sensor to workpiece distance 50 \(\mu\)m). Previous tests show that the radial feed rate \(f\)(t) has a significant influence on the collar height and the martensite fraction, whilst changing the rotational speed only significantly affects the martensite fraction. Therefore, a decoupling of the evolution of dimensional accuracy and product properties respectively hardness is feasible. In recent investigations, tests were carried out by means of punch-hole-rolling 3 mm thick sheets of TRIP (transformation-induced plasticity) steels 1.4301 and 1.4404 in a semi-factorial variation of the individual process parameters (\(r_{0}\), \(\omega\)(t), \(f\)(t)). Regarding the TRIP-Steels, it can be stated that the strain rate-dependent martensite formation is an advantageous property for process control (see **Figure 12 (b)**). It has already been shown in recent results that the martensite content depends on Figure 11: (a) Ring rolling machine kinematics and actuators. (b) influence of air flow (corresponding to cooling rate) and mandrel speed on resulting dimensions (from (Kumar and Das, 2022)). (c) influence of the same parameters on hardness achieved. 
the one hand on the degree of forming and on the other hand on the strain rate (Fonstein, 2015), as can be modelled by the Olson-Cohen model (Olson and Cohen, 1975a). Therefore, the martensite fraction can be controlled solely by varying the deformation speed at constant overall plastic deformation. In conclusion, this allows the hardness within the collar to be varied, as the martensitic transformation is accompanied by an increase in hardness. ### Reverse Flow-forming (Multi-Stage, Cold, Metastable Paramagnetic Material) Due to the large number of disturbance variables, it is difficult in reverse flow-forming to predict the stress and strain distributions during the production process as well as the final product geometry, dimensions and properties (Mohebbi and Akbarzadeh, 2010; Runge, 1994). In conventional reverse flow-forming, the focus is put on the geometry and dimensional accuracy, specifically regarding the wall thickness and the elongation of the workpieces (**Figure 13 (a)**). The present case study additionally aims to control local product properties. For this purpose, seamless tubes made of metastable austenitic steel such as AISI 304L (X2CrNi18-9 / 1.4307) are used as raw material. Since the austenitic phase is not in equilibrium, under specific temperature conditions a phase transformation from metastable austenite into \(\alpha\)'-martensite occurs during the deformation process (Nishiyama, 2014; Olson and Cohen, 1975b). This microstructure evolution entails a change of the magnetic and mechanical properties. On the one hand, the material transforms from paramagnetic austenite to ferromagnetic martensite. On the other hand, consequent to the increase of martensite in the microstructure, a strain-induced hardening process takes place (Knigge, 2015). The reverse flow-forming process offers various mechanical DOF: the infeed and feed rate of the roller tool, the rotational speed of the mandrel, and the intended number of passes. Additionally, the local workpiece temperature in the forming zone has a significant impact on the final product properties and can be controlled by means of active cooling or heating, e.g. with cryogenic cooling or induction heating. In this case study a PLB 400 spinning machine from Leifeld Metal Spinning GmbH (Ahlen, Germany) with a drive power of 11 kW and a maximum achievable spindle speed of 950 rpm was used. Here, the hydraulically driven cross support is the main actuator moving the roller tool in the axial \(x\) and radial \(z\) directions. The cross support is used to control the infeed depth \(z\) position, the feed rate \(f=\dot{x}\) and the number of passes during the flow-forming process (**Figure 13 (b)**). The angular position and rotational speed of the mandrel and the workpiece can also be adjusted by means of the spindle control. The machine setup is equipped with force and displacement sensors of the cross support, as well as an encoder to control the angular position of the mandrel. Figure 12: Schematic representation of the punch-hole-rolling process and its variable process parameters (a) and dependence of martensite fraction on strain rate and overall plastic strain for the steel 1.4404 (b). To control the martensite content while concurrently ensuring the workpiece geometry, two sensor systems are installed. Regarding the online monitoring of the workpiece dimensions, two laser distance sensors OM70 manufactured by Baumer GmbH (Friedberg, Germany) measure the actual wall thickness reduction. 
The amount of transformed \(\alpha^{\prime}\)-martensite is monitored by means of a soft sensor composed in its hardware by the micromagnetic testing system 3MAII by Fraunhofer IZFP (Saarbruecken, Germany). In this case, the sensor measures the maximum amplitude of the MBN (\(M_{\max}\)), which has been successfully correlated with the evolution of the strain-induced \(\alpha^{\prime}\)-martensite. The software part of the soft sensor consists of a mathematical model based on empirical data. This model computes the amount of \(\alpha^{\prime}\)-martensite in the workpiece from the micromagnetic measurements under the influence of process parameters like the feed rate. **Figure 14 (a)** illustrates the 3D surface that represents the mathematical model and its corresponding equation. Under isothermal conditions, the variation of process parameters like the infeed depth \(z\) and feed rate \(f\) of the roller tool produces a defined thickness in the final workpiece, which triggers the phase transformation in the microstructure with a determined amount of \(\alpha^{\prime}\)-martensite. To simulate both geometrical and material properties, a control-orientated system model was developed (Kersting et al., 2023). Here, an empirical, multivariate characteristic curve is used as a partial model to calculate the wall thickness resulting from the forming process. This was developed on the basis of the experimental investigations using a polynomial regression approach. For the computation of the martensite content, a material model based on experimental data was developed to calculate the amount of \(\alpha^{\prime}\)-martensite depending on the infeed depth and feed rate of the roller tool (**Figure 14 (b)**). A higher infeed depth and lower feed rate of the roller tool favor the phase transformation process of the material, due to the higher local strain concentrations during flow-forming. This directly describes the coupling between the workpiece geometry and the corresponding evolution of properties. The discussed models are applied in real-time as an observer and soft sensor for the property-controlled flow-forming process. Besides the online application, they are used to determine the process strategy and to design the control for decoupling geometry and workpiece properties. Figure 14: 3D surface representation of: (a) soft sensor model and (b) material model. Figure 13: Reverse flow-forming: (a) Process principle and (b) machine setup. In this case study, three different process strategies have been implemented to decouple the relationship between the workpiece geometry and the evolution of properties. Specifically, this entails achieving a change in product properties, namely the \(\alpha\)'-martensite content, without relevant variation of the wall thickness of the workpieces (Arian et al., 2021). The first strategy is carried out under isothermal conditions and consists of an intelligent, coordinated variation of the roller tool feed rate and the infeed depth. It uses the fact that a certain wall thickness reduction could be realized with different parameter combinations that result in different martensite fractions. The second strategy is also performed under isothermal conditions in a multi-stage flow-forming process in which the plastic deformation is carried out by means of several passes of the roller tool. 
The third strategy includes thermomechanical forming in which the \(\alpha\)'-martensite formation is locally favored in a single-stage process by means of the use of temperature actuators, namely by means of cryogenic cooling or heat induction (Arian et al., 2023). The application of these strategies enables the production of high-quality components with specific gradation of properties, excellent dimensional accuracy and surface quality. ### Freeform bending with movable die (Single-stage, Cold, Ferromagnetic Material) Freeform bending with movable die is a process allowing the bending of complex geometries without having to change the bending tool. As of now, set geometrical dimensions may be bent that are, meanwhile, tied to a strict set of product properties. However, for the sustainable and economically advantageous process design, a closed-loop control based on the product properties that may separately influence the dimensional accuracy must be established. As a foundation for the successful design of such a closed-loop control, the DOF at disposal in the process need to be combined in such a way that the evolution of product properties are decoupled from the dimensional accuracy (Maier et al., 2021). Product parameters that are especially of interest within this research, are residual stresses, strength as well as the induced plasticity during bending as these particularly dictate further processing steps and service application of the component (Maier et al., 2021; Stebner et al., 2021). The machine at hand is a 6-axis freeform bending machine, designed by J. Neu GmbH (Grunstadt, Germany). As the name suggests, this machine offers six DOF: translation of the die in two directions on the \(xy\)-plane, rotation of the bending die around \(x\)-, \(y\)- and \(z\)-direction as well as the movement of the feed along the \(z\)-direction. When in use, so-called rotation and deflection of the bending die are needed to produce freeform bent parts. In this case, deflection means the translation of the bending die along the \(y\)- or \(x\)-axis and rotation along the \(x\)-, \(y\)- and \(z\)-axis. For a better understanding, see **Figure 15 (a)**(Maier et al., 2021). These DOF indicate that the interaction of dimensional accuracy and product properties lies in the superposition of stresses. Perspectively, the forming temperature may present a further DOF for the decoupling of dimensional accuracy and product properties (Maier et al., 2022). The path of the bending die, depicted in **Figure 15 (b)**, is as follows: \(\overline{AB}\) is the unbent tube, made up of longitudinally welded tubes with the material P235 TR1, with a distance of 275 mm, followed by the undefined section \(\overline{BC}\) where the bending die moves into the maximum position. The maximum position is defined by the communicated final deflection and rotation angle. \(\overline{CD}\) is the distance, where only the feed is moving in \(z\)-direction and the bending die is held constant at full deflection and angle. In section \(\overline{DE}\) the bending die moves back to neutral position and \(\overline{EF}\) is again unbent tube. To influence the evolution of dimensional accuracy and product properties, the authors primarily utilize the superimposition of stresses, in introducing an innovative bending strategy - the non-tangential bending. During this bending strategy, the bending die at final deflection, meaning in section \(\overline{CD}\), is not tangential to the tube. 
This leads to the component being bent slightly over or under, due to the bending die being moved to differently combined deflections and rotations (Maier et al., 2021). Utilizing this novel bending strategy, it is now possible to bend tubes with the same geometry but independently adjustable product properties, namely the axial residual stresses, as can be seen in **Figure 15 (c)** and **(d)**. They show the axial residual stress state in the steel tubes before bending as well as after undergoing non-tangential bending. It can be seen that prior to the bending, the axial residual stresses scatter strongly, while after the bending they can be influenced in a targeted manner and decoupled from the dimensional evolution of the tubes, which lays the foundation for a property-based closed-loop control. Based upon this understanding of product property evolution and dimensional accuracy, a soft sensor deriving the relevant mechanical properties as well as a closed-loop control that influences the actuators of the machine may be derived. The soft sensor, in this case, relies on two types of sensors: ultrasonic contact impedance (UCI) and MBN measurements, based on which residual stress, strength and plasticity level predictions can be given under consideration of measurement and process noise using a gray-box, state-space modeling approach. A closed-loop control can then be designed which will intervene in the actuator positions based on the predictions of the soft sensor (Ismail et al., 2021, 2022; Stebner et al., 2021; Stebner et al., 2022). Figure 15: (a) Schematic depiction of the system of the freeform bending process with movable die and decoupling methods with the respective DOF, (b) path of the bending die position during the process, (c) axial residual stresses in the unbent state and (d) in the bent state as validation of decoupling product property evolution from the dimensional accuracy. ### Press Hardening (Multi-stage, Hot, Ferromagnetic Material) In press hardening, the thermo-mechanical interactions and their effect on the final product properties are complex and therefore difficult to predict (Neugebauer et al., 2012). For a multi-stage press hardening process, the number of possible interactions increases (Demazel et al., 2018), as well as the number of disturbance variables. With the aim of being able to set the product properties in multi-stage press hardening, here using the example of hardness and thickness distribution, a closed-loop control is utilized, allowing the thermo-mechanical interactions and the disturbance variables to be considered online. As a basis for the application of the control, multi-stage press hardening in a progressive die with the tool layout and actuators presented in **Figure 16 (a)** is assumed. The progressive die is being operated in an industry-grade servo press. Initially, 22MnB5 slit strip with a ferritic-pearlitic microstructure is precut at room temperature into rectangular blanks and a carrier strip. In the first process stage relevant for the process control, the rectangular blanks are heated to the austenitization temperature \(T_{\gamma}\) by means of induction heating. Next, in the cooling/heating stage, a temperature profile is set using resistance heating (in area 2) and cooling by compressed air from flat fan nozzles (in area 1). In the first forming stage, the blank is stretch-formed into a hat-shaped profile. 
A variable blankholder force \(F_{\text{BH}}\) enables adjustment of the sheet draw-in \(E\) and hence the sheet thinning \(\Delta s\). During the final process step, the die bending, one sidewall of the hat-shaped profile is bent. In addition, the stroke rate \(f_{\text{SR}}\) can be altered throughout the process. Depending on the stroke rate, different contact times between the hot sheet and the forming tools are obtained, whereby the effective quenching rate throughout the whole process is varied. Figure 16: (a) Process setup and control loop for multi-stage press hardening in a progressive die; (b) Hardness and thinning distribution of the formed part for different sets of control variables - results from numerical simulation of the multi-stage process. For feedback of the current product properties, a soft sensor is employed. The soft sensor is a cascade of three sub-models. First, based on the in-situ temperature measurements with thermocouples in the individual stages and a thermal imaging camera, the temperature distribution is reconstructed as a function of location and time (Kloeser et al., 2021). Then, based on the now-known temperature distribution, the second sub-model estimates the plasticity in the forming stages and thus the sheet thickness distribution. The third sub-model then uses the information on the thermal (temperature - 1st sub-model) and mechanical (plasticity - 2nd sub-model) history to predict the microstructure evolution and, as a result, the hardness distribution. The soft sensor output is then used to adjust the control variables according to the deviation from the target properties. A multivariable control, in this case the simultaneous control of the sheet thinning \(\Delta s\) and the hardness distribution in area 1 as well as area 2 of the formed part (other areas are neglected), can only be accomplished if the target properties can be set in a decoupled manner via the available DOF. To demonstrate how the decoupling can be achieved in the given process with the presented extended actuator setup, the hardness and the sheet thinning distribution of the final formed and quenched part resulting from different parameter sets of control variables listed in **Table 1** are shown in **Figure 16 (b)**. The data is obtained from a thermo-mechanical simulation of all process stages with the FEM software LS-DYNA (solver R12) using the material model Mat248. With the parameter sets 1 to 3, differing hardness distributions in areas 1 and 2 can be produced while maintaining the same sheet thinning (lower than the material thickness fluctuation of \(\pm\) 2 %). Varying the thermal degrees of freedom only, set by the actuators of stages 1 and 2, either minimal hardness in area 1 (para. set 1) or in area 2 (para. set 2) can be achieved or, alternatively, maximized hardness in both areas (para. set 3). On the other hand, by changing the mechanical parameters as well (para. set 4), controlling the sheet thinning is possible while maintaining the hardness distribution according to para. set 3 (same thermal control variables). This shows the potential for controlling various product properties in a decoupled manner. ## 5 Conclusion and Outlook In conclusion, this paper offers important insights on decoupling strategies regarding the dimensional accuracy and the product properties of components shaped by different forming processes. 
This lays an important basis for the implementation of closed-loop property controls within the processes that not only enhance but optimize the processes regarding both sustainability and economic aspects of their design. The following table lists the processes described in **Chapter 4** with the respective actuators/DOF at hand, separated into the categories of stress and temperature, which are used to influence the dimensional accuracy and mechanical properties. Furthermore, the individual product properties of interest and the investigated materials are listed.

| **Process** | **DOF in terms of \(\sigma\)** | **DOF in terms of T** | **Parameter of Interest** | **Material** |
| --- | --- | --- | --- | --- |
| (1) Skin pass rolling | Superposition of stresses due to strip tension | - | Surface properties | E-Cu58, DC01 |
| (2) Flow-forming | Feed rate | - | Strain hardening | S235+N |
| (3) Open-die forging | Forming speed | Conductive heating of test specimen | Microstructure grain size (lamellar \(\rightarrow\) globular) | TNM-B1 (Ti-43.5Al-4Nb-1Mo-0.1B) |
| (4) Thermomechanical ring rolling | Rolling force | Temperature rate | Strength, surface | 16MnCr5, 100Cr6 |
| (5) Punch-hole-rolling | Infeed, rotational speed | Infeed 100 \({}^{\circ}\)C (holding time/passive) | Strength, hardness | 1.4301, 1.4404 |
| (6) Reverse flow-forming | Feed rate | Forming temperature | \(\alpha\)'-martensite fraction | 1.4307 |

Based on **Table 2**, the following four questions arise:

1. Which actuators are suited best for controlling the target values dimensional accuracy and mechanical properties?
2. Which sensors are useful?
3. Which mathematical models are capable of modeling both geometry and properties?
4. How exactly is the decoupling realized?

The following tables summarize the findings regarding these four research questions, based on the selected choice of forming processes. Furthermore, recommendations regarding actuators in forming processes, sensor choices, mathematical models as well as decoupling strategies are formulated. 
**Which actuators are suited best for controlling the target values dimensional accuracy and mechanical properties?**

* **Single-stage forming:** Actuators using the superposition of stresses are well suited to influence both the dimensions and the mechanical properties.
* **Multi-stage forming:** The actuator temperature is especially important whenever phases and grain morphology are of importance, while the actuator stress is always suited for a manipulation of the dimensions.
* **Cold forming / ferromagnets:** All actuators can be used to influence the product properties.
* **Meta-stable paramagnets:** The actuator temperature and the superposition of stresses are important for the respective setting of both dimensions and mechanical properties.
* **In summary:** Actuator \(\sigma\): especially relevant in cold forming for the manipulation of both dimensions (exploitation of the Bauschinger effect) and mechanical properties; the dimensions are always set via this actuator. Actuator \(T\): primary actuator in hot forming and always needed when microstructural morphology or phases in steel are relevant product properties.

**Which sensors are useful?**

* **Multi-stage forming:** Temperature sensors are always needed when the evolution of microstructural morphology and phases is relevant. MBN is suited when a plasticity-induced phase transition is to be monitored.
* **Cyclical forming:** When grain morphology or phases are parameters of interest, temperature sensors should be used. MBN and ECT sensors are also suitable; however, some sensors are not resistant to heat. Furthermore, for MBN, be advised that the magnetic parameters of the material will change when the Curie temperature is reached.
* **Paramagnets:** MBN cannot be used on paramagnets. Other sensors can be used.
* **Meta-stable paramagnets:** MBN can be used if the TRIP-induced phase transformation from metastable fcc austenite to the bcc phase is investigated; MBN may then be used to monitor the induced bcc phase fraction and to correlate phase fractions.
* **ECT sensors:** Well suited for all materials and small degrees of deformation.
* **Temperature sensors:** Advised to use if microstructure morphology and phase fractions are important.
* The authors furthermore advise to be aware of influences due to temperature on the sensors (not equipped to withstand high heat) and on the materials (ferromagnetic properties are lost at the Curie temperature).

**Which mathematical models are capable of modeling both geometry and properties?** (Summary of which mathematical models are suitable to model the evolution of mechanical properties and dimensions with respect to forming stages, forming temperature, and investigated material.)

* **Cyclical forming:** The modeling approach must be a gray-box or black-box model.
* **White-box models:** Not suited to fully model the product properties as they develop during the forming process, as there is no complete phenomenological understanding of the influences of the actuators on the product properties. Thus, the modeling approach must be at least based on gray-box modeling.
* **Gray-box models / black-box models:** Both are suited to model the development of the product properties with respect to the actuators in the process. Black-box models are especially suited when uncertainties are to be modeled.
* **Numerical simulation:** As soon as the temperature and the morphology of the grains are important, simulation models are important for modeling the influence of the actuators on the product properties.
* **Material:** Not a main influencing parameter for the choice of the modeling approach; rather, the product properties of interest and the actuators in the system are decisive. 
**How exactly is the decoupling realized?**

[MISSING_PAGE_POST]

* **Mathematical models:** For the mathematical modeling of the processes, at least gray-box modeling is necessary, as there is no full phenomenological knowledge of the systems available. Black-box models also offer a good modeling solution.
* **Material:** Ferromagnetic materials can be used in all systems with all types of sensors. Paramagnetic materials cannot be investigated using MBN, unless they are meta-stable and convert to a bcc structure.

Next to this newly established understanding of how actuators in forming processes influence the mechanical properties and the dimensional accuracy of the component, further research hypotheses arise that will need to be investigated for the final implementation of a closed-loop property control based on soft sensors:

* As product properties are always a function of the microstructure evolution, characteristic fields between the product properties with regard to the microstructure evolution induced by the forming processes can be derived.
* To establish a real-time closed-loop control, the temporal influences in the system must also be considered. That is, how does the microstructure, and thus the product properties, develop depending on time, at what point in time does the relevant actuator influence the product properties, and finally how quickly can measurements be taken. Considering the temporal influences will then allow the respective actuator to be controlled precisely at the correct point in time, manipulating the product properties to the desired characteristic.

The authors will work on these topics within the third funding period of the priority program 2183 and will publish their findings.

**Funding** This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- 424334318; 424334660; 424334154; 424334856; 42433859; 424337466; 424334423; 424335026.
Recent trends in the field of manufacturing engineering show a clear development towards sustainable production process design. Along with this development, the importance of monitoring technologies is growing. With regard to the monitoring of product properties, most processes are open-loop controlled, and the microstructure evolution that determines the final product properties is not taken into account. However, closed-loop controls that can adjust and manipulate the process actuators in response to the product properties have the potential to improve resource efficiency and reduce production costs. In most forming processes, setting the dimensions of a part also fixes its product properties. A successful implementation of closed-loop process control, however, requires a systematic understanding of the relationship between the dimensions and the product properties after forming. This work investigates the evolution of dimensional accuracy and product properties for forming processes with different degrees of freedom.
2309.10296
Minimum-length chain embedding for the phase unwrapping problem on D-Wave's advantage architecture
With the current progress of quantum computing, quantum annealing is being introduced as a powerful method to solve hard computational problems. In this paper, we study the potential capability of quantum annealing in solving the phase unwrapping problem, an instance of hard computational problems. To solve the phase unwrapping problem using quantum annealing, we deploy the D-Wave Advantage machine which is currently the largest available quantum annealer. The structure of this machine, however, is not compatible with our problem graph structure. Consequently, the problem graph needs to be mapped onto the target (Pegasus) graph, and this embedding significantly affects the quality of the results. Based on our experiment and also D-Wave's reports, the lower chain lengths can result in a better performance of quantum annealing. In this paper, we propose a new embedding algorithm that has the lowest possible chain length for embedding the graph of the phase unwrapping problem onto the Pegasus graph. The obtained results using this embedding strongly outperform the results of Auto-embedding provided by D-Wave. Besides the phase unwrapping problem, this embedding can be used to embed any subset of our problem graph to the Pegasus graph.
Mohammad Kashfi Haghighi, Nikitas Dimopoulos
2023-09-19T03:53:37
http://arxiv.org/abs/2309.10296v1
# Minimum-length chain embedding for the phase unwrapping problem on D-Wave's Advantage architecture ###### Abstract With the current progress of quantum computing, quantum annealing is being introduced as a powerful method to solve hard computational problems. In this paper, we study the potential capability of quantum annealing in solving the phase unwrapping problem, an instance of hard computational problems. To solve the phase unwrapping problem using quantum annealing, we deploy the D-Wave Advantage machine which is currently the largest available quantum annealer. The structure of this machine, however, is not compatible with our problem graph structure. Consequently, the problem graph needs to be mapped onto the target (Pegasus) graph, and this embedding significantly affects the quality of the results. Based on our experiment and also D-Wave's reports, the lower chain lengths can result in a better performance of quantum annealing. In this paper, we propose a new embedding algorithm that has the lowest possible chain length for embedding the graph of the phase unwrapping problem onto the Pegasus graph. The obtained results using this embedding strongly outperform the results of Auto-embedding provided by D-Wave. Besides the phase unwrapping problem, this embedding can be used to embed any subset of our problem graph to the Pegasus graph. Quantum Annealing, Phase Unwrapping, Pegasus, Embedding, QUBO ## I Introduction Two-dimensional phase unwrapping is the process of recovering unambiguous phase values from a two-dimensional array of phase values known only modulo \(2\pi\) rad. The measured phase is also affected by random noise and systematic distortions. This problem arises when the phase is used as a proxy indicator of a physical quantity, which is the time delay between two signals in the case of interferometric SAR (InSAR) [1] and can be used to extract accurate three-dimensional topography. As the phase is observable only on a circular space where all measured values are mapped to the range \((-\pi,\pi]\), the observed data must be mapped back to the full range of real phase values to be meaningful. For unwrapping, the sampling rate is assumed to be suitable to prevent aliasing, i.e., the absolute difference in phase between two adjacent points is assumed to be smaller than \(\pi\). The most commonly used unwrapping technique is based on network programming strategies that formulate the problem as a minimum cost flow (MCF) [2] problem. One of these solvers is the sequential tree-reweighted message passing (TRWS) algorithm [3]. However, since the InSAR images can be quite large--normally larger than \(600M\) pixels--the process of phase unwrapping via TRWS can take a prohibitively long time. Hence, we explore whether a quantum computing system could be a potential candidate for solving such a problem. Quantum annealing systems are able to solve problems in quadratic unconstrained binary optimization (QUBO) form. Any unconstrained quadratic integer problem with bounded integer variables can be transformed by a binary expansion into QUBO [4]. The phase unwrapping problem is a quadratic unconstrained problem and it can be mapped to a QUBO. InSAR images tend to be quite large, often exceeding \(600M\) pixels, requiring at minimum a 600M-qubit quantum annealer; such a machine is not currently available. 
To overcome present day technology limitations, we have developed a method where we partition the image then use quantum annealing on the individual partitions to obtain suboptimal labelling, and then use quantum annealing in a second phase to obtain labels that approach the ones obtained through classical methods. We have named our method "super-pixel decomposition". We have reported the preliminary results of the super-pixel decomposition method in [5], while in [6] and [7] we presented enhancements of the proposed method by utilizing additional (marginal) pixels in each of the sub-images, and refined our experimental analysis by using a larger dataset of synthetic images providing us with statistically more robust results. In our experimentation, we observed that the mapping of the problem on the annealing machine plays a crucial role in achieving improved performance. This has also been observed by the developers of D-Wave annealers [8] stating that there is "strong evidence that chain length plays an important role in performance". Achieving thus symmetrical embeddings with a minimum chain length promise an increased performance. In this work, we study a variety of embeddings for the phase unwrapping problem on D-Wave's Advantage architecture and experimentally study the impact of the chain length on the performance. One of the embeddings we have developed has chain lengths of one meaning that our embedding utilizes the direct links of the underlying Pegasus graph. The rest of the paper is organized as follows: In Section II we provide a background of the phase unwrapping problem and the Pegasus graph deployed on D-Wave's Advantage architecture, in Section III we explain our embedding methodology, in Section IV we present the experimental results, and we conclude with Section V. ## II Background ### _Phase Unwrapping Formulation_ Let \(\phi\), \(\varphi\), and \(k\) denote the unwrapped phase, the wrapped phase, and an integer label to be estimated. For the phase of a pixel \(i\), we have, \[\phi_{i}=\varphi_{i}+2\pi k_{i} \tag{1}\] Phase unwrapping, i.e. estimating \(\phi\), is an ill-posed problem. The Nyquist criterion [9] ensures that phase unwrapping can be performed correctly. The unwrapping problem can then be expressed as an optimization problem of the cost function [5], \[E=\sum_{(s,t)\in A}W_{st}\left(k_{t}-k_{s}-a_{st}\right)^{2}+\sum_{s\in A} \omega_{s}\left(k_{s}-a_{s}\right)^{2} \tag{2}\] where \(k_{i}\) are the labels that will determine the original phase as per (1), \(A\) is the set of pixels in the SAR image, \(W_{st}\) are weights defining the neighbourhood structure and \(a_{ij}\) are constants obtained from the image. The weights \(W_{st}\), \(\omega_{s}\), and the bias \(a_{s}\), are chosen heuristically and represent ad-hoc information one may have on the scene. ### _Quantum Annealing_ Quantum computational systems, such as the ones developed by D-Wave, use quantum annealing to locate the ground-state of an artificial Ising system [10], which is a QUBO problem. Any quadratic unconstrained optimization problem can be cast as a QUBO problem by expressing the integer variables in binary. Analytical and numerical evidence indicates that quantum annealing can outperform simulated annealing [11]. D-Wave Systems provides implementations of different quantum annealing systems, starting from the D-Wave One announced in 2011 [10]. We have used the D-Wave 2000Q_6 machine (\(2041\) qubits) and currently the D-Wave Advantage (\(5640\) qubits) machine [12]. 
An Ising Hamiltonian describes the behaviour of such a system as \[H_{p}=\sum_{i=1}^{N}h_{i}\sigma_{i}^{z}+\sum_{i,j=1}^{N}J_{ij}\sigma_{i}^{z} \sigma_{j}^{z} \tag{3}\] where \(h_{i}\) is the energy bias for spin \(i\), \(J_{ij}\) is the coupling energy between spins \(i\) and \(j\), \(\sigma_{i}^{z}\) is the Pauli spin matrix, and \(N\) is the number of qubits. Quantum annealing on this system is achieved by the gradual evolution of the Hamiltonian system [10], \[H\left(t\right)=\Gamma\left(t\right)\sum_{i=1}^{N}\Delta_{i}\sigma_{i}^{x}+ \Lambda\left(t\right)H_{p} \tag{4}\] As time passes, \(\Gamma\) decreases from 1 to 0 while \(\Lambda\) increases from 0 to 1. If the annealing is performed slowly enough, the system stays in the ground state of \(H(t)\) for all times, \(t\), ending up at the end of the annealing at the ground state of \(H_{p}\). The Hamiltonian in (3) can be rewritten in vector form as \(H\left(s\right)=s^{T}Js+s^{T}h\), in the form of a QUBO problem [13]. As used in the rest of this paper, the objective function is expressed in QUBO form in scalar notation, and is defined as follows: \[C\left(x\right)=\sum_{i}a_{i}x_{i}+\sum_{i<j}b_{i,j}x_{i}x_{j} \tag{5}\] where \(x\in\left\{0;1\right\}^{n}\) is a vector of binary variables and \(\left\{a_{i};b_{i,j}\right\}\) are real coefficients. Before an application problem can be solved on a quantum annealer, it must first be mapped into QUBO form. As a first step in transforming the InSAR problem into a QUBO problem, the non-binary-valued label \(k_{i}\) must be transformed into binary-valued variables. Let \(k_{i}\in\left\{0,\ldots,D_{i}-1\right\}\), where \(D_{i}\) is the number of allowed values (labels) for \(k_{i}\). This can be achieved by writing \(k_{i}\) in binary, which limits the number of binary variables required to represent \(k_{i}\). Let \(d_{i}=\left\lceil\text{log}_{2}D_{i}\right\rceil\) and \(k_{i}=\left\langle\mathbf{2},\mathbf{x}_{i}\right\rangle\) where the vector \(\mathbf{x}_{i}=[x_{i,d_{i}-1},\cdots,x_{i,1},x_{i,0}]\) represents the bits of \(k_{i}\) and \(\mathbf{2}=[2^{d_{i}-1},\cdots,2,1]\) is the vector of powers of two. Equation (2) can be written in QUBO form as: \[E=\sum_{(s,t)\in A}W_{st}\left(\sum_{i}b_{i}x_{i,t}-\sum_{i}b_{i} x_{i,s}-a_{st}\right)^{2}\\ +\sum_{s\in A}\omega_{s}\left(\sum_{i}b_{i}x_{i,s}-a_{s}\right)^{2} \tag{6}\] where \(b_{i}\) is the weighting coefficient for the binary variable \(x_{i}\) (\(b_{i}=2^{i}\) in the case of the binary encoding). Many problems can be formulated as QUBO to take advantage of quantum annealing, potentially converging faster than other techniques to an optimum solution [14]. ### _Chimera Network_ The Chimera graph is the underlying architecture of the D-Wave \(2000Q\_6\) system. A Chimera cell consists of 8 qubits located in two columns. Nodes in each column of a Chimera cell are connected to all nodes of the other column but have no connections to nodes within their own column. Figure 1 shows Chimera cells together with connections to neighboring cells [15]. In the D-Wave \(2000Q\_6\) system, there is a matrix of \(16\times 16\) Chimera unit cells in the whole graph. Unit cells are connected horizontally and vertically to adjacent unit cells. The Chimera graph has a degree of 6, as each node is connected to four nodes of its own cell and two nodes of adjacent cells. Figure 5 shows a more complete representation of the intra-unit-cell connections. 
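As a concrete illustration of how the objective (6) is assembled over the binary variables before any hardware embedding, the following minimal sketch builds the binary quadratic model for a toy image. It is not the authors' code: it assumes the `dimod` package from D-Wave's Ocean SDK, and the weights and constants \(a_{st}\), \(a_{s}\) are purely illustrative.

```python
"""Toy sketch: assemble the QUBO of Eq. (6) for a small image with dimod.
Weights and the constants a_st / a_s are illustrative, not real data."""
import itertools
import numpy as np
import dimod

n, d = 3, 2                             # illustrative n x n image, 2 bits per pixel
pixels = list(itertools.product(range(n), range(n)))
rng = np.random.default_rng(0)

W, omega = 1.0, 0.1                     # illustrative W_st and omega_s
a_st = {}                               # integer constants a_st (Eq. (15) of the appendix)
a_s = {p: 0 for p in pixels}            # illustrative single-pixel biases a_s

def bits(p):
    """Binary variables (i, j, q) of pixel p with coefficients b_q = 2**q."""
    return [((p[0], p[1], q), 2 ** q) for q in range(d)]

for (i, j) in pixels:                   # four-neighbour edges with random a_st
    for (di, dj) in [(0, 1), (1, 0)]:
        if i + di < n and j + dj < n:
            a_st[((i, j), (i + di, j + dj))] = int(rng.integers(-1, 2))

bqm = dimod.BinaryQuadraticModel('BINARY')

def add_squared_term(weight, terms, const):
    """Add weight * (sum_k c_k x_k - const)^2 to the BQM (x binary, so x^2 = x)."""
    for (v1, c1) in terms:
        bqm.add_linear(v1, weight * (c1 * c1 - 2 * c1 * const))
        for (v2, c2) in terms:
            if v1 < v2:
                bqm.add_quadratic(v1, v2, 2 * weight * c1 * c2)
    bqm.offset += weight * const * const

for (s, t), a in a_st.items():
    # W_st * (k_t - k_s - a_st)^2 with k written in its binary expansion
    terms = [(v, c) for (v, c) in bits(t)] + [(v, -c) for (v, c) in bits(s)]
    add_squared_term(W, terms, a)
for s, a in a_s.items():
    add_squared_term(omega, list(bits(s)), a)

print(bqm.num_variables, 'binary variables,', bqm.num_interactions, 'couplers')
```

The resulting `bqm` is exactly the kind of object that is then handed to an annealer through an embedding such as the ones discussed next.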
### _Pegasus Network_ The Pegasus graph is the underlying architecture of D-Wave's Advantage machine. This architecture provides more qubits and couplers in comparison to D-Wave's previous architecture, which deployed the Chimera architecture. A section of this architecture is shown in Figure 2. The Pegasus graph includes the Chimera graph as its sub-graph. Each Pegasus unit cell (shown with square grids in Figure 2) consists of three Chimera unit cells. The Pegasus graph has a degree of 15 and a nominal degree of 12 [12]. This architecture also includes \(K_{4}\) (complete graph of degree 4), and also \(k_{6,6}\) (bipartite graph of degree 6). ## III Methodology In this section, we summarize our methodology in mapping our phase unwrapping problem to the Pegasus network natively and also onto the Chimera graph symmetrically. ### _The phase unwrapping problem graph_ We want to map 2D images whose pixels have a maximum label of 3. The binary variables of these images can be specified using three coordinates; the first two coordinates denote the position of pixels in the vertical and horizontal directions. Specifically, we labeled the vertical coordinate as \(i\) and the horizontal coordinate as \(j\). The third coordinate, denoted as \(q\), is used to distinguish between the two bits of each pixel. \(q=1\) represents the Most Significant Bit (MSB), and \(q=0\) represents the Least Significant Bit (LSB). For example, \((i,j,q)=(2,3,1)\) represents the MSB of the fourth pixel of the third row in the image, as \(i\) and \(j\) start from 0. As we are using four-neighbor connectivity for unwrapping, the connections of each pixel with its top, bottom, left, and right pixels should be taken into account. Each pixel has 2 binary variables that need to be connected. Each of these variables is also connected to the binary variables of its 4 adjacent pixels. This results in a total of 9 connections per binary variable: one connection to its own pixel and two connections to each of four neighboring pixels. However, marginal pixels may exhibit fewer connections. Figure 3 shows the graph of our problem for an image with the size of \(4\times 4\). This graph can be expanded to larger sizes while maintaining the same structure. In the following two sections, we shall focus on the method of mapping the phase unwrapping problem (and any problem described by the graph in Figure 3) to two D-Wave architectures, namely the Chimera and Pegasus. These architectures deploy the Chimera and Pegasus interconnects discussed earlier. The focus of our approach is to obtain mappings that are as symmetric as possible with minimum-length chains. Fig. 1: Chimera unit cells and their connections. Green edges show connections between nodes of different unit cells. Fig. 3: The graph of phase unwrapping problem for a \(4\times 4\) image. Nodes are labeled with a 3-digit number. The first digit (i) corresponds to the column of the pixel, the second digit (j) corresponds to the row of the pixel (both start from 0), and the third digit (q) specifies which bit of the pixel is being referred to. Fig. 2: Pegasus graph. Unit cells are shown in green squared grids. The red graph is a Chimera unit cell in the Pegasus graph. ### _Mapping onto the Chimera graph_ As mentioned before, our problem forms a graph of degree 9. On the other hand, the degree of the Chimera graph is 6. Consequently, it is not possible to map the binary variables directly to Chimera nodes. 
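The degree-9 structure just described can be generated programmatically. A small sketch (assuming the `networkx` package; the image size is illustrative) builds the graph of Figure 3 for an \(n\times n\) image and confirms the stated connectivity.

```python
"""Toy sketch: the problem graph of Fig. 3 for an n x n image,
with two binary variables (i, j, q) per pixel and 4-neighbour connectivity."""
import itertools
import networkx as nx

def problem_graph(n):
    g = nx.Graph()
    for i, j in itertools.product(range(n), range(n)):
        g.add_edge((i, j, 0), (i, j, 1))          # the two bits of a pixel
        for di, dj in [(0, 1), (1, 0)]:           # right and bottom neighbours
            if i + di < n and j + dj < n:
                for q1, q2 in itertools.product(range(2), range(2)):
                    g.add_edge((i, j, q1), (i + di, j + dj, q2))
    return g

g = problem_graph(4)
degrees = [d for _, d in g.degree()]
# interior variables reach degree 9; marginal ones have fewer connections
print(g.number_of_nodes(), 'variables; max degree', max(degrees))
```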
Since a direct mapping is not possible, we instead chain multiple qubits in the Chimera graph and map the variables to those qubit chains. This is accomplished symmetrically in [6]. We mapped each pixel to one Chimera cell. Each binary variable of a pixel is mapped to a chain of four qubits inside a Chimera cell as illustrated in Figure 4[6]. As qubits need to be connected to each other in a qubit chain, and there are no connections between qubits of a single column in a Chimera unit cell, we mapped binary variables into the qubits of two rows in a Chimera cell. In Figure 4, links between qubits of a single binary variable (qubit chain) are shown in red while connections between two qubit chains corresponding to LSB and MSB of a pixel are shown in black [6]. Moreover, to provide the necessary connections between adjacent pixels, we changed the location of rows corresponding to the qubits of MSB and LSB for neighboring cells as shown in Figure 5. Therefore, we introduced two types of cells. In "type A" cells, qubits of the LSB are placed in the first and fourth row of a unit cell and qubits of the MSB are placed in the middle rows, while in "type B" cells, the second and the fourth row are considered for the LSB, and the remaining rows are associated with the MSB. Chimera cells alternate between "type A" and "type B" in columns and in rows. This symmetric arrangement provides us with all necessary connections between binary variables. In Figure 5, red links show connections between different binary variables [6]. ### _Mapping onto the Pegasus graph_ To map the problem graph onto the Pegasus graph, we start with a sub-image of size \(2\times 2\) and then continue mapping its adjacent sub-images until all pixels are mapped. Consider an image with sub-images as in Figure 6. We start by mapping the red sub-image. This sub-image is mapped to eight qubits of two Chimera unit cells: six qubits of one Chimera unit cell, and two qubits of another one, with the relative positions shown in Figure 7. In a sub-image of size \(2\times 2\), each pixel has two adjacent pixels, and consequently, its binary variables should be connected to those of adjacent pixels. In the mapping of sub-images, these connections are provided. For example, binary variables of pixel \((i,j)\) in Figure 6 should be connected to those of \((i,j+1)\), and \((i+1,j)\) inside the red sub-image, and in the mapping of this sub-image, these connections exist. All sub-images of size \(2\times 2\) can be mapped using the mentioned mapping. Pixels of a sub-image have connections with those of adjacent sub-images, and these connections should be provided in the mapping. We continue by mapping the sub-image to the right of the mapped one, which is the yellow sub-image in Figure 6. We can map this sub-image similarly to what we did for the red one. However, the left pixels of this sub-image are connected to the right pixels of the red one (i.e., pixel \((i,j+2)\) is connected to pixel \((i,j+1)\), and pixel \((i+1,j+2)\) is connected to pixel \((i+1,j+1)\)). Consequently, we map the yellow sub-image in a specific relative location with respect to the red sub-image to provide these connections. This is done by mapping the yellow sub-image on the lower left side of the red one, as shown in Figure 8. Fig. 4: Mapping a pixel onto a Chimera unit cell. Each binary variable of a pixel is mapped to a chain of four qubits [6]. Fig. 5: The manual embedding mapping onto the Chimera graph [6]. Fig. 6: Sub-images of size \(2\times 2\) shown in different colors. 
In Figure 8, red and yellow edges show connections inside the red and the yellow sub-images respectively, while the turquoise edges show the connections between pixels of the red sub-image and pixels of the yellow sub-image. To avoid congestion in the plots, the third coordinate representing the bit of each pixel is excluded in drawing the labels. We can map every other two horizontally adjacent sub-images with the same approach. Now that we have mapped two sub-images, we continue by mapping the bottom sub-images of those that were mapped (the blue and green sub-images in Figure 6). These two sub-images are also horizontally adjacent and we can map them the same as the previous adjacent sub-images. However, the top pixels of these two sub-images have connections with the bottom pixels of the previous sub-images (i.e., pixel \((i+2,j)\) is connected to pixel \((i+1,j)\), pixel \((i+2,j+1)\) is connected to pixel \((i+1,j+1)\), and so on). We map these two sub-images at a relative location with respect to the previous ones such that these connections are provided. This is accomplished by mapping them on the lower right side of the previously mapped sub-images, as shown in Figure 9. In Figure 9, the pair of blue and green sub-images are mapped similarly to the pair of red and yellow sub-images. Connections between blue and green sub-images are shown in magenta. These edges are like the turquoise edges, as both connect horizontally adjacent sub-images. Also, connections between red and blue sub-images as well as connections between the yellow and green sub-images are shown in olive. Red and blue sub-images are vertically adjacent and so are yellow and green ones. As a result, connections between red and blue sub-images are similar to those between yellow and green ones. A Chimera cell that was partially used in the mapping of previous sub-images is now completely used, as the mapping of the new sub-images matches the previous one. Four sub-images are mapped so far and we can continue the same procedure to map another four sub-images. The location of the next four sub-images follows the approach that we used to map adjacent sub-images. That is, the four sub-images to the right of the mapped ones are mapped in the lower-left direction, similar to what we did for mapping the yellow sub-image after the red one. Also, the four sub-images below the mapped ones can be mapped in the lower-right direction, as we did for mapping the blue and green sub-images after the red and yellow sub-images. Continuing this trend, we can map the whole image. In the end, the location of the pixels after mapping is congruent with their location in the original image. However, they are rotated by 45 degrees counterclockwise and then reflected over the y-axis. Fig. 8: Mapping of two horizontally adjacent sub-images onto the Pegasus graph. Red and yellow edges represent connections between pixels inside the red and the yellow sub-images respectively, while turquoise edges show the connections between adjacent pixels of those two sub-images. Fig. 7: Mapping a \(2\times 2\) sub-image onto the Pegasus graph. Fig. 9: Mapping four adjacent sub-images onto the Pegasus graph. Connections between the blue and green sub-images are shown in magenta while connections between the red and blue sub-images as well as connections between the yellow and green sub-images are shown in olive. Except for marginal sub-images, all other Chimera unit cells are fully used for mapping, as the mapping 
of sub-images matches together and forms a consistent graph of Chimera unit cells. ## IV Experiments In this section, we present experimental performance results of three different types of embeddings, i.e., the Pegasus native embedding (the one that we propose in this paper), the Chimera symmetrical embedding deployed originally on the Chimera network of the \(2000Q\_6\) machine, and ten automatically generated embeddings using D-Wave's Ocean tool. Table I shows the properties of these embeddings. Five Automatic embeddings (\(A1\), \(A2\),..., and \(A5\)) are generated for the Advantage machine and another five (\(A6\), \(A7\),..., and \(A10\)) for the \(2000Q_{6}\) machine. ### _Setup_ #### Iv-A1 Terminology RCLO: The Ratio of Chain-Length-One (RCLO) is the proportion of chains of length one in the population of chains in the embedding. It provides an additional measure of how compact the embedding is. #### Iv-A2 Configurations We used the default values for the annealing parameters in our experiments, i.e., annealing_time\(=20\mu s\) and number_of_reads \(=1000\). #### Iv-A3 Datasets The datasets consist of simulated (SAR) data with a medium noise level (SNR=10dB) and medium complexity (Perlin correlation=18). This dataset presents a large exploration space, with a wide spectrum that includes high-frequency data that present a challenge for the phase unwrapping process. The dataset includes \(5\) synthetic images with the size of \(10\times 10\) pixels, generated using a Perlin Noise Generator [16]. #### Iv-A4 Accuracy Metrics To determine how close two images (of identical size) are to each other, we use the _matching fraction_ metric, defined as the fraction of pixels that are identical in the two images. To evaluate the accuracy of our methods, we compare the obtained image to Noisy Unwrapped Ground-truth images, which are the images obtained by a sensor or synthetically by adding noise to synthetic noise-free ground-truth images [6]. ### _Results_ In terms of chain length, our proposed Pegasus native embedding is the optimum embedding with the lowest possible chain length. D-Wave's Ocean software could not find any other embedding with an average chain length of one or even close to one (Table I). Furthermore, for the \(2000Q\_6\) machine, our proposed Chimera symmetric embedding in [6] has the lowest chain length in comparison with the other five embeddings generated by D-Wave. As shown in Table I, average and maximum chain lengths of embeddings for the Advantage machine are lower than those of the \(2000Q\_6\) machine, as the Pegasus architecture has more connectivity than the Chimera architecture. The obtained accuracy for images of the dataset using seven embeddings (the Pegasus native, the Chimera symmetric, and five Automatics) for the Advantage machine is reported in Table II. An accuracy of 98% or more for all images and an average accuracy of \(99\%\) were obtained with our proposed Pegasus native embedding, which outperforms the other embeddings by far. In the Advantage machine, the Chimera symmetric embedding has a performance similar to the Automatic embeddings. The average obtained accuracy for all Automatic embeddings is \(59.64\%\), slightly better than that of the Chimera symmetric embedding. Figure 10 shows the average accuracy of all embeddings in the Advantage machine. However, for the \(2000Q\_6\) machine, as illustrated in Figure 11, the Chimera symmetric embedding outperforms the Automatic embeddings. 
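The chain-length statistics of Table I and the matching-fraction metric defined above can be computed directly from an embedding dictionary and from a pair of label images. A minimal sketch follows; the variable names and the example embedding are illustrative and are not the evaluation code used to produce the tables.

```python
"""Toy sketch: chain-length statistics (average, maximum, RCLO) of an embedding
and the matching-fraction accuracy metric."""
import numpy as np

def chain_stats(embedding):
    """`embedding` maps each problem variable to its chain (list of qubits)."""
    lengths = np.array([len(chain) for chain in embedding.values()])
    return {'average': lengths.mean(),
            'maximum': int(lengths.max()),
            'RCLO': float(np.mean(lengths == 1))}

def matching_fraction(labels_a, labels_b):
    """Fraction of pixels with identical labels in two images of equal size."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    return float(np.mean(a == b))

# Illustrative use: a native embedding in which every chain has length one.
native = {(i, j, q): [(i, j, q)]
          for i in range(10) for j in range(10) for q in range(2)}
print(chain_stats(native))                                          # average 1.0, RCLO 1.0
print(matching_fraction(np.zeros((10, 10)), np.zeros((10, 10))))    # 1.0
```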
The obtained results for the \(2000Q\_6\) machine are reported in Table III. Our experiments seem to indicate that short chain lengths result in better performance (accuracy). Thus the Pegasus native and the Chimera symmetric embeddings, having the shortest chains in the respective architectures, resulted in the best performance for our phase unwrapping problem. Similarly, from the automatic embeddings derived by D-Wave's Ocean tool, the ones with the shortest average-length chains (A2 and A6) had the best performance. However, the performance is not linearly related to the average chain length. As examples to the contrary, one can consider mappings A5 and A1, where A5 has a shorter average chain length (1.795 vs. 1.895 for A1) yet the performance of the A5 embedding is worse than that of A1 (41.4 vs. 64.4 for A1). Similar behavior can be seen for A10 as compared to A7. Finally, the Chimera symmetric embedding was used in both the Advantage (Pegasus) and the \(2000Q\) (Chimera) machines. Given that Chimera is a subgraph of Pegasus, one would have expected similar performance obtained from both architectures. Yet, the \(2000Q\) machine yielded consistently better performance than the Advantage one. This result is consistent with the results reported in [17] and needs further investigation. The above observation notwithstanding, the Advantage architecture outperforms the \(2000\)Q as its increased degree of connectivity results in much shorter chains. ## V Conclusion In this work, we proposed a heuristic mapping to embed the phase unwrapping problem onto D-Wave's Advantage architecture. This embedding is the optimal one in terms of chain length. The automatic embedding procedure in D-Wave's Ocean tool could not find any embedding with an average chain length close to ours. We experimentally showed that our embedding outperforms the others significantly. Our experiments also confirmed that a lower average chain length of an embedding on D-Wave's machines generally results in better performance for the phase unwrapping problem. Fig. 11: Average accuracy of different embeddings on \(2000\)Q_6 machine. Green and purple columns correspond to the Chimera symmetric and Automatic embeddings respectively. The error bars show the standard deviation. Fig. 10: Average accuracy of different embeddings on the Advantage machine. Red, green, and blue columns correspond to our Pegasus native, Chimera symmetric, and Automatic embeddings respectively. The error bars show the standard deviation. ## Appendix A Cost Derivation Denoting by \(\varphi_{i}\) the phase of pixel \(i\), and by \(\phi_{i}\) the wrapped phase of the same pixel, we can relate the phase and wrapped phases of pixels \(i\) and \(j\) as follows. \[\varphi_{i}=\phi_{i}+2\pi k_{i} \tag{7}\] and \[\varphi_{j}=\phi_{j}+2\pi k_{j} \tag{8}\] Further, due to the Nyquist criterion, and if pixels \(i\) and \(j\) are neighbouring, then \[\mid\varphi_{i}-\varphi_{j}\mid<\pi\,. 
\tag{9}\] or \[-\pi<\varphi_{i}-\varphi_{j}<\pi \tag{10}\] and using (7) and (8) \[-\pi<\phi_{i}-\phi_{j}+2\pi(k_{i}-k_{j})<\pi \tag{11}\] or \[-\frac{1}{2}<\frac{\phi_{i}-\phi_{j}}{2\pi}+(k_{i}-k_{j})<\frac{1}{2} \tag{12}\] Denoting the nearest integer (or round) function as \(nint(.)\), then \[nint(\frac{\phi_{i}-\phi_{j}}{2\pi}+(k_{i}-k_{j}))=0 \tag{13}\] Since \(k_{i}-k_{j}\) is an integer, then \[nint(\frac{\phi_{i}-\phi_{j}}{2\pi}+(k_{i}-k_{j}))=\] \[nint(\frac{\phi_{i}-\phi_{j}}{2\pi})+(k_{i}-k_{j})=0 \tag{14}\] Denoting \[a_{ij}\stackrel{\text{def}}{=}-nint\left(\frac{\phi_{i}-\phi_{j}}{2\pi}\right) \tag{15}\] then equation (14) can be rewritten as \[k_{i}-k_{j}-a_{ij}=0\Rightarrow k_{i}-k_{j}=a_{ij} \tag{16}\] This equation is the basis of the cost function, the optimization of which will produce appropriate values for the labels \(k_{i}\). The unwrapping problem can then be expressed as an optimization problem of the cost function \[E=\sum_{(s,t)\in A}W_{st}|k_{t}-k_{s}-a_{st}|\,, \tag{17}\] that is, \[\arg\min_{k}\left[\sum_{(s,t)\in A}W_{st}|k_{t}-k_{s}-a_{st}|\right]\,, \tag{18}\] where \(k_{i}\) are the labels that will determine the original phase as per Equation (1), \(A\) is the set of pixels in the SAR image, and \(W_{st}\) are weights defining the neighborhood structure. ## Acknowledgment This research was supported in part by grants from the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery grants program and by Quantum BC.
With the current progress of quantum computing, quantum annealing is being introduced as a powerful method for solving hard computational problems. In this paper, we study the potential capability of quantum annealing in solving the phase unwrapping problem, one such hard computational problem. To solve the phase unwrapping problem with quantum annealing, we use the D-Wave Advantage machine, currently the largest available quantum annealer. The structure of this machine, however, does not match the structure of our problem graph. The problem graph therefore has to be mapped onto the Pegasus graph, and this mapping strongly affects the quality of the results. According to D-Wave's reports and our experimental results, lower chain lengths lead to better quantum annealing performance. In this paper, we propose a new embedding algorithm with the lowest possible chain length for embedding the graph of the phase unwrapping problem onto the Pegasus graph.
2309.11188
Rebellions and Impeachments in a Neural Network Society
Based on a study of the modern presidential democracies in South America, we present a statistical mechanics exploration of the collective, coordinated action of political actors in the legislative chamber that may result in the impeachment of the executive. By representing the legislative political actors with neural networks, we observed that the larger the effective number of presidential-agenda items treated, the smaller the chances for a cross-party dialogue, which, if combined with a decline in the president's public approval rating, could trigger an impeachment process.
Juan Neirotti, Nestor Caticha
2023-09-20T10:18:17
http://arxiv.org/abs/2309.11188v2
# Rebellions and Impeachments in a Neural Network Society ###### Abstract Based on a study of the modern presidential democracies in South America, we present a statistical mechanics exploration of the collective, coordinated action of political actors in the legislative chamber that may result in the impeachment of the executive. By representing the legislative political actors with neural networks, we observed that the larger the effective number of presidential-agenda items treated, the smaller the chances for a cross-party dialogue, which, if combined with a decline in the president's public approval rating, could trigger an impeachment process. ## I Introduction The mechanism for deposition of institutional power in several areas of the world has changed in the last half century. A new pattern of government overthrow took over the traditional military coup, especially in Latin America, as extensively documented in [27]. These abrupt changes of collective behavior are typically seen in a society which exerts pressure on the parliament to promote changes outside the election model by parliamentary impeachment. External pressures, which include elements such as the state of the economy, perception of corruption or their combination, may be distal reasons, but at a closer look the correlated behavior of the parliament follows the emboldenment that derives from the collective perception that there is sufficient strength in the opposition camp to overthrow the executive. Technically still within the realm of constitutional order, despite being associated with affective rather than ideological affinity [14; 15], this transition mechanism seems to bring new theoretical challenges to comparative studies of presidentialism. To illustrate the problem and to better understand the expected characteristics of the model, we will briefly discuss three different instances where the executive was either impeached or nearly impeached by the legislative. The first case corresponds to the presidency of Fernando Collor de Mello, from Brazil. Collor won the 1989 elections in the second round with 54% of the votes, but his party only had 8% of the seats in the Chamber of Deputies and 4% in the Senate. By March 1990, when Collor was sworn into office and his approval ratings were at +60%, the consumer price index rose by 84% (observe the evolution of these numbers in figure 1); at this point Collor launched his first economic plan. But in spite of the extreme measures imposed, government control of inflation proved elusive and popular discontent increased. The application of a second (unsuccessful) economic plan (Collor II), and a number of corruption cases revealed by the press, provoked a plummet in the approval ratings and triggered a number of street demonstrations. With very few allies in the Legislature, impeachment procedures were triggered and by the end of September 1992 the Chamber of Deputies approved (by a vote of 441-38) the impeachment by the Senate. The second case corresponds to the presidency of Carlos Andres Perez, from Venezuela. Perez won his second term in 1988 with 53% of the vote and, in contrast to Collor's case, he was the leader of the largest party in the country. In order to bring under control the critical situation inherited from the previous president, Perez announced an economic package (the Great Turnaround) in February 1989. These measures triggered an abrupt rise in inflation, and an immediate popular response in the form of riots (Caracazo). 
Observe the evolution of the presidential approval rating and the number of popular marches per month in figure 2. The human rights violations occurring during the days of the Caracazo, the increasing number of protests, and media exposes revealing scandals involving members of the administration compromised the credibility of the government and its capacity to control the economy. By September 1991 the followers of president Perez within the ruling party lost their internal elections. Figure 1: Time evolution of the consumer price index (full-black line) [1] and the presidential approval rating (dashed-red line) from 1990 to September 1992 (adapted from [27]). In February and November 1992 there were attempts of coup d'etat. By May 1993 Perez was suspended by the Senate. The third case corresponds to the presidency of Ernesto Samper, from Colombia. He won the elections of June 1994 with 50.6% of the votes. In the weeks following the election, the press began to reveal a possible contribution from the Cali drug cartel to Samper's campaign. The Prosecutor General opened an investigation (Process 8000) and, preemptively, Samper asked the Congress (under his party's control) to also investigate the accusations. The investigations were closed and, although his presidential approval rating continued to decline, Samper kept control of the Congress and managed to finish his tenure (figure 3). Figure 3: Approval and disapproval presidential ratings during Ernesto Samper's tenure, from August 1994 to August 1998 [27]. Figure 2: Time evolution of the presidential approval rating (dashed-red line) and the number of popular marches per month during Carlos Andres Perez's tenure (adapted from [27]). In these three cases we observe patterns also found in other presidencies. These observations can be synthesized as follows: At the beginning of the presidential period the presidential approval ratings are high, and they fluctuate according to the information the electorate receives about the current issues (either policies or scandals). Meanwhile the members of the legislative chamber discuss the president's proposals under the influence of their internal political alliances and the effective pressure exerted by the presidential approval ratings. The more items are discussed in the chamber the more is known about the presidential agenda, which rearranges the chamber's alliances and provides more information to the pollsters, feeding into the cycle of influences and interactions. Naively, we conclude that the actions of the agents in the legislative chamber are based on the opinions they form on the items proposed by the president to be discussed, and that such discussions are modified by the alliances they have with each other and by the pressure from the polls. The larger the difference in opinions between legislative agents and the president, and the lower the presidential ratings, the higher the chances of impeachment. There is a need to devise mechanisms for opinion formation and for alliance formation in order to provide insight for the understanding of the mechanics of modern presidential democracies. Empirical evidence coming from research in psychology supports the idea that there is a cost of dissent, with humans trying to attain social conformity modulated by peer pressure [3; 9; 30; 32], and that conformity is learned from interactions within a social network, e.g. [16]. The cost of dissension on some set of issues can be modeled using techniques of Statistical Mechanics of disordered systems, and there is an extensive literature on polarization [4; 7; 18; 21; 28; 31; 33] and echo chambers [8; 12; 22; 35] in opinion dynamics models. 
Our aim in this paper is to address collective behavior by studying agent-based models, with the specific aim of addressing constitutional impeachments. The model is introduced in Section II. The analytical approach derives from the use of Statistical Mechanics in the space of interactions in the Gardner style [11], here applied not only to a single perceptron agent, but to a population of such agents. The relevant time scales of the different changes that can occur are discussed, and this leads to the methodology appropriate for the analysis. In Section III we present the structure of the agents, the relevant order parameters that characterize the macroscopic state of the society and the analytical framework. The two types of quenched disorder, from the issues under discussion and the graph of interactions, lead, as shown in Section IV, to functional mean-field equations that determine the thermodynamics of the model. In Section V we present analytical results obtained from the study of the saddle-point equations. Readers interested in the lessons that can be gleaned from this toy model should go to Section VI, where a less mathematical interpretation can be found. A short version of the extensive calculations is shown in the Supplementary Material (SM) Section VII.1. ## II The model Our members-of-congress agents are simple neural networks that discuss and emit for-or-against opinions about multidimensional issues. In addition there is a special agent, playing the role of the current executive leader, called the president, to be followed or deposed depending on the intensive parameters of the model. The agenda consists of \(N\)-dimensional binary issues \(\mathbf{\xi}_{\mu}\), and a simple measure of the agenda's complexity is given by \(\alpha=P/N\), where \(P\) is the number of pertinent topics under discussion. The \(a^{\rm th}\) agent's for or against opinion is \(\sigma_{a\mu}\), arising from the issue and its internal state \(\mathbf{J}_{a}\), \(\sigma_{a\mu}=\varphi(\mathbf{J}_{a},\mathbf{\xi}_{\mu})\in\{-1,+1\}\), where \(a\) runs over the members of congress and the president is designated by a label \(B\), and its opinion on issue \(\mathbf{\xi}_{\mu}\) is \(\sigma_{B\mu}=\varphi(\mathbf{B},\mathbf{\xi}_{\mu})\), where \(\mathbf{B}\) is the internal state of \(B\). A specific choice for \(\varphi\) will be postponed for now, but its output is binary, i.e. \(\varphi\in\{-1,+1\}\). The inner circle or clique of a particular congress-agent is represented by an adjacency matrix \(\mathbf{G}\) with entries \(g_{ac}\neq 0\) if agent \(a\) cares about the opinion of agent \(c\) and zero if not. The weighted opinion on issue \(\mathbf{\xi}_{\mu}\) of \(a\)'s peers is \(\Sigma_{a\mu}=\sum_{c}g_{ac}\sigma_{c\mu}\). We consider the cost for agent \(a\) to hold an opinion on the \(\mu\)th issue to arise from two contributions: \[C_{a,\mu}=-\frac{1+\sigma_{B\mu}\sigma_{a\mu}}{2}-\frac{1-\sigma_{B\mu}\sigma _{a\mu}}{2}\sigma_{a\mu}\Sigma_{a\mu}. \tag{1}\] Equation (1) implements a mechanism of _corroboration_ as follows. If agent \(a\) agrees with \(B\), i.e. \(\sigma_{B\mu}\sigma_{a\mu}>0\), only the first term contributes to the cost, which is reduced by one unit. If \(a\) and \(B\) disagree, \(\sigma_{B\mu}\sigma_{a\mu}<0\) and the second term is different from zero. If the weighted opinion of \(a\)'s peers is in agreement with \(B\), i.e. 
\(\sigma_{B\mu}\Sigma_{a\mu}=-\sigma_{a\mu}\Sigma_{a\mu}>0\), the cost increases, and if \(\sigma_{a\mu}\Sigma_{a\mu}>0\) the cost decreases. If agreeing with its peers is less costly than agreeing with \(B\), \(a\) can form a local consensus against \(B\), through corroboration. A simple rearrangement of terms allows us to write the cost as: \[2C_{a,\mu}=-\sigma_{B\mu}\sigma_{a\mu}-\Sigma_{a\mu}\sigma_{a\mu}+\Sigma_{a \mu}\sigma_{B\mu}-1. \tag{2}\] The first term describes the advantage of having the same opinion as the president. The second, of concurring with its peers. The third one can be attributed to the disadvantage that other members of its peer group are in alliance with the president. The last is just an additive constant. The overall cost for the entire congress, defined by the microscopic states \(\{{\bf J}_{a}\}\) of the agents, the topological structure of alliances \({\mathbf{G}}\) and the complete presidential agenda \({\cal A}=\{{\mathbf{\xi}}_{\mu},\sigma_{B\mu}\}_{\mu=1,\ldots P}\), gives the full Hamiltonian cost of the system: \[E(\{{\bf J}_{a}\},{\cal A},{\mathbf{G}})=\sum_{a,\mu}C_{a,\mu}. \tag{3}\] The cost for agent \(a\), expression (2), depends on the \(g_{ac}\) in two places. In the first, which we leave as shown, it describes the interaction of the peers \(c\) with agent \(a\). The second describes the overall influence of the president on the group of peers of \(a\). To simplify matters we take this second contribution to have mean \(\nu\eta_{0}\) and disregard its fluctuations, where \(\eta_{0}\) is the average intensity of the influence exerted by another agent and \(\nu\) the average size of the group. Then the overall cost, up to an additive constant, simplifies to: \[E_{0}(\{{\bf J}_{a}\};{\cal A},{\mathbf{G}})=-\frac{1}{2}\left[(1-\nu\eta_{0})\sum _{a,\mu}\sigma_{B\mu}\sigma_{a\mu}+\sum_{\mu ac}g_{ac}\sigma_{a\mu}\sigma_{c \mu}\right]. \tag{4}\] The choice of techniques used to analyze the system follows from a discussion of the relevant "physical" time scales of the problem. While there is no conservation law that applies to the global cost in any strict sense, there are several relevant time scales associated with this discussion. The agenda under discussion and the political alliances are supposed to remain valid on a long time scale \(\tau_{q}\) of around one year, certainly less than \(\tau_{P}\), the 4 or 5 years of the presidential cycle. For times of the order of \(\tau_{q}\) we expect the variables \(\nu\) (the co-conspirators' clique size), \(\eta_{0}\) (the co-conspirators' interaction strength), and \(\alpha\) (the volume of the agenda covered) to remain constant. Agents interact and may change opinions about the issues on a faster scale \(\tau_{op}\) of a few days. \(\tau_{op}\) is the typical time elapsed between the treatment of subsequent agenda items \({\mathbf{\xi}}_{\mu}\) and \({\mathbf{\xi}}_{\mu+1}\). The expected value of the cost is sufficiently stable on an intermediate time scale \(\tau_{C}\), which is larger than the time scale associated with the dynamics of the agents but much shorter than changes of the issues of national interest, \(\tau_{op}\ll\tau_{C}\ll\tau_{q}<\tau_{P}\). \(\tau_{C}\) is of the order of weeks, similar to the validity time of presidential poll data. 
This separation of the time scales determines the methodology of analysis of the problem and leads to a description of the system with a Boltzmann distribution with a parameter \(\beta\), conjugate to the expected value of the cost, that controls the size of the fluctuations above the ground state. It can be interpreted as the pressure that society at large exerts on congress. As an example, Reference [2] chooses \(\beta\) as a measure of the president's polling. Since the time scale in which the agenda and the political alliances change is still larger, their random effect can be averaged over as quenched disorder. This is reasonable, since at least during \(\tau_{q}\) the prominent issues of the agenda are to some extent fixed, as are the intra-party alliances. The macroscopic state of the system is characterized by order parameters to be described below. Still within the presidential cycle, changes due to externalities may lead to changes in the intensive parameters. Phase transitions may occur from a congress divided into situation and opposition parties to a majoritarian congress that either gives almost unanimous support to the president or is in almost total opposition. These transitions are to a constitutional dictatorship regime or to a state where the conditions for impeachment are ripe. They signal presidential crises driven by the collective properties of congress and not by external or internal military forces that act by simple dissolution of congress. ## III Methods and order parameters We have not yet made explicit the _a priori_ measure of the \(\sigma\) variables. If it were just a product of independent uniform measures, e.g. Ising-like variables, several interesting features of the problem would remain untouched. Thus we opted for more structured agents, which we model by a neural network classifier with a binary for/against, \(\pm 1\) output. In order to keep it analytically manageable, we choose the simplest architecture, the single-layer perceptron. Linearly separable models, in some manner similar to the Rescorla-Wagner model [29] from psychology, have been shown to be useful in describing human performance in several cases. Therefore the dynamical variables of an agent are \(N\) dimensional vectors \({\bf J}_{a}\) and its opinion on an issue is \(\sigma_{a\mu}=\mathrm{sgn}({\bf J}_{a}\cdot{\mathbf{\xi}}_{\mu})\). The issues from the agenda are constructed by choosing independently \(P=\alpha N\) vertices of the \(N\) dimensional hyper-cube with coordinates of absolute value equal to one. Under the assumption that the average value of \(E_{0}\) in equation (4) is approximately constant over a cycle of discussions of order \(\tau_{C}\) and the random agenda and alliances quenched on the \(\tau_{q}\) scale, standard arguments yield the probability distribution of the states of the congress-agents, given by: \[{\cal P}({\bf J}_{a}|\beta,{\cal A},{\bf G})=\frac{1}{Z}{\cal P}_{0}({\bf J}_{ a})\exp\{-\beta E_{0}(\{{\bf J}_{a}\};{\cal A},{\bf G})\}, \tag{5}\] where \({\cal P}_{0}({\bf J})=(2\pi{\rm e})^{-N/2}\,\delta\left({\bf J}\cdot{\bf J}-N\right)\) is the _a priori_ measure of the agents' weights, taken to be independent and uniform over the spherical shell of radius \(\sqrt{N}\) in \(N\) dimensions. The discussion about the separation of time scales requires the use of quenched disorder. 
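Before averaging over this disorder analytically, the finite-size model itself can be simulated directly. The sketch below is a toy illustration, not the analytical route followed in this paper: it draws an agenda, a president and spherical perceptron agents, evaluates \(E_{0}\) of equation (4), and performs Metropolis updates at pressure \(\beta\) under the Gibbs measure (5). All sizes, the proposal scale, and the sparse alliance matrix are illustrative assumptions.

```python
"""Toy sketch: perceptron congress-agents, the energy E_0 of Eq. (4),
and a Metropolis sweep sampling the Gibbs measure (5)."""
import numpy as np

rng = np.random.default_rng(0)
N, M, P = 40, 20, 60                   # issue dimension, agents, agenda size (alpha = P/N)
beta, nu_eta0 = 2.0, 0.8               # pressure and mean peer influence

xi = rng.choice([-1.0, 1.0], size=(P, N))          # agenda: hypercube vertices
B = rng.standard_normal(N); B *= np.sqrt(N) / np.linalg.norm(B)
J = rng.standard_normal((M, N))
J *= np.sqrt(N) / np.linalg.norm(J, axis=1, keepdims=True)   # spherical constraint

# Illustrative directed alliance graph: sparse Gaussian couplings g_ac.
p = 0.2
eta0, delta = nu_eta0 / (2 * p * (M - 1)), 0.01
G = (rng.random((M, M)) < p) * rng.normal(eta0, delta, (M, M))
np.fill_diagonal(G, 0.0)

def energy(J):
    s = np.sign(J @ xi.T)              # opinions sigma_{a mu}, shape (M, P)
    sB = np.sign(xi @ B)               # president's opinions, shape (P,)
    return -0.5 * ((1 - nu_eta0) * np.sum(sB * s)
                   + np.einsum('ac,am,cm->', G, s, s))

def metropolis_sweep(J, step=0.1):
    for a in range(M):
        old, e_old = J[a].copy(), energy(J)
        trial = J[a] + step * rng.standard_normal(N)
        J[a] = trial * np.sqrt(N) / np.linalg.norm(trial)     # stay on the sphere
        dE = energy(J) - e_old
        if dE > 0 and rng.random() > np.exp(-beta * dE):
            J[a] = old                                        # reject the move
    return J

for _ in range(5):
    J = metropolis_sweep(J)
print('E0 per agent and issue:', energy(J) / (M * P))
```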
The macroscopic properties of the system are obtained from the free energy \(f=-\beta^{-1}\overline{\ln Z}\), averaged over the possible agendas and alliances, taken to be fixed on the relevant time scale. The interactions between agents, encoded in the matrix \(\mathbf{G}\), are assumed independent of each other and identically distributed. They are constructed in two steps. First, a Bernoulli variable with parameter \(p\) is used to decide if there is a connection present between two peers. Then, the strength of their interaction \(\eta\) is drawn from a Normal distribution centered at \(\eta_{0}\) with variance \(\Delta^{2}.\) This leads to \(\nu=2p(M-1)\) (see equation (66)), where the factor 2 accounts for the fact that the graph is directed; then: \[{\cal P}(g,\eta,x|\nu,\eta_{0},\Delta^{2}) = {\cal P}(g|\eta,x){\cal P}(x|p){\cal P}(\eta|\eta_{0},\Delta^{2}) \tag{6}\] \[= \delta(g-x\eta)\left[(1-p)\delta(x)+p\delta(x-1)\right]{\cal N}( \eta|\eta_{0},\Delta^{2}).\] The problem is complicated by the existence of two sources of quenched disorder, the agenda \({\cal A}\) and the alliances, encoded in the matrix \(\mathbf{G}\). An adaptation of ideas introduced in References [5; 24; 34] to treat problems associated with coding theory is needed here. Due to the technical impossibility of computing the average of a logarithm we proceed by applying the replica formalism [23], i.e.: \[f=-\beta^{-1}\lim_{n\to 0}\frac{\overline{Z^{n}}-1}{n}, \tag{7}\] where \(Z^{n}=\prod_{\gamma=1}^{n}Z^{\gamma}\) is the partition function of the replicated system, each of the \(n\) systems labeled by an index \(\gamma\). Taking expectations over the alliances brings forward the following population-averaged order parameters: \[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\equiv{\mathbb{ E}}\left[\frac{1}{M}\sum_{a}\left(\sigma_{a\mu_{1}}^{\gamma_{1}}\sigma_{B\mu_{1} }\ldots\sigma_{a\mu_{\ell}}^{\gamma_{\ell}}\sigma_{B\mu_{\ell}}\right)\right], \tag{8}\] where \(\varrho_{\mu}^{\gamma}\) is the average agreement of the population with \(B\) on the \(\mu\)-th issue on the \(\gamma\) replica, and \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) are population averages of the agreement of individual \(a\) with \(B\) across systems \(\gamma_{1}\) (on issue \({\mathbf{\xi}}_{\mu_{1}}\)) to \(\gamma_{\ell}\) (on issue \({\mathbf{\xi}}_{\mu_{\ell}}\)). Their expectation values are \(\ell\)-point correlation functions for the opinions. The introduction of these parameters also requires the introduction of conjugate parameters \(\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\). Observe that \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) is the average of a local property, thus the conjugate variable \(\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) must represent the average effect of the local neighborhood on the local agent. By imposing the replica-symmetric ansatz [5; 24; 34], the order parameters should not present any dependency on either replica or agenda item indexes; they should only depend on their number \(\ell\). 
By observing that the order parameters defined in equation (8) satisfy \(-1\leq\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\leq 1\), we suppose the existence of a field \(\tanh(\beta z)\), with \(z\) drawn from two normalized distributions \(\pi(z)\) and \(\hat{\pi}(z)\), such that: \[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\int{\rm d}z \,\pi(z)\tanh^{\ell}(\beta z)\qquad\quad\tilde{\varrho}_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\nu\int{\rm d}z\,\hat{\pi}(z)\tanh^{\ell} (\beta z). \tag{9}\] Graph disorder introduces these two probability densities \(\pi(z)\) and \(\hat{\pi}(s)\), which are functional order parameters describing the level of consensus at the local and neighborhood levels, respectively. It is their behavior that signals the transitions from a two-party equilibrium to a consensus that can be either for or against the presidential agent. Observe that the parameters defined in (9) have been introduced in (72) and (73). The usual order parameters associated with the agents' overlaps and with the president are also introduced: \[R_{a}^{\gamma} = {\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf B}/N),\quad q_{a}^{ \gamma\rho}={\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{a}^{\rho}/N), \tag{10}\] \[W_{ab}^{\gamma} = {\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{b}^{\gamma}/N), \quad t_{ab}^{\gamma\rho}={\mathbb{E}}({\bf J}_{a}^{\gamma}\cdot{\bf J}_{b}^{ \rho}/N). \tag{11}\] Under the assumption of replica symmetric saddle points \(R_{a}^{\gamma}=R\), \(q_{a}^{\gamma\rho}=q\), and, by Reference [26], \(W_{ab}^{\gamma}=t_{ab}^{\gamma\rho}=W\), the properties of the system follow from the extrema of the free energy functional \(f[q,R,\pi,\hat{\pi}]\), whose full expression, equation (12), is derived in Section VII.1. In equation (12) the averages are taken over the following distributed variables \(\eta\sim\mathcal{N}(\eta|\eta_{0},\Delta^{2})\), \(y\sim\mathsf{P}(y|\hat{\pi})\), and \(x\sim 2\mathcal{N}(x|0,1)\mathcal{H}\left(-Rx/\sqrt{q-R^{2}}\right)\), where \(\mathcal{H}(t)\) is the Gardner error function \(\mathcal{H}(t)=\int_{t}^{\infty}\mathrm{d}x\mathcal{N}(x|0,1)\). We used the shorthand \(\epsilon=\epsilon(\beta,\nu\eta,y)=\left[\exp(2\beta(1-\nu\eta_{0}+y))-1\right] ^{-1}\). The free energy is a functional of the normalized distributions \(\pi\) and \(\hat{\pi}\). The distribution of the new variable \(y\) is: \[\mathsf{P}(y|\hat{\pi})\equiv\int\frac{\mathrm{d}\hat{y}}{2\pi} \mathrm{e}^{-iy\hat{y}}\exp\left[\nu\left(\int\mathrm{d}s\hat{\pi}(s)\mathrm{e }^{i\hat{y}s}-1\right)\right]. \tag{13}\] The characteristic function \(\phi_{s}(\hat{y})\) of \(\hat{\pi}(s)\) and the cumulant generating function of \(s\), \(K_{s}(\hat{y})\), are: \[\phi_{s}(\hat{y}) = \int\mathrm{d}s\hat{\pi}(s)\mathrm{e}^{i\hat{y}s}, \tag{14}\] \[K_{s}(\hat{y}) = \log\phi_{s}(\hat{y}). \tag{15}\] We observe that there exists a random variable \(u\) whose cumulant generating function can be defined as \(K_{u}(\hat{y})\equiv\phi_{s}(\hat{y})-1\). 
Adding \(\nu\) independent copies of \(u\) defines \(y=\sum_{i=1}^{\nu}u_{i}\), since: \[\mathsf{P}(y|\hat{\pi}) = \int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy\hat{y}}\exp \left[\nu K_{u}(\hat{y})\right], \tag{16}\] \[= \int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy\hat{y}}\left[ \phi_{u}(\hat{y})\right]^{\nu}, \tag{17}\] where the \(u_{i}\) are random variables with the property that the \(r\)th cumulant of \(u\) is equal to \(\mathbb{E}(s^{r}|\hat{\pi})\), the \(r\)th moment of \(s\). Hence as \(\nu\) grows, \(y\) becomes normal. The \(r\)th order cumulant \(\kappa_{r}^{(y)}\) of \(y\) satisfies: \[\kappa_{r}^{(y)}=\nu\kappa_{r}^{(u)}=\nu\mathbb{E}(s^{r}|\hat{\pi}), \tag{18}\] so the cumulants of \(y\) are constructed by accumulation of the cumulants of \(u\) or of the moments of \(s\). It automatically follows that: \[\mathbb{E}(y|\mathsf{P}) = \nu\mathbb{E}(s|\hat{\pi}) \tag{19}\] \[\mathbb{E}(y^{2}|\mathsf{P})-\mathbb{E}(y|\mathsf{P})^{2} = \nu\mathbb{E}(s^{2}|\hat{\pi}). \tag{20}\] Since \(\mathbb{E}(s^{2}|\hat{\pi})\) turns out to be proportional to \(\eta_{0}^{2}\), \(y\)'s variance is proportional to \(1/\nu\) in the relevant region where \(\nu\eta_{0}\) is of order \(1\). ## IV Saddle point equations The extremum of the free energy (12) is determined by the saddle point equations, which determine the order parameters in a self-consistent way. The distribution \(\hat{\pi}(s)\) satisfies: \[\hat{\pi}(s)=\int\mathrm{d}z\int\mathrm{d}y\,\mathsf{P}(z,s,y|\hat{\pi})=\int \mathrm{d}z\int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\mathsf{P}(z|y)\mathsf{P}( s|z), \tag{21}\] where \[\mathsf{P}(z|y) = \left\langle\delta\left[z-\beta^{-1}g(x;\epsilon,q)\right] \right\rangle_{x} \tag{22}\] \[\mathsf{P}(s|z) = \left\langle\delta\left(s-\beta^{-1}\mathrm{arctanh}\left[ \tanh(\beta\eta)\tanh(\beta z)\right]\right)\right\rangle_{\eta}, \tag{23}\] and \[g(x;\epsilon,q)=\frac{1}{2}\ln\frac{1+\epsilon}{\epsilon}+\frac{1}{2}\ln\frac{ 1-\mathcal{H}_{+}}{\mathcal{H}_{+}};\mathrm{with}\ \mathcal{H}_{+}=\mathcal{H}\left(\sqrt{\frac{q}{1-q}}x\right). \tag{24}\] The equations for \(\pi(z)\) and \(\hat{\pi}(s)\) are: \[\pi(z) = \int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\left\langle\delta\left[z- \beta^{-1}g(x;\epsilon,q)\right]\right\rangle_{x}, \tag{25}\] \[\hat{\pi}(s) = \int\mathrm{d}z\,\pi(z)\left\langle\delta\left(s-\beta^{-1} \mathrm{arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right]\right)\right\rangle_{ \eta}. \tag{26}\] This shows that \(\pi(z)\) is the distribution of the local field \(z\) associated with the agent, which is constructed from the influence of its neighborhood through \(\mathsf{P}(y|\hat{\pi})\), the distribution of consensus in the neighborhood of the agent, and the influence of the agenda through the average over \(x\). These two contributions represent the sources the agent uses to form its opinion. The distribution \(\hat{\pi}(s)\) of the neighborhood effective field acting on the local agent is obtained by averaging over the local agent field through \(\pi(z)\) and over the distribution of influences through the average over \(\eta\). Observe that if there is agreement between agent and president and if the influence between peers is strong (large \(\eta\)), the neighborhood field \(s\) becomes large and positive. If the agent does not give any importance to its peers (\(\eta=0\)), the distribution \(\hat{\pi}(s)\) becomes a delta function centered at zero, and the system decouples. 
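Functional saddle-point equations of this type are commonly handled numerically by a population-dynamics iteration, and the schematic sketch below illustrates one such iteration for (25)-(26). It treats \(q\) and \(R\) as fixed inputs (in the full solution they are determined jointly with the equations of the next paragraph), reads equation (13) as the characteristic function of a compound-Poisson sum, and is not necessarily the procedure used in Section VII.2; all parameter values are illustrative.

```python
"""Schematic population-dynamics iteration for Eqs. (25)-(26),
with q and R treated as fixed inputs. Parameters are illustrative."""
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
beta, nu, eta0, delta = 2.0, 10, 0.08, 0.01
q, R = 0.6, 0.4                                  # here fixed; see Eqs. (27)-(28)

def H(t):                                        # Gardner error function
    return 0.5 * erfc(t / np.sqrt(2.0))

def sample_x():
    """x ~ 2 N(x|0,1) H(-R x / sqrt(q - R^2)), by rejection from N(0,1)."""
    while True:
        x = rng.standard_normal()
        if rng.random() < H(-R * x / np.sqrt(q - R * R)):
            return x

pop_s = rng.normal(eta0, delta, size=5000)       # population representing pi_hat(s)

def update(pop_s):
    new = np.empty_like(pop_s)
    for k in range(len(new)):
        # y from Eq. (13): compound-Poisson sum of neighbourhood fields s
        K = rng.poisson(nu)
        y = pop_s[rng.integers(len(pop_s), size=K)].sum() if K else 0.0
        eps = 1.0 / (np.exp(2 * beta * (1 - nu * eta0 + y)) - 1.0)
        x = sample_x()
        Hp = H(np.sqrt(q / (1 - q)) * x)
        z = (0.5 * np.log((1 + eps) / eps)       # g(x; eps, q) of Eq. (24)
             + 0.5 * np.log((1 - Hp) / Hp)) / beta
        eta = rng.normal(eta0, delta)            # peer coupling strength
        new[k] = np.arctanh(np.tanh(beta * eta) * np.tanh(beta * z)) / beta
    return new

for _ in range(20):
    pop_s = update(pop_s)
print('mean neighbourhood field E[s]:', pop_s.mean())
```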
In addition, these functional saddle point equations also depend on the usual parameters \(q\) and \(R\), which satisfy \[\frac{q-R^{2}}{1-q} = \frac{\alpha}{\pi}\int\mathrm{d}y\,\mathsf{P}(y|\hat{\pi})\int \mathcal{D}x\,\frac{\exp\left(-\frac{qx^{2}}{1-q}\right)\,\mathcal{H}\left(- \frac{Rx}{\sqrt{q-R^{2}}}\right)}{\left[\epsilon+\mathcal{H}\left(-\sqrt{ \frac{q}{1-q}}\,x\right)\right]^{2}} \tag{27}\] \[\frac{R}{\sqrt{1-q}} = \frac{\alpha}{\pi}\sqrt{\frac{q}{q-R^{2}}}\int\mathrm{d}y\, \mathsf{P}(y|\hat{\pi})\int\mathcal{D}x\frac{\exp\left\{-\left(\frac{q}{1-q} +\frac{R^{2}}{q-R^{2}}\right)\frac{x^{2}}{2}\right\}}{\epsilon+\mathcal{H} \left(-\sqrt{\frac{q}{1-q}}\,x\right)}, \tag{28}\] The numerical solution of this set of equations is discussed in Section VII.2. ## V Macroscopic characterization of the model In Section VII.2 we demonstrate that for sufficiently large neighborhoods (\(\nu>O(1)\)), for sufficiently high pressure \(\beta\), and for a very narrow distribution of social strengths, i.e. \(\Delta\ll\eta_{0}\) and \(\mathcal{P}(\eta)=\mathcal{N}(\eta|\eta_{0},\Delta^{2})\), there are three possible solutions for equations (25) and (26). Two of them are pure states: the _conservative_ state, obtained if \(\nu\eta_{0}<1\), and the _polarized_ state, obtained if \(\nu\eta_{0}>1\). The third possible solution is a mixture of the two pure states, which appears in the region of the phase space where dialogue between opposite positions may exist. We define the parameter \(\Lambda\) as: \[\Lambda(R)\equiv\frac{\mathrm{sgn}(R)}{2\beta}\frac{q}{1-q}, \tag{29}\] which allows us to plot a partial phase diagram, presented in figure 7. In the region \((|\Lambda|,\eta_{0})\in\mathbb{A}\) a convex combination of both pure states is found. For \((|\Lambda|,\eta_{0})\notin\mathbb{A}\) the distributions can be expressed as: \[\hat{\pi}_{0}(z) \equiv \mathcal{N}\left(z\left|\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0} ),\eta_{0}^{2}-[\mathcal{I}_{0}^{\star}(\Lambda,\eta_{0})]^{2}+\Delta^{2}\right.\right) \tag{30}\] \[\pi_{0}(s) \equiv \mathcal{N}\left(s\left|1+\nu[\mathcal{I}_{0}^{\star}(\Lambda, \eta_{0})-\eta_{0}]+\frac{1}{2}\Lambda,\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4 }\Lambda^{2}\right.\right) \tag{31}\] \[\mathsf{P}_{0}(y|\hat{\pi}) \equiv \mathcal{N}\left(y\left|\nu\mathcal{I}_{0}^{\star}(\Lambda,\eta_ {0}),\nu(\eta_{0}^{2}+\Delta^{2})\right.\right), \tag{32}\] where \(\mathcal{I}_{0}^{\star}\) is the only solution to \[\mathcal{I}_{0}^{\star}=\eta_{0}\mathrm{erf}\left(\frac{1-\nu\eta_{0}+\nu \mathcal{I}_{0}^{\star}+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+ \Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]}}\right) \tag{33}\] outside region \(\mathbb{A}\) (this equation is developed in Section VII.2, equation (94)). We have observed that in the region of interest \(\nu\eta_{0}\sim O(1)\), the variance of \(\mathsf{P}_{0}(y|\hat{\pi})\) is of order \(O(\nu^{-1})\), therefore we can approximate this distribution by: \[\mathsf{P}_{0}(y|\hat{\pi})\approx\delta(y-\nu\mathcal{I}_{0}^{\star}). 
\tag{34}\] Inside the region \(\mathbb{A}\) we have mixed states described by: \[\hat{\pi}_{\mathrm{m}}(z) \equiv h_{+}\mathcal{N}\left(z\left|\mathcal{I}_{+}^{\star},\Delta^{2} \right.\right)+h_{-}\mathcal{N}\left(z\left|\mathcal{I}_{-}^{\star},\Delta^{2} \right.\right) \tag{35}\] \[\pi_{\mathrm{m}}(s) \equiv \mathcal{N}\left(s\left|1+\nu[\mathcal{I}^{\star}-\eta_{0}]+ \frac{1}{2}\Lambda,\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda^{2}\right.\right) \tag{36}\] \[\mathsf{P}_{\mathrm{m}}(y|\hat{\pi}) \equiv \mathcal{N}\left(y\left|\nu\mathcal{I}^{\star},\nu\left[\left\langle( \mathcal{I}^{\star})^{2}\right\rangle+\Delta^{2}\right]\right.\right), \tag{37}\] where \({\cal I}_{\pm}^{\star}\) are the stable solutions to equation (33) in \(\mathbb{A}\), \(h_{\pm}\) are suitable weights (equation (104)) satisfying \(0\leq h_{\pm}\leq 1\) and \(h_{+}+h_{-}=1\), and \({\cal I}^{\star}\) is the mixed solution: \[{\cal I}^{\star} \coloneqq h_{+}{\cal I}_{+}^{\star}+h_{-}{\cal I}_{-}^{\star} \tag{38}\] \[\left\langle({\cal I}^{\star})^{2}\right\rangle \coloneqq h_{+}({\cal I}_{+}^{\star})^{2}+h_{-}({\cal I}_{-}^{\star})^{2}, \tag{39}\] and, given that for all \((\Lambda,\eta_{0})\in\mathbb{A}\), \(\nu\eta_{0}\sim O(1)\), we have that: \[{\sf P}_{\rm m}(y|\hat{\pi})\approx\delta\left(y-\nu{\cal I}^{\star}\right). \tag{40}\] Applying equations (34) and (40) to equations (27) and (28) produces the following expressions \[\frac{q-R^{2}}{1-q} = \frac{\alpha}{\pi}\left({\rm e}^{\kappa}-1\right)^{2}\int{\cal D }x\,\frac{\exp\left(-\frac{qx^{2}}{1-q}\right)\,{\cal H}\left(-\frac{Rx}{ \sqrt{q-R^{2}}}\right)}{\left[1+\left({\rm e}^{\kappa}-1\right){\cal H}\left( -\sqrt{\frac{q}{1-q}}\,x\right)\right]^{2}} \tag{41}\] \[\frac{R}{\sqrt{1-q}} = \frac{\alpha}{\pi}\left({\rm e}^{\kappa}-1\right)\sqrt{\frac{q}{ q-R^{2}}}\int{\cal D}x\frac{\exp\left\{-\left(\frac{q}{1-q}+\frac{R^{2}}{q-R^{2}} \right)\frac{x^{2}}{2}\right\}}{1+\left({\rm e}^{\kappa}-1\right){\cal H}\left( -\sqrt{\frac{q}{1-q}}\,x\right)}, \tag{42}\] where \(\kappa\equiv 2\beta(1-\nu\eta_{0}+\nu{\cal I}^{\star})\). Observe that these equations are invariant under the following transformation \((\kappa,q,R)\rightarrow(-\kappa,q,-R)\). At very high pressure \(\beta\), the equations (27) and (28) can be expressed as: \[\frac{q_{\pm}-R_{\pm}^{2}}{1-q_{\pm}} = \frac{\alpha}{\pi}\int{\cal D}x\,\frac{\exp\left(-\frac{q_{\pm}x ^{2}}{1-q_{\pm}}\right)\,{\cal H}\left(-\frac{R_{\pm}x}{\sqrt{q_{\pm}-R_{\pm} ^{2}}}\right)}{\left[{\cal H}\left(-\sqrt{\frac{q_{\pm}}{1-q_{\pm}}}\,x\right) \right]^{2}} \tag{43}\] \[\frac{R_{\pm}}{\sqrt{1-q_{\pm}}} = \pm\frac{\alpha}{\pi}\sqrt{\frac{q_{\pm}}{q_{\pm}-R_{\pm}^{2}}} \int{\cal D}x\frac{\exp\left\{-\left(\frac{q_{\pm}}{1-q_{\pm}}+\frac{R_{\pm}^ {2}}{q_{\pm}-R_{\pm}^{2}}\right)\frac{x^{2}}{2}\right\}}{{\cal H}\left(-\sqrt {\frac{q_{\pm}}{1-q_{\pm}}}\,x\right)}, \tag{44}\] where the subindex \(+\) (\(-\)) applies for \(\nu\eta_{0}<1\) (\(\nu\eta_{0}>1\)). The \(\beta\rightarrow\infty\) solutions satisfy \(q_{\pm}=\pm R_{\pm}\). These results justify naming the solution with subindex \(+\) as conservative and the solution with subindex \(-\) as polarized. Similar behavior is observed for finite but large values of the pressure. 
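For readers who wish to reproduce curves such as the one in figure 4, equations (29), (33), (41) and (42) can be iterated to self-consistency. The sketch below shows one possible damped fixed-point scheme with Gauss-Hermite quadrature for the \(\int{\cal D}x\) averages; it glosses over the multiple roots of (33) inside region \(\mathbb{A}\), makes no claim about matching the numerical procedure of Section VII.2, and uses illustrative parameter values.

```python
"""Schematic damped fixed-point iteration for Eqs. (29), (33), (41), (42)."""
import numpy as np
from scipy.special import erf, erfc

def H(t):                                            # Gardner error function
    return 0.5 * erfc(t / np.sqrt(2.0))

# Gauss-Hermite nodes for averages over Dx = N(x|0,1) dx
nodes, w = np.polynomial.hermite.hermgauss(120)
x, w = np.sqrt(2.0) * nodes, w / np.sqrt(np.pi)

def solve_I(Lam, nu, eta0, delta, iters=200):
    """Fixed point of Eq. (33); a single root is assumed here."""
    I = eta0
    for _ in range(iters):
        arg = (1 - nu * eta0 + nu * I + 0.5 * Lam) / \
              np.sqrt(2 * (nu * (eta0**2 + delta**2) + 0.75 * Lam**2))
        I = eta0 * erf(arg)
    return I

def solve_qR(alpha, beta, nu, eta0, delta, iters=500, damp=0.5):
    q, R = 0.5, 0.1 * np.sign(1 - nu * eta0)         # initial guess
    for _ in range(iters):
        Lam = np.sign(R) * q / (2 * beta * (1 - q))  # Eq. (29)
        I = solve_I(Lam, nu, eta0, delta)
        kappa = 2 * beta * (1 - nu * eta0 + nu * I)
        ek = np.expm1(kappa)                         # e^kappa - 1
        denom = 1 + ek * H(-np.sqrt(q / (1 - q)) * x)
        # Right-hand sides of Eqs. (41) and (42)
        A = (alpha / np.pi) * ek**2 * np.sum(
            w * np.exp(-q * x**2 / (1 - q))
            * H(-R * x / np.sqrt(q - R**2)) / denom**2)
        B = (alpha / np.pi) * ek * np.sqrt(q / (q - R**2)) * np.sum(
            w * np.exp(-(q / (1 - q) + R**2 / (q - R**2)) * x**2 / 2) / denom)
        # Invert (q - R^2)/(1 - q) = A and R/sqrt(1 - q) = B for the new (q, R)
        q_new = (A + B**2) / (1 + A + B**2)
        R_new = B / np.sqrt(1 + A + B**2)
        q = (1 - damp) * q + damp * q_new
        R = (1 - damp) * R + damp * R_new
    return q, R

print(solve_qR(alpha=2.0, beta=2.0, nu=10, eta0=0.08, delta=0.01))
```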
For sufficiently large pressures, sufficiently large \(\nu\) and a volume of information \(\alpha\gg\beta\), we can also demonstrate that: \[q = 1-\frac{Q_{0}^{2}}{\alpha^{2}}+o(\alpha^{-2}) \tag{45}\] \[Q_{0} = \frac{(2\pi)^{3/2}}{2+\sqrt{\pi}} \tag{46}\] and \[R=\begin{cases}q+2\pi\sqrt{3}Q_{0}^{3}\alpha^{-3}{\rm e}^{-2\beta}&\nu\eta_{0 }<1\\ -q-2\pi\sqrt{3}Q_{0}^{3}\alpha^{-3}{\rm e}^{-2\beta(\nu\eta_{0}-1)}&1<\nu\eta_ {0}\end{cases}. \tag{47}\] Due to the odd parity of \(R(\kappa)\), we can conclude that the plane \((\nu\eta_{0},\beta^{-1})\) is divided into two phases, a conservative phase for which \(R>0\) and \(\nu\eta_{0}<1\) and a polarized phase with \(R<0\) and \(\nu\eta_{0}>1\). Consider the set of points with coordinates \((\beta^{-1},\nu\eta_{0}=1,\alpha)\) such that the parameter defined in equation (29) becomes \(|\Lambda^{\star}|=0.411\) (see figure 7). In consequence, the solution of equation (33) over this line is \(\nu{\cal I}^{\star}=0.988\) and the corresponding index \(\kappa\) becomes a function of the overlap \(q\), i.e. \[\kappa^{\star}(q)=\frac{\nu{\cal I}^{\star}}{|\Lambda^{\star}|}\frac{q}{1-q}. \tag{48}\] By solving the equations (41) and (42) with \(\kappa\) given by (48), we obtain the curve presented in figure 4. We solve for the properties of the equilibrium state, valid on the \(\tau_{q}\) time scale, in terms of the intensive parameters of the system: the pressure \(\beta\), the complexity of the agenda \(\alpha\), and \(\nu\eta_{0}\), a measure of the peer pressure exerted by other agents in congress, which arises from the mean number of interlocutors \(\nu\) and the mean intensity of their interaction \(\eta_{0}\). These results are presented as the phase diagrams shown in figure 5. By fixing the value of \(\alpha\), we can study the behavior of the system for a given volume of information. We constructed figure 6 by solving equation (33) for different values of \(\beta^{-1}\) and \(\nu\eta_{0}\) at fixed values of \(\alpha\) with \(\nu=10\) and \(\Delta=0.01\). Figure 4: Critical pressure against volume of information. When the number of issues treated in the legislature is not sufficiently large, there is always a phase around the opinion boundary \(\nu\eta_{0}=1\) where polarized and conservative states coexist. This area represents the collection of points \((\alpha,\beta^{-1})\) where a discussion between members of the chamber with different positions may occur. There is a critical volume of information \(\alpha^{*}=1.534(1)\) below which there is always room for discussion, no matter how high the pressure \(\beta^{-1}\) is. Above this threshold there is always a maximum pressure \(\beta^{-1}(\alpha)\) such that above it there is no more discussion and positions are definitely set. Figure 5: Phase diagram of the system in terms of the parameters \(\nu\eta_{0}\), \(\beta^{-1}\) and \(\alpha\). There are two phases separated by the plane \(\nu\eta_{0}=1\). For \(\nu\eta_{0}<1\) we have that \(R>0\) and the average consensus is in favor of \(B\). For \(\nu\eta_{0}>1\), \(R<0\) and the average position of the agents is to form local alliances against the president \(B\). In all the points of the space above the surfaces \(\nu\eta_{+}\) and \(\nu\eta_{-}\), the distribution describing the position of the neighborhood, given by \(\hat{\pi}(z)\), is sharply peaked at \(+\eta_{0}\), for the conservative position, i.e. \(\nu\eta_{0}<1\), or at \(-\eta_{0}\), for the polarized position, i.e. \(\nu\eta_{0}>1\). 
In the region bellow the surfaces \(\nu\eta_{+}\) and \(\nu\eta_{-}\) we have the same phase separation at \(\nu\eta_{0}=1\) but the contribution from the neighborhood is a mixture of a polarized component plus a conservative component. The circle at coordinates \(\nu\eta_{0}=1\), \(\beta^{-1}=0.824\) and \(\alpha=1.534\) is the critical point presented in figure 4. The phase diagram presented in figure 6 has been obtained by cutting sections at constant \(\alpha\) from this three-dimensional plot, and the the red sphere corresponds to the first value of \(\alpha\) (\(=\alpha^{*}\)) for which the behavior presented in figure 6 c) is observed. full lines separate pure-state areas [in white, for \(R<0\) and in dark gray (orange on-line) for \(R>0\), given by equations (30), (31), and (32)] from mixed-state areas [in gray (yellow on-line), given by equations (35), (36), and (37)]. We also found that for values of \(\alpha<\alpha^{\star}=1.534(1)\) the mixed states are contained into a mixed-triangular-shaped area, with vertexes at \((\beta^{-1}=0,\nu\eta_{0}=1.651(1))\), \((\beta^{-1}=0,\nu\eta_{0}=0.717(1))\), and \((\beta=\beta(\alpha),\nu\eta_{0}=1).\) In particular we observe that \(\beta^{\star}\equiv\beta(\alpha^{\star})\approx 1.214(1)\) and for all \(\alpha^{\star}<\alpha^{\prime}<\alpha\), \(\beta(\alpha)>\beta(\alpha^{\prime})>\beta(\alpha^{\star}).\) The lightly shaded (yellow on-line) region close to the boundary (\(\nu\eta_{0}=1\)) is characterized by a mixture of states that represents a state of dialog, where the influence on the agents from their neighborhoods come from both sides of the argument. The larger the complexity of the agenda (\(\alpha\)) the smaller the size of this region. To complete our analysis and for very low values of \(\beta\) we obtain the following values for the parameters \(R\) and \(q\): \[q \approx \frac{2\alpha}{\pi}\beta^{2}(1-\nu\eta_{0})^{2}\left(\frac{2 \alpha}{\pi}+1\right) \tag{49}\] \[R \approx \frac{2\alpha}{\pi}\beta(1-\nu\eta_{0})[1-2\beta(1-\nu\eta_{0})]. \tag{50}\] Discussion and conclusions During the last decade [6] the application of statistical mechanics techniques to model social problems have produced a number of interesting results, not only providing new insight to the discussion of social phenomena but also showing predictive capabilities [10]. Inspired by these ideas, we have develop a model for the phenomenon of impeachment in presidential democracies. The political agents, represented by perceptrons, interact with an external meta-agent \(B\), which represents the executive, and with peers in the legislative chamber. The model has been tailored to balance the need to explain observed behavior [17], the complexity of the social interactions [19; 31], and the analytical tractability of the mathematical expressions constructed. It is important to note that, for sufficiently large number of alliances \(\nu>O(1)\), the saddle point equations (25 to 28) can be solved in pairs. The first two (25) and (26) involving the distributions \(\pi\) and \(\hat{\pi}\) connected with the distribution of alliances and the pair (27) and (28), connected to the parameters associated with the discussion of the presidential agenda. The solution to (25) and (26) has been expressed using the parameter \(\Lambda\) defined in equation (29), which brings an input from the disorder-from-learning part of the problem into the disorder-from-graph part of the problem. 
In a similar manner, the parameter \(\kappa\) that helps to express the solution of equations (27) and (28) introduces effects from the disorder-from-graph part of the problem into the disorder-from-learning part of the problem. The constraints that emerged from expressing the solution in terms of these parameters helped construct the phase diagram presented in figure 7. We obtained a set of sensible results for a topology represented by a directed graph with an average of \(\nu\) links per vertex (the number of co-conspirators). In this setting, and considering a steady president, i.e. \(B\) constant, we found that there exist two possible pure positions: one characterized by an overall average attitude in favor of the president, with a positive and increasing (with the volume of information) average agreement \(R\), that we dubbed the conservative state, and another with a negative and decreasing value of \(R\), the polarized state. These states are also characterized by a sharply peaked distribution of neighbors' influences \(\hat{\pi}\), centered at \(\mathcal{I}_{+}^{\star}\) (\(\mathcal{I}_{-}^{\star}\)) for the conservative (polarized) state. From figure 6 we also showed that for volumes of information below a critical value \(\alpha^{\star}=1.534(1)\), there is a region in the plane \((\nu\eta_{0},\beta^{-1})\), in the form of a band around \(\nu\eta_{0}=1\), where mixed states, defined by the equations (35), (36), and (37), exist. The mixture is explicit in equation (35), which presents the influence on an agent by its neighborhood (\(\hat{\pi}\)) as a combination of the two sides of the argument. This band of mixed states collapses into a triangle with vertexes at \((\beta^{-1}=0,\nu\eta_{0}=1.651(1))\), \((\beta^{-1}=0,\nu\eta_{0}=0.71845(1))\), and \((\beta=\beta(\alpha),\nu\eta_{0}=1)\), for values of \(\alpha>\alpha^{\star}\). We also observed that the larger the volume of information, the smaller the triangle area, i.e. \(\alpha>\alpha^{\prime}\) implies that \(\beta(\alpha)>\beta(\alpha^{\prime})\). The interpretation of this behavior is as follows: when information is limited (low \(\alpha\)) and for values of effective co-conspirators \(\nu\eta_{0}\simeq 1\), the influence from the neighborhood on the agent is formed by a combination of positions in favor of and against the executive. In this region the overlap \(R\), which represents the average agreement with \(B\), still has a well defined sign, given by \(\mathrm{sgn}(R)=\mathrm{sgn}(1-\nu\eta_{0})\), but it is the result of two pure-state contributions. In this region the two positions, for and against the executive \(B\), coexist. Definite positions are not yet set, thus propitiating a state of dialogue. The more information is fed to the system, the smaller this region becomes. There is a critical value of information, \(\alpha^{\star}=1.534(1)\), beyond which this behavior is only observed for pressures lower than \(\beta^{\star}=1.214(1)\). In other words, the more information is provided, the purer the contribution to the agents' opinion from their neighborhoods and the lower the chances for a dialog between opposed positions. For very large values of \(\alpha\) and \(\nu\eta_{0}\approx 1\), coexistence exists only if the pressure \(\beta\) is sufficiently high. Thus, only a president with a high index of popularity can guarantee a discussion of the topics in the agenda between opposite positions of the legislative chamber.
In light of the cases used as motivation for our model, we observe that there are events, represented by particular items of the executive's agenda, that are so momentous in the formation of opinions that they can be considered critical (Collor de Mello's economic plans, Perez's Great Turnaround), to the point that, immediately after they occur, opposite positions for or against the executive's proposals become more consolidated, the influence of the neighborhood on the agents becomes more polarized (in either position), and the dialogue-prone region gets reduced. If the public rejects the proposals, \(\beta\) diminishes and the executive may find itself in front of a polarized legislative chamber that either supports it (Samper's case) or not (Collor de Mello's case). If the negative information instances persist and neither the public nor the chamber supports the president, the executive may find itself facing an impeachment procedure. A natural extension to this work comes through the consideration of the case of a changing \(B\). In a previous work [25] we studied the evolution of opinions in the presence of an adaptive social rule that slowly changes following the average position of the population. As a consequence, the contribution from socially neutral issues (i.e. issues \(\boldsymbol{\xi}_{0}\) such that \(\mathbf{B}\cdot\boldsymbol{\xi}_{0}=0\)) becomes relevant [13], as can be observed from the presence of the parameter \(W\), which represents the overlap between the representations of different agents (and is a measure of the level of agreement between them). We expect that, if a similar setting is imposed in the present framework, the free energy functional so obtained should also depend on the parameter \(W\), revealing the contribution of the socially neutral issues to the system. **Acknowledgment:** This work received partial support from CNAIPS-NAP USP. 
## VII Supplementary Material ### Calculation of the averages over \(\mathcal{A}\), \(\mathbf{B}\) and \(\boldsymbol{G}\) By observing that \(\mathcal{P}(g_{ac})=\int\mathrm{d}\eta_{ac}\mathcal{N}(\eta_{ac}|\eta_{0}, \Delta^{2})\sum_{x_{ac}=0,1}[p\delta_{x_{ac},1}+(1-p)\delta_{x_{ac},0}]\, \delta(g_{ac}-x_{ac}\eta_{ac})\), where the Kronecker's delta is \(\delta_{X,Y}=1\) if \(X=Y\) and \(0\) otherwise and Dirac's delta is \(\int_{\Omega}\mathrm{d}x\delta(x-x_{0})=1\) if \(x_{0}\in\Omega\) and \(0\) otherwise, the replicated partition function is: \[\overline{Z^{n}}(\beta) \equiv \int\mathrm{d}\mathbf{B}\mathcal{P}(\mathbf{B})\int\prod_{\mu} \mathrm{d}\boldsymbol{\xi}_{\mu}\mathcal{P}(\boldsymbol{\xi}_{\mu})\prod_{a} \prod_{c}\int\mathrm{d}g_{ac}\mathcal{P}(g_{ac})\int\prod_{\gamma=1}^{n}\prod _{a}\mathrm{d}\mathbf{J}_{a}^{\gamma}\mathcal{P}(\mathbf{J}_{a}^{\gamma}) \tag{51}\] \[\prod_{\gamma\mu a}\exp\left\{\beta\sum_{c\in\mathbb{N}_{a}}x_{ ac}\eta_{ac}\mathrm{sgn}\left(\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{ \mu}}{\sqrt{N}}\right)\mathrm{sgn}\left(\frac{\mathbf{J}_{c}^{\gamma}\cdot \boldsymbol{\xi}_{\mu}}{\sqrt{N}}\right)\right\}\] \[\prod_{\gamma\mu a}\exp\left\{(1-\nu\eta_{0})\beta\mathrm{sgn} \left(\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}} \right)\mathrm{sgn}\left(\frac{\mathbf{B}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N} }\right)\right\},\] and by defining the variables: \[\lambda_{a,\mu}^{\gamma}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{ \xi}_{\mu}}{\sqrt{N}},\qquad u_{\mu}\equiv\frac{\mathbf{B}\cdot\boldsymbol{ \xi}_{\mu}}{\sqrt{N}} \tag{52}\] and, by defining the overlaps: \[R_{a}^{\gamma}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot\mathbf{B }}{N},\qquad\quad W_{ab}^{\gamma}=\frac{\mathbf{J}_{a}^{\gamma}\cdot\mathbf{J }_{b}^{\gamma}}{N},\] \[q_{a}^{\gamma\rho}\equiv\frac{\mathbf{J}_{a}^{\gamma}\cdot \mathbf{J}_{a}^{\rho}}{N},\qquad\quad t_{ab}^{\gamma\rho}\equiv\frac{\mathbf{J }_{a}^{\gamma}\cdot\mathbf{J}_{b}^{\rho}}{N}, \tag{53}\] we have that the expectation over patterns is: \[\left\langle\cdot\right\rangle_{\mathcal{A}} \equiv \int\prod_{\mu}\mathrm{d}\boldsymbol{\xi}_{\mu}\mathcal{P}( \boldsymbol{\xi}_{\mu})\exp\left(i\sum_{\gamma\mu a}\hat{\lambda}_{a\mu}^{ \gamma}\frac{\mathbf{J}_{a}^{\gamma}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}}+i \sum_{\mu}\hat{u}_{\mu}\frac{\mathbf{B}\cdot\boldsymbol{\xi}_{\mu}}{\sqrt{N}}\right) \tag{54}\] \[= \int\prod_{\gamma a}\frac{\mathrm{d}R_{a}^{\gamma}\mathrm{d}\hat{ R}_{a}^{\gamma}}{2\pi/N}\,\exp\left(i\sum_{\gamma a}\hat{R}_{a}^{\gamma}(NR_{a}^{ \gamma}-\mathbf{J}_{a}^{\gamma}\cdot\mathbf{B})\right)\] \[\int\prod_{\gamma}\prod_{a<b}\frac{\mathrm{d}W_{ab}^{\gamma} \mathrm{d}\hat{W}_{ab}^{\gamma}}{2\pi/N}\,\exp\left(i\sum_{\gamma}\sum_{a<b} \hat{W}_{ab}^{\gamma}(NW_{ab}^{\gamma}-\mathbf{J}_{a}^{\gamma}\cdot\mathbf{J}_ {b}^{\gamma})\right)\] \[\int\prod_{a}\prod_{\gamma<\rho}\frac{\mathrm{d}q_{a}^{\gamma\rho} \mathrm{d}\hat{q}_{a}^{\gamma\rho}}{2\pi/N}\,\exp\left(i\sum_{a}\sum_{\gamma< \rho}\hat{q}_{a}^{\gamma\rho}(Nq_{a}^{\gamma\rho}-\mathbf{J}_{a}^{\gamma}\cdot \mathbf{J}_{a}^{\rho})\right)\] \[\int\prod_{\gamma<\rho}\prod_{a<b}\frac{\mathrm{d}t_{ab}^{\gamma \rho}\mathrm{d}\hat{q}_{ab}^{\gamma\rho}}{2\pi/N}\,\exp\left(i\sum_{a<b}\sum_{ \gamma<\rho}\hat{t}_{ab}^{\gamma\rho}(Nt_{ab}^{\gamma\rho}-\mathbf{J}_{a}^{\gamma }\cdot\mathbf{J}_{b}^{\rho})\right)\] \[\exp\left\{-\frac{1}{2}\sum_{\mu}\left[\sum_{\gamma a}\left(\hat{ \lambda}_{a\mu}^{\gamma}\right)^{2}+2\sum_{\gamma a}\sum_{\gamma<\rho}\hat{ 
\lambda}_{a\mu}^{\gamma}\hat{\lambda}_{a\mu}^{\rho}q_{a}^{\gamma\rho}+2\sum_{ \gamma a}\sum_{a<b}\hat{\lambda}_{a\mu}^{\gamma}\hat{\lambda}_{b\mu}^{\gamma}W_{ ab}^{\gamma}+\right.\right.\] \[\left.\left.+2\sum_{\gamma a}\sum_{\gamma<\rho}\sum_{a<b}\hat{ \lambda}_{a\mu}^{\gamma}\hat{\lambda}_{b\mu}^{\rho}t_{ab}^{\gamma\rho}+2\sum_{ \gamma a}\hat{u}_{\mu}\hat{\lambda}_{a\mu}^{\gamma}R_{a}^{\gamma}+\hat{u}_{\mu} ^{2}\right]\right\}+O(N^{-1}).\] By considering the distribution of the synaptic vector \(\mathbf{B}\) as \(\mathcal{P}(\boldsymbol{B})=\prod_{k}\delta(B_{k}-1)\) and by defining the matrices: \[[\boldsymbol{\hat{Q}}]_{a,b}^{\gamma,\rho} \equiv i\left\{\delta^{\gamma,\rho}\left(\delta_{a,b}\hat{\ell}_{a}^{ \gamma}+(1-\delta_{a,b})\hat{W}_{a,b}^{\gamma}\right)+(1-\delta^{\gamma,\rho}) \left(\delta_{a,b}\hat{q}_{a}^{\gamma,\rho}+(1-\delta_{a,b})\hat{t}_{a,b}^{ \gamma,\rho}\right)\right\} \tag{55}\] \[[{\mathbf{Q}}]^{\gamma,\rho}_{a,b} \equiv \delta^{\gamma,\rho}\left(\delta_{a,b}+(1-\delta_{a,b})W^{\gamma}_{a, b}\right)+(1-\delta^{\gamma,\rho})\left(\delta_{a,b}q^{\gamma,\rho}_{a}+(1- \delta_{a,b})t^{\gamma,\rho}_{a,b}\right) \tag{56}\] we have that the average over synaptic vectors become: \[\langle\cdot\rangle_{{\bf B},\{{\bf J}^{\gamma}_{a}\}} = \int\prod_{\gamma,a}\frac{{\rm d}\hat{\ell}^{\gamma}_{a}}{4\pi} \exp\left(i\frac{N}{2}\sum_{\gamma,a}\hat{\ell}^{\gamma}_{a}-N\ln|\hat{\mathbf{Q}} |-\frac{1}{2}\sum_{a,b}\sum_{\gamma,\rho}\hat{R}^{\gamma}_{a}[\hat{\mathbf{Q}}^{-1} ]^{\gamma,\rho}_{a,b}\hat{R}^{\rho}_{b}-\frac{nNM}{2}\right), \tag{57}\] which renders the following expression for the partition function: \[\overline{Z^{n}}(\beta) = \int\prod_{\gamma a}\frac{{\rm d}\hat{\ell}^{\gamma}_{a}}{4\pi} \int\prod_{\gamma a}\frac{{\rm d}R^{\gamma}_{a}{\rm d}\hat{R}^{\gamma}_{a}}{2 \pi/N}\int\prod_{\gamma}\prod_{a<b}\frac{{\rm d}W^{\gamma}_{ab}{\rm d}\hat{W}^ {\gamma}_{ab}}{2\pi/N}\int_{a}\prod_{\gamma<\rho}\frac{{\rm d}q^{\gamma\rho}_{ a}{\rm d}\hat{q}^{\gamma\rho}_{a}}{2\pi/N}\int\prod_{\gamma<\rho}\prod_{ab}\frac{{ \rm d}t^{\gamma\rho}_{ab}{\rm d}\hat{t}^{\gamma\rho}_{ab}}{2\pi/N} \tag{58}\] \[\exp\left(\frac{N}{2}\mbox{tr}{\mathbf{Q}}\hat{\mathbf{Q}}-\frac{N}{2} \ln|\hat{\mathbf{Q}}|-\frac{N}{2}\sum_{ab}\sum_{\gamma,\rho}\hat{R}^{\gamma}_{a} \left[\hat{\mathbf{Q}}^{-1}\right]^{\gamma\rho}_{ab}\hat{R}^{\rho}_{b}+iN\sum_{ \gamma a}\hat{R}^{\gamma}_{a}R^{\gamma}_{a}-\frac{nNM}{2}\right)\] \[\int\prod_{\gamma\mu a}\frac{{\rm d}\lambda^{\gamma}_{a\mu}{\rm d }\hat{\lambda}^{\gamma}_{a\mu}}{2\pi}\exp\left(-i\sum_{\gamma\mu a}\hat{ \lambda}^{\gamma}_{a\mu}\lambda^{\gamma}_{a\mu}\right)\] \[\int\prod_{\mu}{\cal D}u_{\mu}\exp\left(i\sum_{\gamma\mu a}\hat {\lambda}^{\gamma}_{a\mu}R^{\gamma}_{a}u_{\mu}+(1-\nu\eta_{0})\beta\sum_{ \gamma\mu a}\mbox{sgn}(\lambda^{\gamma}_{a\mu}u_{\mu})\right)\] \[\exp\left\{-\frac{1}{2}\sum_{\mu}\left[\sum_{\gamma a}\left[1-(R^ {\gamma}_{a})^{2}\right]\left(\hat{\lambda}^{\gamma}_{a\mu}\right)^{2}+2\sum_{ \gamma a}\sum_{\gamma<\rho}\left[q^{\gamma\rho}_{a}-R^{\gamma}_{a}R^{\rho}_{a} \right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\rho}_{a\mu}+\right.\right.\] \[\left.\left.+2\sum_{\gamma a}\sum_{a<b}\left[W^{\gamma}_{ab}-R^{ \gamma}_{a}R^{\gamma}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{ \gamma}_{b\mu}+2\sum_{\gamma a}\sum_{\gamma<\rho}\sum_{b}\left[t^{\gamma\rho}_ {ab}-R^{\gamma}_{a}R^{\rho}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{ \lambda}^{\rho}_{b\mu}\right]\right\}\] \[\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ac 
}\eta_{ac}\mbox{sgn}(\lambda^{\gamma}_{a\mu}\lambda^{\gamma}_{c\mu})\right\} \right\rangle_{\!\!\!\!G}+O(N^{-1})\] in the limit of large \(N\) we find that \[\hat{R}^{\gamma}_{a} = i\sum_{\rho,b}[\hat{\mathbf{Q}}]^{\gamma,\rho}_{a,b}R^{\rho}_{b} \tag{58}\] \[\left[\hat{\mathbf{Q}}^{-1}\right]^{\gamma,\rho}_{a,b} = [{\mathbf{K}}]^{\gamma,\rho}_{a,b}\equiv[{\mathbf{Q}}]^{\gamma,\rho}_{a,b}- R^{\gamma}_{a}R^{\rho}_{b} \tag{59}\] and to express the extraction of the asymptotic behavior of integrals of the form \(I_{N}\equiv\int_{x_{1}}^{x_{2}}{\rm d}x\,{\rm e}^{-Ng(x)}\) in the limit \(N\to\infty\) through Laplace's method, we denote: \(\mbox{extr}_{x}\,I_{N}\equiv{\rm e}^{-Ng(x_{0})+O(\log N)}\) where \(x_{0}\) is such that \(g(x_{0})\leq g(x)\) for all \(x\in[x_{1},x_{2}]\), so we can write: \[\overline{Z^{n}}(\beta) = \mbox{extr}_{\mathbf{K}}\left\{\exp\left(\frac{N}{2}\ln|{\mathbf{K}}| \right)\right. \tag{60}\] \[\int\prod_{\gamma\mu a}\frac{{\rm d}\lambda^{\gamma}_{a\mu}{\rm d }\hat{\lambda}^{\gamma}_{a\mu}}{2\pi}\exp\left(-i\sum_{\gamma\mu a}\hat{ \lambda}^{\gamma}_{a\mu}\lambda^{\gamma}_{a\mu}\right)\] \[\int\prod_{\mu}{\cal D}u_{\mu}\,\exp\left(i\sum_{\gamma a}\hat{ \lambda}^{\gamma}_{a\mu}R^{\gamma}_{a}u_{\mu}+(1-\nu\eta_{0})\beta\sum_{\gamma \mu a}\mbox{sgn}(\lambda^{\gamma}_{a\mu}u_{\mu})\right)\] \[\exp\left[-\frac{1}{2}\sum_{\gamma\mu a}\left[1-(R^{\gamma}_{a})^ {2}\right]\left(\hat{\lambda}^{\gamma}_{a\mu}\right)^{2}-\sum_{\gamma\mu a}\sum_ {\gamma<\rho}\left[q^{\gamma\rho}_{a}-R^{\gamma}_{a}R^{\rho}_{a}\right]\hat{ \lambda}^{\gamma}_{a\mu}\hat{\lambda}^{\rho}_{a\mu}-\right.\right.\] \[\left.\left.-\sum_{\gamma\mu a}\sum_{a<b}\left[W^{\gamma}_{ab}-R^{ \gamma}_{a}R^{\gamma}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{\lambda}^{ \gamma}_{b\mu}-\sum_{\gamma\mu a}\sum_{\gamma<\rho}\sum_{b}\left[t^{\gamma\rho}_ {ab}-R^{\gamma}_{a}R^{\rho}_{b}\right]\hat{\lambda}^{\gamma}_{a\mu}\hat{ \lambda}^{\rho}_{b\mu}\right]\] \[\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ac }\eta_{ac}\mbox{sgn}(\lambda^{\gamma}_{a\mu})\mbox{sgn}(\lambda^{\gamma}_{c\mu })\right\}\right\rangle_{\!\!\!\!G}\Bigg{\}},\] where \({\cal D}x\equiv(2\pi)^{-1/2}{\rm d}x\,\exp(-x^{2}/2)\). Also, by imposing the Replica Symmetric (RS) ansatz: \(R_{a}^{\gamma}=R,\,q_{a}^{\gamma,\rho}=q\), and by following [26] we can assume that \(t_{ab}^{\gamma\rho}=W_{ab}^{\gamma}=W,\) then by defining \[{\cal C}_{a\mu}\equiv\frac{\sqrt{W-R^{2}}y_{\mu}+Ru_{\mu}+\sqrt{q-W}y_{a\mu}}{ \sqrt{1-q}} \tag{62}\] and by observing that the logarithm of the matrix \(K\) in the RS approach is \[\ln|\mathbf{K}|=nM\left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)+O(n^{2}), \tag{63}\] we have that, after the integration over the variables \(\{\hat{\lambda}_{a\mu}^{\gamma}\}\), the partition function becomes: \[\overline{Z^{n}}(\beta) = \mathop{\rm extr}_{RqW}\left\{\exp\left[\frac{nNM}{2}\left(\ln(1- q)+\frac{q-R^{2}}{1-q}\right)\right]\right. 
\tag{64}\] \[\left.\int\prod_{\mu}{\cal D}u_{\mu}\prod_{\mu}{\cal D}y_{\mu} \prod_{\mu a}{\cal D}y_{a\mu}\prod_{\gamma\mu a}\frac{{\rm d}\lambda_{a\mu}^{ \gamma}}{\sqrt{2\pi}}\right.\] \[\left.\exp\left[-\frac{1}{2}\sum_{\gamma\mu a}(\lambda_{a\mu}^{ \gamma}-{\cal C}_{a\mu})^{2}+(1-\nu\eta_{0})\beta\sum_{\gamma\mu a}{\rm sgn} \left(\lambda_{a\mu}^{\gamma}u_{\mu}\right)\right]\right.\] \[\left.\left.\left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a \neq c}x_{ac}\eta_{ac}{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn}(\lambda_{c \mu}^{\gamma})\right\}\right\rangle_{\mathbf{G}}\right\}.\] The average over the graph variables is, by defining the temperature \(\beta^{\prime}\equiv\beta/(1+\nu\eta_{0})\): \[\Upsilon \equiv \left\langle\exp\left\{\beta\sum_{\gamma\mu a}\sum_{a\neq c}x_{ ac}\eta_{ac}{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn}(\lambda_{c\mu}^{\gamma}) \right\}\right\rangle_{\mathbf{G}} \tag{65}\] \[= \int\prod_{ac}\frac{{\rm d}\eta_{ac}}{\sqrt{2\pi\Delta^{2}}}\exp \left[-\frac{(\eta_{ac}-\eta_{0})^{2}}{2\Delta^{2}}\right]\prod_{ac}\left\{1-p +p\,\prod_{\gamma\mu}\exp\left[\beta{\rm sgn}(\lambda_{a\mu}^{\gamma}){\rm sgn }(\lambda_{c\mu}^{\gamma})\right]\right\}\] \[= (1-p)^{M(M-1)}\int\prod_{ac}\frac{{\rm d}\eta_{ac}}{\sqrt{2\pi \Delta^{2}}}\exp\left[-\frac{(\eta_{ac}-\eta_{0})^{2}}{2\Delta^{2}}\right]\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\prod_{\gamma\mu}\left[\cosh( \beta\eta_{ac})+{\rm sgn}(\lambda_{a\mu}^{\gamma}\lambda_{c\mu}^{\gamma})\sinh (\beta\eta_{ac})\right]\right\}\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\cosh(\beta\eta_{ac})^{nP}\prod _{\gamma\mu}\left[1+\tanh(\beta\eta_{ac}){\rm sgn}\left(\lambda_{a\mu}^{ \gamma}\lambda_{c\mu}^{\gamma}\right)\right]\right\}.\] Observe that we are assuming that the number of neighbors, in average, must be \(\nu\ll M:\) \[\sum_{a}x_{ac}{\cal P}(x_{ac}) = p(M-1)=\frac{\nu}{2} \tag{66}\] \[\Upsilon = \left(1-\frac{\nu}{2(M-1)}\right)^{M(M-1)}\int\prod_{ac}\frac{ \mathrm{d}\eta_{ac}}{\sqrt{2\pi\Delta^{2}}}\exp\left[-\frac{(\eta_{ac}-\eta_{0}) ^{2}}{2\Delta^{2}}\right] \tag{67}\] \[\prod_{ac}\left\{1+\frac{p}{1-p}\,\cosh(\beta\eta_{ac})^{nP} \left[1+\tanh(\beta\eta_{ac})\sum_{\gamma\mu}\mathrm{sgn}\left(\lambda_{a\mu}^ {\gamma}\lambda_{\nu\rho}^{\gamma}\right)\right.\right.\] \[\left.\left.\qquad+\tanh(\beta\eta_{ac})^{2}\sum_{\langle\gamma_{ 1}\mu_{1};\gamma_{2}\mu_{2}\rangle}\mathrm{sgn}\left(\lambda_{a\mu_{1}}^{ \gamma_{1}}\lambda_{c\mu_{1}}^{\gamma_{1}}\right)\mathrm{sgn}\left(\lambda_{ a\mu_{2}}^{\gamma_{2}}\lambda_{c\mu_{2}}^{\gamma_{2}}\right)+\ldots\right]\right\}\] \[= \exp\left\{-\frac{\nu}{2}M\right.+\] \[+ \left.\frac{\nu}{2}M\sum_{\ell=0}^{nP}\left\langle\cosh(\beta \eta)^{nP}\tanh(\beta\eta)^{\ell}\right\rangle_{\eta}\sum_{\langle\gamma_{1} \mu_{1};\ldots;\gamma_{\ell}\mu_{\ell}\rangle}\left[\frac{1}{M}\sum_{a} \mathrm{sgn}\left(\lambda_{a\mu_{1}}^{\gamma_{1}}u_{\mu_{1}}\right)\ldots \mathrm{sgn}\left(\lambda_{a\mu_{\ell}}^{\gamma_{\ell}}u_{\mu_{\ell}}\right) \right]^{2}\right\}.\] Observe that \(\Upsilon\) is the part of the replicated partition function that accounts for the interaction between peers and the interaction between peers and graph. 
If \(\eta_{0}=\Delta=0\) then \(\Upsilon=1.\) We define \[\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}} \equiv \frac{1}{M}\sum_{a}\mathrm{sgn}\left(\lambda_{a\mu_{1}}^{\gamma_{ 1}}u_{\mu_{1}}\right)\ldots\mathrm{sgn}\left(\lambda_{a\mu_{\ell}}^{\gamma_{ \ell}}u_{\mu_{\ell}}\right) \tag{68}\] \[\varrho_{0} \equiv \frac{1}{M}\sum_{a}1, \tag{69}\] where \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) is the average level of agreement per individual, across issues and replicas and \(\varrho_{0}\), which is fancy way to write 1, will be left as a free parameter for the time being until we apply a variational technique (which will confirm its value, see below). Applying Laplace's method to the integrals involving the parameters \(\varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\) and their conjugates \(\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\), we can express the replicated partition function as: \[\overline{Z^{n}}(\beta) = \mathop{\mathrm{extr}}_{q,W,R,\left\{\varrho_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}},\tilde{\varrho}_{\mu_{1}\ldots\mu_{ \ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\right\}}\left\{\exp\left[\frac{nNM}{2} \left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)\right]\right. \tag{70}\] \[\left.\exp\left[-M\sum_{\ell=0}\sum_{\langle\gamma_{1}\mu_{1}; \ldots;\gamma_{\ell}\mu_{\ell}\rangle}\varrho_{\mu_{1}\ldots\mu_{\ell}}^{ \gamma_{1}\ldots\gamma_{\ell}}\tilde{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma _{1}\ldots\gamma_{\ell}}-\frac{\nu}{2}M+\right.\right.\] \[\left.\left.+\frac{\nu}{2}M\sum_{\ell=0}\left\langle\cosh(\beta \eta)^{nP}\tanh(\beta\eta)^{\ell}\right\rangle_{\eta}\sum_{\langle\gamma_{1} \mu_{1};\ldots;\gamma_{\ell}\mu_{\ell}\rangle}\left(\varrho_{\mu_{1}\ldots\mu_ {\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}\right)^{2}\right]\right.\] \[\left.\int\prod_{\mu}\mathcal{D}u_{\mu}\prod_{\mu}\mathcal{D}y_{ \mu}\left(\prod_{\mu}\mathcal{D}t_{\mu}\prod_{\gamma\mu a}\frac{\mathrm{d} \lambda_{\mu}^{\gamma}}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}\sum_{\gamma\mu}( \lambda_{\mu}^{\gamma}-\mathcal{C}_{\mu})^{2}+\right.\right.\right.\] \[\left.\left.\left.+(1-\nu\eta_{0})\beta\sum_{\gamma\mu}\mathrm{ sgn}\left(\lambda_{\mu_{1}}^{\gamma_{1}}u_{\mu_{1}}\right)\right]\right)^{M}\right\}\] where we have disregarded terms of \(O(n^{2})\), \(O(N^{-1})\) and \(O(M^{-1})\) in the argument of the exponential and now: \[\mathcal{C}_{\mu}\equiv\frac{\sqrt{W-R^{2}}y_{\mu}+Ru_{\mu}+\sqrt{q-W}t_{\mu}}{ \sqrt{1-q}}. \tag{71}\] Once more we consider the RS approach by introducing the distribution \(\pi(z)\) and its conjugate \(\hat{\pi}(z):\) \[\hat{\varrho}_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{ \ell}}=\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi}(s)\tanh^{\ell}( \beta s) \varrho_{\mu_{1}\ldots\mu_{\ell}}^{\gamma_{1}\ldots\gamma_{\ell}}=\int \mathrm{d}z\,\pi(z)\tanh^{\ell}(\beta z) \tag{72}\] \[\hat{\varrho}_{0}=\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi }(s) \varrho_{0}=\int\mathrm{d}z\,\pi(z) \tag{73}\] where equations (72) are the definitions of the fields \(\pi\) and \(\hat{\pi}\), and equations (73) are the normalization conditions they must satisfy. 
By using the symmetry of the RS supposition and by integrating over the variables \(\{\lambda_{\mu}^{\gamma}\}\), the Hubbard-Stratanovich variables \(u\), \(t\) and \(y\) and by applying the scaling condition \(P=\alpha N\), we have that the replicated partition function takes the form of: \[\overline{Z^{n}}(\beta) = \mathop{\rm extr}\limits_{q,R,\varrho_{0},\hat{\varrho}_{0},\pi, \hat{\pi}}\left\{\exp\left[-M\left(\frac{\nu}{2}-\hat{\varrho}_{0}+\varrho_{0} \hat{\varrho}_{0}-\frac{\nu}{2}\varrho_{0}^{2}\right)\right]\right. \tag{74}\] \[\left.\left(1+nMN\frac{\nu}{2}\varrho_{0}^{2}\alpha\left\langle \ln\cosh(\beta\eta)\right\rangle_{\eta}+\frac{nMN}{2}\left(\ln(1-q)+\frac{q-R^ {2}}{1-q}\right)-\right.\right.\] \[\left.\left.-nMN\alpha\mathcal{C}_{\hat{\pi}}\int\mathrm{d}z \,\mathrm{d}s\,\pi(z)\hat{\pi}(s)\ln\left[1+\tanh(\beta s)\tanh(\beta z) \right]+\right.\] \[\left.+\frac{\nu}{2}nMN\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2 }\,\pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z _{1})\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.\left.+nNM\alpha\mathrm{e}^{-\hat{\varrho}_{0}}\sum_{C=0}^ {\infty}\frac{\mathcal{C}_{\hat{\pi}}^{\ell}}{C!}\int_{-\infty}^{\infty}\prod _{\ell=1}^{C}\mathrm{d}s_{\ell}\,\hat{\pi}(s_{\ell})\;2\int_{-\infty}^{\infty }\mathcal{D}x\mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}}\right)\right.\right.\] \[\left.\left.\ln\left[\mathcal{H}\left(\sqrt{\frac{q}{1-q}}x \right)\mathrm{e}^{-\beta(1-\nu\eta_{0})}\prod_{\ell=1}^{C}[1-\tanh(\beta s_{ \ell})]+\right.\right.\] \[\left.\left.\left.+\mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x\right) \mathrm{e}^{\beta(1-\nu\eta_{0})}\prod_{\ell=1}^{C}[1+\tanh(\beta s_{\ell})] \right]\right)\right\}.\] Observe that during the integration process the dependency with respect to \(W\) disappears. By adding up the series, can be re-express as: \[\overline{Z^{n}}(\beta) = \mathop{\rm extr}\limits_{q,R,\varrho_{0},\hat{\varrho}_{0},\pi, \hat{\pi}}\left\{\exp\left[-M\left(\frac{\nu}{2}-\hat{\varrho}_{0}+\varrho_{0 }\hat{\varrho}_{0}-\frac{\nu}{2}\varrho_{0}^{2}\right)\right]\right. 
\tag{75}\] \[\left.\left(1+nMN\frac{\nu}{2}\varrho_{0}^{2}\alpha\left\langle \ln\cosh(\beta\eta)\right\rangle_{\eta}+\frac{nMN}{2}\left(\ln(1-q)+\frac{q-R ^{2}}{1-q}\right)-\right.\right.\] \[\left.\left.-nMN\alpha\mathcal{C}_{\hat{\pi}}\int\mathrm{d}z\, \mathrm{d}s\,\pi(z)\hat{\pi}(s)\ln\left[1+\tanh(\beta s)\tanh(\beta z)\right]+\right.\] \[\left.+\frac{\nu}{2}nMN\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2 }\,\pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z _{1})\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.+nNM\alpha\left(-\beta(1-\nu\eta_{0})+\mathrm{e}^{-\hat{ \varrho}_{0}}+\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{\pi}(s)\ln[1+ \tanh(-\beta s)]\right)\right.\] \[\left.\left.+2nNM\alpha\int_{-\infty}^{\infty}\mathcal{D}x \mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}}\right)\right.\right.\] \[\left.\left.\int\frac{\mathrm{d}y}{2\pi}\int\mathrm{d}\hat{y} \mathrm{e}^{-iy\hat{y}}\exp\left[\mathcal{C}_{\hat{\pi}}\int\mathrm{d}s\,\hat{ \pi}(s)\mathrm{e}^{i\hat{y}s}-\hat{\varrho}_{0}\right]\ln\left[1+\left(\mathrm{ e}^{2\beta(1-\nu\eta_{0}+y)}-1\right)\mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x \right)\right]\right)\right\}.\] Observe that \[\partial_{\varrho_{0}}\overline{Z^{n}} = \left(-M\hat{\varrho}_{0}+\nu M\varrho_{0}+O(n)\right)\overline{Z^{n}} \tag{76}\] \[\partial_{\hat{\varrho}_{0}}\overline{Z^{n}} = \left(M-M\varrho_{0}+O(n)\right)\overline{Z^{n}} \tag{77}\] which implies in the extreme that \(\varrho_{0}=1\) (as it was expected from equation (69)) and \(\hat{\varrho}_{0}=\nu\). From this last equation we have that \(\mathcal{C}_{\hat{\pi}}=\nu\). By defining the weight function: \[\mathsf{P}(y|\hat{\pi})\equiv\int\frac{\mathrm{d}\hat{y}}{2\pi}\mathrm{e}^{-iy \hat{y}}\exp\left[\nu\left(\int\mathrm{d}s\,\hat{\pi}(s)\mathrm{e}^{i\hat{y}s }-1\right)\right], \tag{78}\] and the distribution \[\mathcal{P}(x|q,R)\equiv 2\mathcal{N}(x)\mathcal{H}\left(-\frac{Rx}{\sqrt{q-R^{2}}} \right), \tag{79}\] the free energy functional \(F\) can be defined as \[F \equiv T\frac{1-\overline{Z^{n}}(\beta)}{nNM} \tag{80}\] \[= T\left\{-\alpha\left(-\beta(1-\nu\eta_{0})+\mathrm{e}^{-\nu}+ \frac{\nu}{2}\left\langle\ln\cosh(\beta\eta)\right\rangle_{\eta}\right)-\frac{1 }{2}\left(\ln(1-q)+\frac{q-R^{2}}{1-q}\right)+\right.\] \[\left.-\frac{\nu}{2}\alpha\int\mathrm{d}z_{1}\,\mathrm{d}z_{2}\, \pi(z_{1})\pi(z_{2})\,\left\langle\ln\left(1+\tanh(\beta\eta)\tanh(\beta z_{1} )\tanh(\beta z_{2})\right)\right\rangle_{\eta}+\right.\] \[\left.+\nu\alpha\int\mathrm{d}z\,\mathrm{d}s\,\pi(z)\hat{\pi}(s) \ln\left[\frac{1+\tanh(\beta s)\tanh(\beta z)}{1-\tanh(\beta s)}\right]- \alpha\beta\int\mathrm{d}y\mathsf{P}(y|\hat{\pi})(1-\nu\eta_{0}+y)\right.\] \[\left.-\alpha\int\mathrm{d}x\mathcal{P}_{Rq}(x)\,\int\mathrm{d} y\mathsf{P}(y|\hat{\pi})\ln\left[\mathrm{e}^{-\beta(1-\nu\eta_{0}+y)}\mathcal{H} \left(\sqrt{\frac{q}{1-q}}x\right)+\mathrm{e}^{\beta(1-\nu\eta_{0}+y)} \mathcal{H}\left(-\sqrt{\frac{q}{1-q}}x\right)\right]\right\}\] density of the system can be expressed as \(f=\mathrm{ext}_{q,R,\pi,\hat{\pi}}\,F.\) ### Fourier Transform of the Distributions By using the Fourier Transform in the saddle point equations (25) and (26) we define the functions: \[\hat{\phi}(\omega) \equiv \int\mathrm{d}s\,\hat{\pi}(s)\mathrm{e}^{is\omega} \tag{81}\] \[= 1+i\mathcal{I}_{0}\omega-\frac{\mathcal{R}_{0}}{2}\omega^{2}+O( \omega^{3})\] \[\phi(\omega) \equiv\int\mathrm{d}s\,\pi(s)\mathrm{e}^{is\omega}. 
\tag{82}\] Thus: \[\mathsf{P}(y|\hat{\pi}) \equiv \int\frac{\mathrm{d}\omega}{2\pi}\,\mathrm{e}^{-ig\omega+\hat{ \phi}(\omega)-\nu} \tag{83}\] \[\approx \int\frac{\mathrm{d}\omega}{2\pi}\,\exp\left[-\frac{\nu\mathcal{ R}_{0}}{2}\omega^{2}+i(\nu\mathcal{I}_{0}-y)\omega\right]=\mathcal{N}\left(y\,|\nu \mathcal{I}_{0},\nu\mathcal{R}_{0}\right),\] which is consistent with (19) and (20). Consider the definition (29), then the Fourier Transform of \(\pi(s)\) can be expressed as: \[\phi(\omega) = \tag{84}\] \[\approx \int\mathrm{d}y\mathcal{N}\left(y\,|\nu\mathcal{I}_{0},\nu \mathcal{R}_{0}\,\right)\mathrm{e}^{is\omega}\left\langle\exp\left[i\omega \left(1-\nu\eta_{0}+\frac{\Lambda(x)}{2}x^{2}\right)\right]\right\rangle_{x},\] where the expectation over \(x\) is approximated by: \[\left\langle\exp\left\{i\omega\left[1-\nu\eta_{0}+\frac{\Lambda(x )}{2}\right]\right\}\right\rangle_{x} \approx 2\int_{-\infty}^{\infty}\mathcal{D}x\,\Theta(xR)\exp\left[i \omega\left(1-\nu\eta_{0}+\frac{\Lambda(x)}{2}x^{2}\right)\right] \tag{85}\] \[\approx 2\int_{0}^{\infty}\frac{\mathrm{d}x}{\sqrt{2\pi}}\exp\left[- \frac{x^{2}}{2}+i\omega\left(1-\nu\eta_{0}+\frac{\Lambda(R)}{2}x^{2}\right)\right]\] \[\approx \int_{-\infty}^{\infty}\frac{\mathrm{d}x}{\sqrt{2\pi}}\exp\left[ -[1-i\omega\Lambda(R)]\frac{x^{2}}{2}+is\left(1-\nu\eta_{0}\right)\right]\] \[\approx \frac{1}{\sqrt{1-i\omega\Lambda}}\exp\left[i(1-\nu\eta_{0})\omega\right]\] \[\approx \exp\left[-\frac{3}{8}\Lambda^{2}\omega^{2}+i\left(1-\nu\eta_{0}+ \frac{\Lambda(R)}{2}\right)\omega\right],\] where (85) is reached by assuming that \(\Lambda\) sufficiently small. From this point onwards we will refer to the parameter \(\Lambda(R)\) as the amplified thermal noise. In such a case the function \(\phi\) is Gaussian: \[\phi(s)\approx\exp\left[-\frac{\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2}}{2}s^{2 }+i\left(1-\nu\eta_{0}+\nu{\cal I}_{0}+\frac{\Lambda(R)}{2}\right)s\right], \tag{86}\] and so \[\pi(z)\approx\frac{1}{\sqrt{2\pi\left(\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2 }\right)}}\exp\left\{-\frac{\left(z-1+\nu\eta_{0}-\nu{\cal I}_{0}-\frac{1}{2} \Lambda(R)\right)^{2}}{2\left(\nu{\cal R}_{0}+\frac{3}{4}\Lambda(R)^{2}\right) }\right\}. 
\tag{87}\] The argument in the Dirac's delta function in (26) is mildly sensitive to changes in temperature, therefore we can approximate it by: \[\beta^{-1}{\rm arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right]\ \approx\ \frac{|\eta+z|-|\eta-z|}{2}={\rm sgn}(z\eta)\min\{|\eta|,|z|\}, \tag{88}\] which allows us to write the expression of the Fourier transform of the distribution \(\hat{\pi}\) as: \[\hat{\phi}(\omega)\ =\ \int{\rm d}s\phi(s)\left\langle\int\frac{{\rm d}z}{2\pi} \,\exp\left(-isz+i\omega{\rm sgn}(z\eta)\min\{|\eta|,|z|\}\right)\right\rangle _{\eta}, \tag{89}\] where the expectation over the social strengths can be demonstrated to be, disregarding terms of \(O(\eta_{0}\Delta^{2})\): \[{\cal Q} \equiv\ \left\langle\int\frac{{\rm d}z}{2\pi}\,\exp\left(-isz+i \omega\beta^{-1}\,{\rm arctanh}\left[\tanh(\beta\eta)\tanh(\beta z)\right] \right)\right\rangle_{\eta} \tag{90}\] \[\approx\ \delta(s)+\frac{\omega}{\pi}\exp\left(-\frac{\Delta^{2}s^{2}}{2} \right)\frac{\eta_{0}}{s}-\omega^{2}\frac{\eta_{0}^{2}+\Delta^{2}}{2}\delta(s).\] Expression (90), together with (89) and (81) produce the following expressions for the moments of \(\nu^{-1}\hat{\pi}\) : \[1 = \phi(0) \tag{91}\] \[{\cal I}_{0} = -\frac{i\eta_{0}}{\pi}\int{\rm d}s\frac{\phi(s)}{s}{\rm e}^{- \frac{\Delta^{2}s^{2}}{2}}\] (92) \[{\cal R}_{0} = \eta_{0}^{2}+\Delta^{2}, \tag{93}\] which implies that \[{\cal I}_{0}^{\star}=\eta_{0}{\rm erf}\left(\frac{1-\nu\eta_{0}+\nu{\cal I}_{ 0}^{\star}+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+\Delta^{2})+ \frac{3}{4}\Lambda(R)^{2}\right]}}\right). \tag{94}\] The equation (94) has one, two or three solutions depending on the value of the parameters \(\nu\), \(\eta_{0}\), \(\Delta\) and the function \(\Lambda(R).\) It is easy to see that for the line: \[\eta_{0}=\frac{2+\Lambda(R)}{2\nu} \tag{95}\] \({\cal I}_{0}^{\star}=0\) is a solution. Thus we expect that for \(\nu\eta_{0}>1+\Lambda(R)/2\), \({\cal I}_{0}^{\star}\approx-(\eta_{0}-\epsilon)\) is a solution, and for \(\nu\eta_{0}<1+\Lambda(R)/2\), \({\cal I}_{0}\approx\eta_{0}-\epsilon\) is a solution (in both cases \(0<\epsilon<\eta_{0}\) is a suitable positive number). If the derivative with respect to \({\cal I}_{0}\) of the right-hand-side of (94) evaluated at \({\cal I}_{0}^{\star}\) is equal to 1, and \({\cal I}_{0}^{\star}\) is also a solution of (94), then (94) has two solutions, \({\cal I}_{0}^{\star}\) and \(\eta_{0}-\epsilon\) or \(-(\eta_{0}-\epsilon)\) depending on whether \(\nu\eta_{0}<1+\Lambda(R)/2\) or \(\nu\eta_{0}>1+\Lambda(R)/2\), respectively. If \(\eta_{-}(\Lambda)<\eta_{0}<\eta_{+}(\Lambda)\), where: \[\eta_{\pm} = \frac{2\mp|\Lambda|}{2\nu}\mp\left[\frac{1}{\nu}\sqrt{\left[\nu( \eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]\log\left(\frac{2} {\pi}\frac{\nu^{2}\eta_{\pm}^{2}}{\nu(\eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4} \Lambda(R)^{2}}\right)}-\right. \tag{96}\] \[\left.-\eta_{\pm}{\rm erf}\left(\sqrt{\log\left(\sqrt{\frac{2}{ \pi}\frac{\nu^{2}\eta_{\pm}^{2}}{\nu(\eta_{\pm}^{2}+\Delta^{2})+\frac{3}{4} \Lambda(R)^{2}}}\right)}\right)\right]\] are the superior (\(\eta_{+}\)) and inferior (\(\eta_{-}\)) limits to the area in the plane (\(|\Lambda|,\eta_{0}\)) where three solutions to the equation (94) can be found. Let as define the set \(\mathbb{A}:=\{(|\Lambda|,\eta_{0}):\eta_{-}(\Lambda)<\eta_{0}<\eta_{+}(\Lambda)\}\). (The use of \(|\Lambda|\) instead of \(\Lambda\) is due to the fact that realizable solutions must satisfy \(\mathrm{sgn}(R)=\mathrm{sgn}(1-\nu\eta_{0})\). 
This point will be clarified when the equations involving \(R\) and \(q\) are contemplated. See bellow.) It is important to note that for very high values of the pressure \(\beta\) we have that: \[\nu\eta_{\pm}\approx\frac{1}{1\pm\sqrt{\frac{1}{\nu}\log\left(\frac{2\nu}{\pi }\right)}\mp\mathrm{erf}\left(\sqrt{\frac{1}{2}\log\left(\frac{2\nu}{\pi} \right)}\right)}, \tag{97}\] which implies that the segment (at very high pressure) of the coexistence of states grows with \(\sqrt{\nu}\). From equation (94) we conclude that, to satisfy the saddle-point equations (25,26), there must be a set of values of \(\eta\) such that, for a very high pressure \(\beta\) there coexist states with different attitudes. For all the points \((\Lambda,\eta_{0})\notin\mathbb{A}\), we have that both conditions (92) and (93) are satisfied for a distribution: \[\hat{\pi}(s)\ =\ \mathcal{N}\left(s\left|\mathcal{I}_{0}^{\star},\eta_{0}^{2}-( \mathcal{I}_{0}^{\star})^{2}+\Delta^{2}\right.\right) \tag{98}\] where \(\mathcal{I}_{0}^{\star}\) is the only solution of (94). In this case we also have that: \[\mathsf{P}(y|\hat{\pi})\approx\mathcal{N}\left(y\left|\nu\mathcal{I}_{0}^{ \star},\nu(\eta_{0}^{2}+\Delta^{2})\right.\right). \tag{99}\] For the points \((\Lambda,\eta_{0})\in\mathbb{A}\) we propose the following form for \(\hat{\pi}(s)=\mathcal{Z}^{-1}\exp[-\Phi(s)]\), where \(\mathcal{Z}\) is a normalization constant and the function \(\Phi(s)\) is defined as: \[\Phi(s) \coloneqq \frac{s^{2}}{2\Delta^{2}}-\frac{\eta_{0}}{\nu}\frac{1-\nu\eta_{ 0}+\nu s+\frac{1}{2}\Lambda(R)}{\Delta^{2}}\mathrm{erf}\left(\frac{1-\nu\eta_ {0}+\nu s+\frac{1}{2}\Lambda(R)}{\sqrt{2\left[\nu(\eta_{0}^{2}+\Delta^{2})+ \frac{3}{4}\Lambda(R)^{2}\right]}}\right)- \tag{100}\] \[-2\frac{\eta_{0}}{\nu}\frac{\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3 }{4}\Lambda(R)^{2}}{\Delta^{2}}\mathcal{N}\left(\nu s\left|\nu\eta_{0}-1- \frac{1}{2}\Lambda(R),\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2} \right.\right).\] Figure 7: Phase diagram of the system in terms of the parameters \(\nu\eta_{0}\) and \(|\Lambda|\). There are two phases separated by the line \(\nu\eta_{0}=1\). For \(\nu\eta_{0}<1\) we have that \(R>0\) and the average consensus is in favor of \(B\). For \(\nu\eta_{0}>1\)\(R<0\) and the average position of the agents is to form local alliances against the president \(B\). In all the points of the plain outside the region \(\mathbb{A}\), the distribution describing the position of the neighborhood, given by \(\hat{\pi}(z)\), is sharply picked at \(+\eta_{0}\), for the conservative position, i.e. \(\nu\eta_{0}<1\), or at \(-\eta_{0}\), for the polarized position, i.e. \(\nu\eta_{0}>1\). In region \(\mathbb{A}\) we have the same phase separation at \(\nu\eta_{0}=1\) but the contribution from the neighborhood is a mixture of a polarized component plus a conservative component. We also observe that the vertexes of \(\mathbb{A}\) are (0,1.651), (0.411,1) and (0,0.717). Observe that \(\Phi^{\prime}(\mathcal{I}_{0}^{*})=0\) is identical to (94) and, for all points in \(\mathbb{A}\) this equations have three roots \(\mathcal{I}_{-}^{*}<\mathcal{I}_{0}^{*}<\mathcal{I}_{+}^{*}\). The asymptotic behavior of \(\lim_{s\rightarrow\pm\infty}s^{-2}\Phi(s)>0\) indicates that \(\mathcal{I}_{\pm}^{*}\) are minima with \(\mathcal{I}_{0}^{*}\) an intermediate maximum. 
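Numerically, the stationary points of \(\Phi\), i.e. the roots of equation (94), can be located by scanning for sign changes of \(s\mapsto\eta_{0}\,\mathrm{erf}(\cdot)-s\) and refining each bracket; the minimal sketch below is an editorial illustration with hypothetical parameter values, not a reproduction of the values used in the paper.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import brentq

def roots_of_94(nu, eta0, Delta, Lam, ngrid=2001):
    """Roots I* of equation (94): I = eta0 * erf((1 - nu*eta0 + nu*I + Lam/2) / s),
    with s = sqrt(2 * [nu*(eta0**2 + Delta**2) + 0.75 * Lam**2])."""
    s = np.sqrt(2.0 * (nu * (eta0**2 + Delta**2) + 0.75 * Lam**2))
    f = lambda I: eta0 * erf((1.0 - nu * eta0 + nu * I + 0.5 * Lam) / s) - I
    # Every solution satisfies |I| <= eta0, so bracket on a slightly larger interval.
    grid = np.linspace(-1.1 * eta0, 1.1 * eta0, ngrid)
    vals = f(grid)
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa == 0.0:
            roots.append(float(a))
        elif fa * fb < 0.0:
            roots.append(brentq(f, a, b))
    return sorted(roots)

# Inside the region A the equation has three roots I_- < I_0 < I_+; outside, only one.
# Hypothetical parameter values (for illustration only):
print(roots_of_94(nu=10.0, eta0=0.10, Delta=0.01, Lam=0.2))
```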
Let us compute the second derivative of \(\Phi\) at the minima: \[\Phi^{\prime\prime}(\mathcal{I}_{\pm}^{*}) = \left.\frac{1}{\Delta^{2}}\left(1-\eta_{0}\ \frac{\mathrm{d}}{\mathrm{d}s} \mathrm{erf}\left(\frac{1-\nu\eta_{0}+\nu s+\frac{1}{2}\Lambda(R)}{\sqrt{2 \left[\nu(\eta_{0}^{2}+\Delta^{2})+\frac{3}{4}\Lambda(R)^{2}\right]}}\right) \right|_{s=\mathcal{I}_{\pm}^{*}}\right), \tag{101}\] therefore \(\Phi^{\prime\prime}(\mathcal{I}_{\pm}^{*})=\mathcal{L}_{\pm}^{2}\Delta^{-2}\), with: \[\mathcal{L}_{\pm}\coloneqq\sqrt{1-2\nu\eta_{0}\mathcal{N}\left(\nu\mathcal{I} _{\pm}^{*}\left|\nu\eta_{0}-1-\frac{1}{2}\Lambda(R),\nu(\eta_{0}^{2}+\Delta^{ 2})+\frac{3}{4}\Lambda(R)^{2}\right.\right)}, \tag{102}\] which is larger than zero for all \((\Lambda,\eta_{0})\in\mathbb{A}\). Thus: \[\hat{\pi}(s) \approx h_{+}\mathcal{N}\left(s\left|\mathcal{I}_{+}^{*},\Delta^{2} \right.\right)+h_{-}\mathcal{N}\left(s\left|\mathcal{I}_{-}^{*},\Delta^{2}\right.\right) \tag{103}\] \[h_{\pm} \equiv \frac{1}{2}\pm\frac{1}{2}\tanh\left(\frac{\Phi(\mathcal{I}_{+}^{* })-\Phi(\mathcal{I}_{-}^{*})}{4}-\frac{1}{2}\ln\frac{\mathcal{L}_{+}}{ \mathcal{L}_{-}}\right)\] (104) \[\int\mathrm{d}s\hat{\pi}(s)s \approx h_{+}\mathcal{I}_{+}^{*}+h_{-}\mathcal{I}_{-}^{*}\] (105) \[\int\mathrm{d}s\hat{\pi}(s)s^{2} \approx h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{-}^{*})^{2}+ \Delta^{2}, \tag{106}\] observe that the expectation (104) represents a convex combination between the solutions to the equation (94), and \(\hat{\pi}^{\prime}(\mathcal{I}_{\pm}^{*})\approx 0\). In order to compute the distribution \(\mathsf{P}(y|\hat{\pi})\) we first need to compute the Fourier transform of \(\hat{\pi}\): \[\hat{\phi}(\omega) \approx h_{+}\exp\left[-\frac{\Delta^{2}}{2}\omega^{2}+i\omega\mathcal{I }_{+}^{*}\right]+h_{-}\exp\left[-\frac{\Delta^{2}}{2}\omega^{2}+i\omega \mathcal{I}_{-}^{*}\right], \tag{107}\] and if we define \[\Upsilon(\omega,y) \coloneqq -iy\omega-\nu+\nu\int\mathrm{d}s\frac{\exp[-\Phi(s)+is\omega]}{ \int\mathrm{d}s^{\prime}\,\exp[-\Phi(s^{\prime})]} \tag{108}\] \[\approx -iy\omega-\nu+\nu\left\{h_{+}\exp\left[-\frac{\Delta^{2}}{2} \omega^{2}+i\omega\mathcal{I}_{+}^{*}\right]+h_{-}\exp\left[-\frac{\Delta^{2} }{2}\omega^{2}+i\omega\mathcal{I}_{-}^{*}\right]\right\}\] \[\approx -iy\omega+\nu\left\{i\omega(h_{+}\mathcal{I}_{+}^{*}+h_{-} \mathcal{I}_{-}^{*})-\frac{h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{ -}^{*})^{2}+\Delta^{2}}{2}\omega^{2}\right\}+O(\omega^{3})\] \[\mathsf{P}(y|\hat{\pi}) = \int\frac{\mathrm{d}\omega}{2\pi}\,\mathrm{e}^{\Upsilon(\omega,y)} \tag{109}\] \[\approx \mathcal{N}\left(y\left|\nu(h_{+}\mathcal{I}_{+}^{*}+h_{-} \mathcal{I}_{-}^{*}),\nu[h_{+}(\mathcal{I}_{+}^{*})^{2}+h_{-}(\mathcal{I}_{-}^ {*})^{2}+\Delta^{2}]\right.\right)\] These distributions should be applied depending on the value of \(\eta_{0}\) and \(\Lambda(R)\), following the diagram presented in figure 7.
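For completeness, the weights \(h_{\pm}\) and the mixed moment entering (38) follow directly from the printed expressions; the sketch below transcribes equations (100), (102) and (104) literally (assuming, as guaranteed inside \(\mathbb{A}\), that the argument of the square root in (102) is positive) and takes the two stable roots of (94), e.g. as returned by the previous sketch, as inputs.

```python
import numpy as np
from scipy.special import erf

def gauss_pdf(x, mu, var):
    return np.exp(-(x - mu)**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def mixture_weights(I_plus, I_minus, nu, eta0, Delta, Lam):
    """Weights h_+ / h_- of the mixed state and the mixed moment of equation (38),
    transcribing equations (100), (102) and (104) as printed."""
    var = nu * (eta0**2 + Delta**2) + 0.75 * Lam**2   # variance entering (100)-(102)
    mu = nu * eta0 - 1.0 - 0.5 * Lam                  # mean of the Gaussian in (100)

    def Phi(s):                                       # equation (100)
        arg = 1.0 - nu * eta0 + nu * s + 0.5 * Lam
        return (s**2 / (2.0 * Delta**2)
                - (eta0 / nu) * arg / Delta**2 * erf(arg / np.sqrt(2.0 * var))
                - 2.0 * (eta0 / nu) * var / Delta**2 * gauss_pdf(nu * s, mu, var))

    def L(s):                                         # equation (102); positive inside A
        return np.sqrt(1.0 - 2.0 * nu * eta0 * gauss_pdf(nu * s, mu, var))

    arg = (Phi(I_plus) - Phi(I_minus)) / 4.0 - 0.5 * np.log(L(I_plus) / L(I_minus))
    h_plus = 0.5 + 0.5 * np.tanh(arg)                 # equation (104)
    h_minus = 1.0 - h_plus
    I_mixed = h_plus * I_plus + h_minus * I_minus     # equation (38)
    return h_plus, h_minus, I_mixed
```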
Motivated by the study of presidential democracies in modern South America, we use statistical mechanics to explore the collective, coordinated behavior of politicians in a legislative chamber, behavior that can lead to the removal of a president from office. Representing the legislators by neural networks, we find that as the effective number of items on the presidential agenda increases, the opportunities for dialogue between parties tend to decrease. When the president's popular support also declines, this tendency can trigger an impeachment process.
2305.20080
Findings of the VarDial Evaluation Campaign 2023
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. Three separate shared tasks were included this year: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and Discriminating Between Similar Languages -- Speech (DSL-S). All three tasks were organized for the first time this year.
Noëmi Aepli, Çağrı Çöltekin, Rob Van Der Goot, Tommi Jauhiainen, Mourhaf Kazzaz, Nikola Ljubešić, Kai North, Barbara Plank, Yves Scherrer, Marcos Zampieri
2023-05-31T17:55:21
http://arxiv.org/abs/2305.20080v1
# Findings of the VarDial Evaluation Campaign 2023 ###### Abstract This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. Three separate shared tasks were included this year: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and Discriminating Between Similar Languages -- Speech (DSL-S). All three tasks were organized for the first time this year. ## 1 Introduction The workshop series on _NLP for Similar Languages, Varieties and Dialects_ (VarDial), traditionally co-located with international conferences, has reached its tenth edition. Since the first edition, VarDial has hosted shared tasks on various topics such as language and dialect identification, morphosyntactic tagging, question answering, and cross-lingual dependency parsing. The shared tasks have featured many languages and dialects from different families and data from various sources, genres, and domains (Aepli et al., 2022; Chakravarthi et al., 2021; Gaman et al., 2020; Zampieri et al., 2019, 2018, 2017; Malmasi et al., 2016; Zampieri et al., 2015, 2014). As part of the VarDial Evaluation Campaign 2023, we offered three shared tasks which we present in this paper: * **SID4LR:** Slot and intent detection for low-resource language varieties1 Footnote 1: Task organizers: Noëmi Aepli, Rob van der Goot, Barbara Plank, Yves Scherrer. * **DSL-TL:** Discriminating Between Similar Languages - True Labels2 Footnote 2: Task organizers: Marcos Zampieri, Kai North, Tommi Jauhiainen. * **DSL-S:** Discriminating Between Similar Languages - Speech3 Footnote 3: Task organizers: Çağrı Çöltekin, Mourhaf Kazzaz, Tommi Jauhiainen, Nikola Ljubešić. DSL-TL and DSL-S continue the long line of language and dialect identification (Jauhiainen et al., 2019) shared tasks at VarDial, whereas SID4LR features a task novel to the evaluation campaigns. This overview paper is structured as follows: in Section 2, we briefly introduce the three shared tasks. Section 3 presents the teams that submitted systems to the shared tasks. Each task is then discussed in detail, focusing on the data, the participants' approaches, and the obtained results. Section 4 is dedicated to SID4LR, Section 5 to DSL-TL, and Section 6 to DSL-S. ## 2 Shared Tasks at VarDial 2023 The evaluation campaign took place in January - February 2023. Due to the ACL placing the workshop at the EACL conference in early May, the schedule from the shared tasks' first announcement to completion was relatively tight. The call for participation in the shared tasks was first published in early January, the training data sets for the shared tasks were released on January 23rd, and the results were due to be submitted on February 27th.4 Footnote 4: [https://sites.google.com/view/vardia](https://sites.google.com/view/vardia) ### SID for Low-resource Language Varieties (SID4LR) The SID4LR shared task focused on Slot and Intent Detection (SID) for digital assistant data in three low-resource language varieties: Swiss German (GSW) from the city of Bern, South Tyrolean (DE-ST), and Neapolitan (NAP). Intent detection is the task of automatically classifying the intent of an utterance and slot detection aims at finding the relevant (labeled) span. Figure 1 illustrates these two tasks with an example. 
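For concreteness, slot and intent annotations of this kind are commonly represented as a token sequence with BIO slot tags plus a single intent label per utterance; the toy record below only illustrates the format (the utterance and its tokenization are hypothetical, while the label names follow Figure 1).

```python
# Hypothetical SID instance in BIO format (utterance and tokenization invented for
# illustration; the labels datetime, reminder/todo, reminder/set_reminder follow Figure 1).
example = {
    "text": "remind me to call mom tomorrow at 8 pm",
    "tokens": ["remind", "me", "to", "call", "mom", "tomorrow", "at", "8", "pm"],
    "slots": ["O", "O", "O", "B-reminder/todo", "I-reminder/todo",
              "B-datetime", "I-datetime", "I-datetime", "I-datetime"],
    "intent": "reminder/set_reminder",
}
assert len(example["tokens"]) == len(example["slots"])
```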
The objective of this shared task is to address the following question: _How can we best do zero-shot transfer to low-resource language varieties without standard orthography?_ The xSID-0.4 corpus5, which includes data from both Snips Coucke et al. (2018) and Facebook Schuster et al. (2019), constitutes the training data, providing labeled information for slot and intent detection in 13 different languages. The original training data is in English, but we also provided automatic translations of the training data into German, Italian, and other languages. These translations are obtained with the Fairseq library Ott et al. (2019), using spoken data for training (more details in van der Goot et al. (2021)). Bleu scores Papineni et al. (2002) were 25.93 and 44.73 for respectively German and Italian. Slot label annotations were transferred using the attention weights. Participants were allowed to use other data to train on as long as it was not annotated for SID in the target languages. Specifically, the following resources were allowed: Footnote 5: [https://bitbucket.org/robvanderg/sid](https://bitbucket.org/robvanderg/sid) 1. annotated data from other (related and unrelated) languages in the xSID-0.4 corpus; 2. raw text data from the target languages, if available (e.g., Wikipedia, web crawls); 3. pre-trained language models containing data from the target languages. It was not mandatory for the participants to provide systems for all tasks and languages; they had the option to only take part in a specific subset. We used the standard evaluation metrics for these tasks, namely the span F1 score for slots and accuracy for intents. ### Discriminating Between Similar Languages - True Labels (DSL-TL) Discriminating between similar languages (e.g., Croatian and Serbian) and national language varieties (e.g., Brazilian and European Portuguese) has been a popular topic at VarDial since its first edition. The DSL shared tasks organized from 2014 to 2017 Zampieri et al. (2017); Malmasi et al. (2016); Zampieri et al. (2015, 2014) have addressed this issue by providing participants with the DSL Corpus Collection (DSLCC) Tan et al. (2014), a collection of journalistic texts containing texts written in groups of similar languages (e.g., Indonesian and Malay) and language varieties (e.g., Brazilian and European Portuguese).6 The DSLCC was compiled assuming each instance's gold label is determined by where the text is retrieved from. While this is a straightforward and primarily accurate practical assumption, previous research Goutte et al. (2016) has shown the limitations of this problem formulation as some texts may present no linguistic marker that allows systems or native speakers to discriminate between two very similar languages or language varieties. Footnote 6: [http://ttg.uni-saarland.de/resources/DSLCC/](http://ttg.uni-saarland.de/resources/DSLCC/) At VarDial 2023, we tackle this important limitation by introducing the DSL True Labels (DSL-TL) shared task. DSL-TL provided participants with the DSL-TL dataset Zampieri et al. (2023), the first human-annotated language variety identification dataset where the sentences can belong to several varieties simultaneously. The DSL-TL dataset contains newspaper texts annotated by multiple native speakers of the included language and language varieties, namely English American and British varieties, Portuguese Brazilian and European varieties), and Spanish (Argentinian and Peninsular varieties). 
More details on the DSL-TL shared task and dataset are presented in Section 5. Figure 1: Example of the SID tasks. The **three target languages (NAP, GSW, DE-ST)** are in bold, the corresponding high-resource languages (DE and IT) and the translation (EN) are included for comparison. The _slot_ annotations are coloured: datetime and reminder/todo. The _intent_ for this sentence is reminder/set_reminder. ### Discriminating Between Similar Languages - Speech (DSL-S) In the DSL-S 2023 shared task, participants were using the training, and the development sets from the Mozilla Common Voice (CV, Ardila et al., 2020) to develop a language identifier for speech.7 The nine languages selected for the task come from four different subgroups of Indo-European or Uralic language families (Swedish, Norwegian Nynorsk, Danish, Finnish, Estonian, Moksha, Erzya, Russian, and Ukrainian). Footnote 7: Further information available at: https://dsl-s.g \begin{table} \begin{tabular}{l|c c c c} **Team** & **SID4LR** & **DSL-TL** & **DSL-S** & **System Description Paper** \\ \hline UBC & ✓ & & & Kwon et al. (2023) \\ Notre Dame & ✓ & & & Srivastava and Chiang (2023) \\ VaidyaKane & & ✓ & & Vaidya and Kane (2023) \\ ssl & & ✓ & & Hohl and Shim (2023) \\ UnibucNLP & & ✓ & & Gaman (2023) \\ SATLab & & ✓ & & \\ \end{tabular} \end{table} Table 1: The teams that participated in the VarDial Evaluation Campaign 2023. lation, and pre-training on the target languages. For the latter, they made use of additional external data from various sources for all three target languages for the training. Notre Dame:Team Notre Dame (Srivastava and Chiang, 2023) submitted a research paper to the VarDial workshop, within which they also described their participation in the intent detection subtask. The team applied zero-shot methods, i.e., they did not use any data from the target language in the training process. They fine-tuned monolingual language models9 with noise-induced data. The noising technique they applied is similar to that of Aepli and Sennrich (2022) with three main differences: they 1) add an additional noise type: _swapping_ between adjacent letters; 2) they employ higher levels of noise and include multiple copies of the fine-tuning data; and 3) remove the step of continued pre-training to avoid using any target language data. Footnote 9: German BERT: [https://huggingface.co/dbm](https://huggingface.co/dbm) dz/bert-base-german-uncased and Italian BERT: [https://huggingface.co/dbmdz/bert-base-i](https://huggingface.co/dbmdz/bert-base-i) Italian-uncased Baseline:The baseline we provided is the same as in the original xSID paper, trained on the English data, with an updated version of MaChAmp (van der Goot et al., 2021). The model uses an mBERT encoder and a separate decoder head for each task, one for slot detection (with a CRF layer) and one for intent classification. ### Results We evaluated the submitted systems according to accuracy for intents and according to the span F1 score for slots (where both span and label must match exactly). Table 2 contains the scores. For intent classification, the winner for all three languages is the team Notre Dame. Both teams beat the baseline by a large margin. All systems reached the highest scores on DE-ST and the lowest scores on GSW, but both participating teams managed to significantly close the gaps between the languages compared to the baseline. For slot detection, the UBC team outperformed the baseline for DE-ST and GSW but not for NAP. 
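For reference, the scores in Table 2 follow the standard definitions: a predicted slot counts as correct only if both its span and its label match a gold slot exactly, and intents are scored by plain accuracy. The sketch below is a simplified re-implementation of this evaluation, not the official script; mirroring the evaluation-script behaviour discussed in the next paragraph, spans that start with an I- tag are ignored.

```python
def extract_spans(tags):
    """Collect (start, end, label) spans from a BIO tag sequence; spans that start
    with an I- tag are ignored, and a label change closes the current span."""
    spans, start, label = set(), None, None
    for i, tag in enumerate(list(tags) + ["O"]):      # sentinel flushes the last span
        opens = tag.startswith("B-")
        breaks = opens or tag == "O" or (tag.startswith("I-") and tag[2:] != label)
        if breaks:
            if start is not None:
                spans.add((start, i, label))
            start, label = (i, tag[2:]) if opens else (None, None)
    return spans

def span_f1(gold_seqs, pred_seqs):
    """Micro-averaged span F1 over a corpus of BIO sequences (exact match required)."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        g, p = extract_spans(gold), extract_spans(pred)
        tp, fp, fn = tp + len(g & p), fp + len(p - g), fn + len(g - p)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def intent_accuracy(gold_intents, pred_intents):
    return sum(g == p for g, p in zip(gold_intents, pred_intents)) / len(gold_intents)
```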
Again, GSW turned out to be the most difficult language variety of the three. We must note, however, that the UBC submission contained a large amount of ill-formed slots. Between 13% (DE-ST, NAP) and 28% (GSW) of predicted slots start with an I- label instead of B-; the evaluation script simply ignores such slots. Furthermore, a small number of predicted spans have inconsistent labels (e.g., I-datetime immediately followed by I-location). This suggests that the model architecture chosen by the UBC team was not appropriate for span labeling tasks and that a different architecture could have led to further improvements compared to the baseline. The baseline system, which uses a CRF prediction layer, did not produce any such inconsistencies. ### Summary The UBC submissions are based on a pre-trained multilingual language model (mT0), which was fine-tuned on the 12 languages of the xSID dataset. Among these languages are Italian and German, but all training sets except the English one have been produced by machine translation. This setup worked better than using only the related languages of xSID (IT and DE) or only English. Also, further data augmentation with paraphrasing and machine translation did not have any positive effect. These findings suggest that task-specific knowledge is more important than having access to linguistic material in the target languages (or even in related high-resource languages). The Notre Dame participation provides a somewhat contrasting result. They start with a monolingual BERT model of the related high-resource language (IT or DE) and use fine-tuning to make the model more robust to character-level noise. The possibility of including unrelated languages was not explored here. The contributions proposed by the participants are thus largely complementary, and it would be interesting to see if their combination leads to further improvements on the task. For instance, task-specific fine-tuning (using all of the xSID data) \begin{table} \begin{tabular}{l l|c c c} & & **Baseline** & **UBC** & **Notre Dame** \\ \hline \multirow{3}{*}{\begin{tabular}{l} **Intent** \\ **detection** \\ \end{tabular} } & **DE-ST** & 0.6160 & 0.8940 & **0.9420** \\ & **GSW** & 0.4720 & 0.8160 & **0.8860** \\ & **NAP** & 0.5900 & 0.8540 & **0.8900** \\ \hline \multirow{3}{*}{ \begin{tabular}{l} **Slot** \\ **detection** \\ \end{tabular} } & **DE-ST** & 0.4288 & **0.4692** & – \\ & **GSW** & 0.2530 & **0.2899** & – \\ & **NAP** & **0.4457** & 0.4215 & – \\ \end{tabular} \end{table} Table 2: Results for intent classification (accuracy) and slot detection (Span-F1 score). UBC submitted several models for intent detection, and here we report their best-performing system for each language. could be combined with language-specific fine-tuning (based on the noise induction task) and complemented with the baseline's CRF architecture to provide consistent slot labels. A striking finding of this shared task are the poor results on Swiss German compared to the other two low-resource varieties, Neapolitan and South-Tyrolean German. This may be due to the particular Swiss German dialect used in this dataset and/or to some translator-specific preferences or biases. Further analysis will be required to fully explain these differences. 
## 5 Discriminating Between Similar Languages - True Labels

The DSL-TL shared task contained two tracks:

* **Three-way Classification:** In this track, systems were evaluated with respect to the prediction of all three labels for each language, namely the variety-specific labels (e.g., PT-PT or PT-BR) and the common label (e.g., PT).
* **Binary Classification:** In this track, systems were scored only on the variety-specific labels (e.g., EN-GB, EN-US).

In addition to the two tracks mentioned above, we provided participants with the option of using external data sources (open submission) or only the DSL-TL dataset (closed submission).

### Dataset

Data: DSL-TL contains 12,900 instances split between three languages and six national language varieties, as shown in Table 3. Instances in the DSL-TL are short extracts (1 to 3 sentences long) from newspaper articles randomly sampled from two sources (Zellers et al., 2019; Tan et al., 2014). Considering the source's ground truth label, the DSL-TL creators randomly selected 2,500 instances for each Portuguese and Spanish variety and 1,500 instances for each English variety.

Annotation: DSL-TL was annotated using crowdsourcing through Amazon Mechanical Turk (AMT).10 The annotation task was restricted to annotators based in the six national language variety countries, namely Argentina, Brazil, Portugal, Spain, the United Kingdom, and the United States. The annotators were asked to label each instance with what they believed to be the most representative variety label, namely European (pt-PT) or Brazilian Portuguese (pt-BR), Castilian (es-ES) or Argentine Spanish (es-AR), and British (en-GB) or American English (en-US). The label distributions are shown in Table 3. The annotators were presented with three choices: (1) language variety A, (2) language variety B, or (3) both or neither for cases in which no clear language variety marker (either linguistic or named entity) was present in the text. The annotator agreement calculations and filtering carried out after the annotation stage are described in detail in the dataset description paper (Zampieri et al., 2023). Finally, the instances in DSL-TL have been split into training, development, and testing partitions, as shown in Table 4.

Footnote 10: [https://www.mturk.com/](https://www.mturk.com/)

### Participants and Approaches

Four teams provided submissions to the shared task.

VaidyaKane: All submissions from the team VaidyaKane used a pre-trained multilingual XLM-RoBERTa fine-tuned for language identification11 to classify the language of the sentence (Conneau et al., 2020). After the initial language identification, they experimented with several language-specific BERT models to identify the exact variety. Their best submission on track one used "bert-base-uncased"12 for English (Devlin et al., 2019), "bertin-project/bertin-roberta-base-spanish"13 for Spanish (la Rosa et al., 2022), and "neuralmind/bert-base-portuguese-cased"14 for Portuguese (Souza et al., 2020). On track two, the models for Spanish and Portuguese were the same, but "roberta-base"15 was used for English (Liu et al., 2019).
Footnote 11: [https://huggingface.co/papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection)

Footnote 12: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)

Footnote 13: [https://huggingface.co/bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish)

Footnote 14: [https://huggingface.co/neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)

Footnote 15: [https://huggingface.co/roberta-base](https://huggingface.co/roberta-base)

For the open tracks, they also used names of people obtained from Wikidata (Vrandečić and Krötzsch, 2014).

UnibucNLP: On track one, the UnibucNLP team submitted a run using an XGBoost stacking ensemble (Chen and Guestrin, 2016). The classifier stack for the ensemble consisted of one SVM and one KRR classifier. For track two, the stack classifiers were the same, but Logistic Regression was used for the stacking ensemble.

SATLab: On both tracks, the SATLab team used a Logistic Regression classifier from the LIBLinear package with character n-grams from one to five weighted by BM25 and L2 normalization. The n-grams had to appear in at least two different sentences in the training data. The system was very similar to the one used by Bestgen (2021) in the Dravidian Language Identification (DLI) shared task in 2021 (Chakravarthi et al., 2021).

### Results

Tables 5 to 8 show the recall, precision, and F1 scores for the baselines and best submissions for all track combinations. Team UnibucNLP (Gaman, 2023) achieved first place out of nine submissions on the closed version of track one. Their XGBoost stacking ensemble attained an F1 score of 0.5318. The results were still slightly worse than the multilingual BERT16 (mBERT) (Devlin et al., 2019) and the XLM-RoBERTa17 (XLM-R) (Liu et al., 2019) baselines. All other submissions achieved slightly worse F1 scores. In second place, team SATLab's logistic regressor obtained an F1 score of 0.4905. In third place, team ssl's SVM produced an F1 score of 0.4817.
The similarity between the top three F1 scores shows that automatically differentiating between similar language varieties is a challenging task, especially when taking into consideration neutral labels (EN, ES, or PT), as well as only using the provided data. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline & baseline-ANB & 0.8200 & 0.7990 & 0.7990 \\ & baseline-NB & 0.8110 & 0.7920 & 0.7940 \\ & baseline-XLM-R & 0.7830 & 0.7820 & 0.7800 \\ 1 & run-1-ssl & 0.7521 & 0.7885 & 0.7604 \\ & baseline-mBERT & 0.7600 & 0.7530 & 0.7550 \\ 2 & run-2-SATLab & 0.7520 & 0.7430 & 0.7452 \\ 3 & run-1-UnibucNLP & 0.6502 & 0.7756 & 0.6935 \\ \end{tabular} \end{table} Table 6: The macro average scores of the best run for each team on **closed track 2**. \begin{table} \begin{tabular}{l|l l l|l} **Variable** & **Train** & **Dev** & **Test** & **Total** \\ \hline Portuguese & 3,467 & 991 & 495 & 4,953 \\ Spanish & 3,467 & 985 & 495 & 4,947 \\ English & 2,097 & 603 & 300 & 3,000 \\ \hline Total & & & & 12,900 \\ \end{tabular} \end{table} Table 4: DSL-TL’s train, dev, and test splits are 70/20/10% of the total number of instances, respectively. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & run-1-VaidyaKa & 0.8705 & 0.8523 & 0.8561 \\ & baseline-NB & 0.8200 & 0.8030 & 0.8030 \\ 2 & run-1-ssl & 0.7647 & 0.7951 & 0.7729 \\ \end{tabular} \end{table} Table 7: The macro average scores of the best run for **open track 1**. \begin{table} \begin{tabular}{l|l l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & baseline-mBERT & 0.5490 & 0.5450 & 0.5400 \\ & baseline-XLM-R & 0.5280 & 0.5490 & 0.5360 \\ 1 & run-3-UnibucNLP & 0.5291 & 0.5542 & 0.5318 \\ & baseline-NB & 0.5090 & 0.5090 & 0.5030 \\ 2 & run-1-sATLab & 0.4987 & 0.4896 & 0.4905 \\ 3 & run-1-ssl & 0.4978 & 0.4734 & 0.4817 \\ \end{tabular} \end{table} Table 5: The macro average scores of the best run for each team on **closed track 1**. \begin{table} \begin{tabular}{l|l l l} **Rank** & **Model** & **R** & **P** & **F1** \\ \hline 1 & run-1-VaidyaKa & 0.8705 & 0.8523 & 0.8561 \\ & baseline-NB & 0.8200 & 0.8030 & 0.8030 \\ 2 & run-1-ssl & 0.7647 & 0.7951 & 0.7729 \\ \end{tabular} \end{table} Table 8: The macro average scores of the best run for each team on **open track 2**. Team ssl (Hohl and Shim, 2023) achieved the best performance out of ten submissions on the closed version of track two. Their SVM was able to more effectively differentiate between six labels that did not include the aforementioned neutral labels (en-GB, en-US, es-AR, es-ES, pt-PT, or pt-BR). They achieved an F1 score of 0.7604. Their results were closely followed by the performance of SATLab's logistic regressor, having attained an F1 score of 0.7452, and UnibucNLP's XGBoost stacking ensemble with an F1 score of 0.6935. All submissions were clearly behind the adaptive and traditional Naive Bayes baselines, which were identical to the systems winning the Identification of Languages and Dialects of Italy (ITDI) shared task in 2022 (Jauhiainen et al., 2022; Aepli et al., 2022). SVMs are well-known to perform well when there is a clear distinction between class boundaries. This likely explains why team ssl's SVM has outperformed UnibucNLP's ensemble since neutral labels that contained features of both classes were no longer considered. Team VaidyaKane's (Vaidya and Kane, 2023) submission to the open version of track 1 outperformed all other open and closed submissions for this track. 
Their two-stage transformer-based model achieved an F1 score of 0.5854. Team ssl was the only other team to submit predictions for open tracks 1 and 2. Their open submission for track 1 achieved an F1 score of 0.4889 which surpassed that of their closed submission for this track. The use of additional data was, therefore, found to improve overall performances. Team VaidyaKane produced the highest F1 score on the open version of track 2. They achieved an F1 score of 0.8561, which was greater than all other open and closed submissions for either track. Team ssl also saw a further improvement in their SVM's model performance when using additional data for track 2. Their SVM model produced an F1 score of 0.7729, which was superior to their closed-track submission. These performances show that the use of additional data is beneficial and further proves that the classification of language varieties is an easier task than the classification of language varieties with neutral labels. ### Summary The DSL-TL shared task introduced a novel problem formulation in language variety identification. The new human-annotated dataset with the presence of the 'both or neither' class represent a new way of looking at the problem. Given the similarity between language varieties, we believe this new problem formulation constitutes a fairer way of evaluating language identification systems, albeit rather challenging in terms of performance as demonstrated in this shared task. ## 6 Discriminating Between Similar Languages - Speech ### Dataset The DSL-S shared task uses Mozilla Common Voice data (version 12 released in Dec 2022) in 9 languages from two language families. The data comes from volunteers reading a pre-selected set of sentences in each language. The audio is recorded through a web-based interface. For training and development sets, we follow the training and development set of the source data. Even though the test data used in this task comes from the Common Voice test data for the nine languages, we do not use the entire test set of the CV release but sample 100 audio files for each language. There is no overlap of sentences and speakers between the data sets. Table 9 presents the test set's statistics. The total amount of unpacked speech data is around 15 gigabytes. The data includes severe class imbalance, as well as substantial differences in the number of speakers. Generalization from a small number of speakers is a known challenge in similar speech data sets, including earlier VarDial evaluation campaigns.18 The CV data set makes this task further challenging since the variety of speakers in the test set is much larger than the training and the development sets. Footnote 18: See Jauhiainen et al. (2018) and Wu et al. (2019) for earlier approaches to this problem. Similar to the earlier VarDial shared tasks with audio data (Zampieri et al., 2017, 2018, 2019), we provided 400-dimensional i-vector and 512-dimensional x-vector features, both extracted using Kaldi (Povey et al., 2011). Unlike earlier tasks, however, the raw audio data was also available to the potential participants. ### Participants and Approaches Two teams registered for the shared task, but neither provided any submissions. In this section, we briefly introduce the baselines we provided. For the closed track, we provided a linear SVM baseline with x-vectors features (Snyder et al., 2018). The SVM baseline was implemented using scikit-learn [20], and tuned only for the SVM margin parameter 'C'. 
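As a rough illustration of this closed-track baseline, the sketch below fits a linear SVM on x-vector features and tunes only the margin parameter C; the array shapes, the placeholder data, and the C grid are our assumptions, not the exact baseline code, and the real baseline would presumably select C against the development set rather than by cross-validation.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Placeholder data: in the shared task, each row would be a 512-dimensional
# x-vector extracted with Kaldi and each label one of the nine languages.
rng = np.random.default_rng(0)
languages = ["da", "et", "fi", "mdf", "myv", "no", "ru", "sv", "uk"]
X_train = rng.normal(size=(1000, 512))
y_train = rng.choice(languages, size=1000)
X_dev = rng.normal(size=(200, 512))

# Tune only the SVM margin parameter 'C', as in the provided baseline.
grid = GridSearchCV(
    LinearSVC(max_iter=10000),
    param_grid={"C": [0.01, 0.1, 1, 10, 100]},
    cv=3,
    scoring="f1_macro",
)
grid.fit(X_train, y_train)
print("best C:", grid.best_params_["C"])
dev_predictions = grid.predict(X_dev)
```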
The open track baseline uses two baselines - the XLS-R multilingual pre-trained transformer speech model [14]19 with a classification head for direct speech classification, and a multilingual speech recognition system 20 based on XLS-R [1] to transcribe the speech, and uses Naive Bayes [13, 14] to identify the language.21 Footnote 19: [https://huggingface.co/facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) Footnote 20: [https://huggingface.co/voidful/wav2vc2-xlsr-multilingual-56](https://huggingface.co/voidful/wav2vc2-xlsr-multilingual-56) Footnote 21: [https://github.com/tosaja/TunPRF-NADI](https://github.com/tosaja/TunPRF-NADI) ### Results The scores for the baselines are presented in Table 10. The SVM baseline performs particularly badly on the test set (the development precision, recall, and F1 scores are 0.4088, 0.4011, 0.3777, respectively). The reason behind this is likely due to the fact that, although they were used for language identification in earlier research, the x-vectors are designed for speaker identification. Given the variability of speaker features in the test set, any classifier relying on speaker features are likely to fail. The baselines relying on pre-trained transformer models perform substantially better, with the direct speech classifier being more than 10 points behind the transcription and text classification approach. While the direct speech classification approach could be further improved through hyperparameter optimisation (currently we fine-tune for 3 epochs with a batch size of 24 and a learning rate of 1e-04) and a selection of the layer from which the features are extracted (related work suggests that lower transformer layers are more informative for discriminating between languages [1]), these baseline results show that transcription and text classification might still be a shorter path to a reasonably performing system for discriminating between similar languages than direct speech classification. ### Summary Although we did not have any submissions for this shared task, we believe that the task includes many interesting challenges. Based only on our baseline results, identifying languages from a limited amount of data (without pre-trained speech models) seems challenging, yet this is particularly interesting for low-resource settings and for investigating differences and similarities for closely related language varieties. We hope to see more interest in the community for language/dialect identification from speech. ## 7 Conclusion This paper presented an overview of the three shared tasks organized as part of the VarDial Evaluation Campaign 2023: Slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages - True La \begin{table} \begin{tabular}{l|r r r} **System** & **P** & **R** & **F1** \\ \hline SVM + x-vectors & 0.0914 & 0.1189 & 0.0876 \\ XLS-R & 0.6736 & 0.5953 & 0.5856 \\ XLS-R + NB & 0.7331 & 0.7167 & 0.7031 \\ \end{tabular} \end{table} Table 10: Baseline scores of the DSL-S shared task. 
\begin{table} \begin{tabular}{l|r r r r r r r r} & \multicolumn{3}{c}{**Train**} & \multicolumn{3}{c}{**Dev**} & \multicolumn{3}{c}{**Test**} \\ \cline{2-9} & n & spk & duration & n & spk & duration & n & spk & duration \\ \hline **DA** & 2734 & 3 & 3:17:38 & 2105 & 10 & 2:50:46 & 100 & 48 & 0:07:50 \\ **ET** & 3137 & 221 & 5:49:04 & 2638 & 167 & 4:57:54 & 100 & 88 & 0:11:12 \\ **FI** & 2121 & 3 & 2:43:47 & 1651 & 13 & 1:59:23 & 100 & 63 & 0:07:46 \\ **MDF** & 173 & 2 & 0:15:39 & 54 & 1 & 0:04:39 & 100 & 7 & 0:08:40 \\ **MYV** & 1241 & 2 & 1:58:26 & 239 & 1 & 0:22:55 & 100 & 9 & 0:09:07 \\ **NO** & 314 & 3 & 0:22:43 & 168 & 4 & 0:13:28 & 100 & 18 & 0:07:35 \\ **RU** & 26043 & 252 & 37:16:50 & 10153 & 394 & 15:23:17 & 100 & 98 & 0:09:15 \\ **SV** & 7421 & 22 & 8:11:54 & 5012 & 73 & 5:32:33 & 100 & 89 & 0:07:24 \\ **UK** & 15749 & 28 & 18:38:31 & 8085 & 103 & 10:58:25 & 100 & 28 & 0:08:22 \\ \end{tabular} \end{table} Table 9: Number of instances (n), number of speakers (spk) and total duration (hour:minute:seconds) for each split of the DSL-S shared task. The speaker numbers are approximated based on client id detection by CV. bels (DSL-TL), and Discriminating Between Similar Languages - Speech (DSL-S). ## Acknowledgements We thank all the participants for their interest in the shared tasks. The work related to the SID4LR shared task has received funding from the Swiss National Science Foundation (project nos. 191934 and 176727) and ERC Grant 101043235. The work related to the DSL-TL and DSL-S shared tasks has received partial funding from the Academy of Finland (funding decision no. 341798). The work related to the DSL-S shared task has received funding from the Slovenian Research Agency within the research project J7-4642 and the research programme P6-0411.
This report presents the results of the shared tasks organized as part of the VarDial Evaluation Campaign 2023. The campaign is part of the tenth workshop on Natural Language Processing (NLP) for Similar Languages, Varieties and Dialects (VarDial), co-located with EACL 2023. This year, three shared tasks were organized: slot and intent detection for low-resource language varieties (SID4LR), Discriminating Between Similar Languages -- True Labels (DSL-TL), and Discriminating Between Similar Languages -- Speech (DSL-S).
2304.00079
Dielectric Barrier Discharge Actuators: Experimental and Numerical Study of Momentum Injection into Co-flow and Counter-flow Freestream
Dielectric barrier discharge (DBD) plasma actuators can generate a wall jet without moving parts by interacting with ionized and neutral molecules in an electric field. The coupling between electrohydrodynamic (EHD), turbulence, inertial and viscous effects in the flow boundary layer remains poorly understood and requires investigation. We present an experimental investigation of momentum injection by DBD actuators into the free stream flow with Re = 35,000 and 75,000 in co-flow and counter-flow scenarios over a range of VAC = 12 kV - 19.5 kV peak-to-peak at a frequency of 2 kHz. In the co-flow configuration, the DBD actuator injects momentum into the boundary layer. In co-flow, the momentum injection results in the thinning boundary layer, while in the counter-flow configuration, flow separation can occur. For the tested condition, a separation bubble is observed at Re = 35,000. The momentum displacement in the counter-flow configuration is six times greater than the EHD jet momentum in a quiescent environment. Both co-flow and counter-flow momentum injections show diminishing effects with increasing external velocities. This work highlights that the resulting flow pattern is not a simple superposition of the EHD jet and the free stream but is determined by the coupling of inertial, viscous, and Coulombic effects in the EHD-driven wall jet and the external flow. The velocity profiles and momentum measurements presented here can be used to validate numerical models and inform the design of DBD actuators for active flow control.
Anthony Tang, Nathan Li, Benjamin Price, Alexander Mamishev, Alberto Aliseda, Igor Novosselov
2023-03-31T19:05:47
http://arxiv.org/abs/2304.00079v1
Dielectric Barrier Discharge Actuators: Experimental and Numerical Study of Momentum Injection into Co-flow and Counter-flow Freestream ###### Abstract Dielectric barrier discharge (DBD) plasma actuators can generate a wall jet without moving parts by interacting with ionized and neutral molecules in an electric field. The coupling between electrohydrodynamic (EHD), turbulence, inertial and viscous effects in the flow boundary layer remains poorly understood and requires investigation. We present an experimental investigation of momentum injection by DBD actuators into the free stream flow with \(\mathrm{Re=35,000}\) and \(\mathrm{75,000}\) in co-flow and counter-flow scenarios over a range of \(\mathrm{V_{AC}=12~{}kV}\) - \(\mathrm{19.5~{}kV}\) peak-to-peak at a frequency of \(\mathrm{2~{}kHz}\). In the co-flow configuration, the DBD actuator injects momentum into the boundary layer. In co-flow, the momentum injection results in the thinning boundary layer, while in the counter-flow configuration, flow separation can occur. For the tested condition, a separation bubble is observed at \(\mathrm{Re=35,000}\). The momentum displacement in the counter-flow configuration is six times greater than the EHD jet momentum in a quiescent environment. Both co-flow and counter-flow momentum injections show diminishing effects with increasing external velocities. This work highlights that the resulting flow pattern is not a simple superposition of the EHD jet and the free stream but is determined by the coupling of inertial, viscous, and Coulombic effects in the EHD-driven wall jet and the external flow. The velocity profiles and momentum measurements presented here can be used to validate numerical models and inform the design of DBD actuators for active flow control. DBD, active flow control, plasma/flow interaction, separation control ## 1 Introduction Non-thermal plasma devices have been proposed as actuators for active flow control [1-7]; these plasma actuators have the potential to instantaneously change flow profiles while staying silent and compact [8-10]. Corona discharge or dielectric barrier discharge (DBD), actuators generate ions when the electric field exceeds the dielectric strength of the working fluid. The interaction between free ions, accelerated by the electric field, the working fluid, and walls can be utilized in aerodynamic drag reduction [11-13], lift augmentation [10, 14], separation control [15, 16], and electric propulsion [17-20]. Despite lower electromechanical efficiency than corona-driven actuators, DBD actuators are more stable and can effectively provide a consistent electrohydrodynamic (EHD) forcing [4, 9]. Current DBD applications are limited to flow control at low-speed conditions due to their relatively weak EHD forces [17, 21, 22]. Scientific studies have explored these multiphysics phenomena to optimize electrical and mechanical effects in DBD systems [16, 23-29]. Modeling DBD discharge from first principles is cost-prohibitive due to the large separation of timescales, so studies of corona-driven ionic flows can be relevant to gain insight into the flow-ion interactions [7, 25, 27, 30]. Early numerical efforts led to the development of simplified DBD ion injection models, including the Suzen & Huang [31], Orlov [32], and Shyy [33] models that were able to predict quiescent EHD flow through the interactions of positive and negative ion charges and the external environment [9, 29]. 
More recent work has pushed past these earlier models to numerically explore the performance of DBDs, such as the evolution of the velocity field in the DBD-driven jet [28]. In addition, other strategies have been proposed for more robust, computationally-lean momentum injection models for EHD-driven jets [34]. Nearly all these models were developed and validated in quiescent conditions and are yet to be tested in an external flow condition with significant inertial and viscous flow interactions. Several authors have noted and presented varying success in quiescent conditions with different numerical approaches, including direct numerical simulations and turbulence RANS closure models [35-37]. Most reports describing EHD-flow interaction are currently limited to analysis of electroconvective instabilities at very low Reynolds numbers. Electroconvection (EC) phenomenon was first reported by G. I. Taylor in 1966, describing cellular convection in the liquid droplet [38]. Since then, EC has been observed in other systems where electric force interacts with fluids. In nonequilibrium EHD systems [38-59], poorly conductive leaky dielectric fluid acquires unipolar charge injection at the surface interface in response to the electric field. In charge-neutral electrokinetic (EK) systems, EC is triggered by the electro-osmotic slip of electrolyte in the electric double layer at membrane surfaces [60-71]. In 3D shear flow, the EHD addition to crossflow forms streamwise rolling patterns as observed numerically [57, 72-74] and experimentally [75, 76]. 2D and 3D flow analysis of the DBD momentum injection in the shear flow is missing from the published literature. A mechanistic understanding of the interaction between discharge and fluid flow in the presence of an external flow is needed to inform the development of DBD actuators for active flow control. To maximize the effect of DBD actuators, recent experimental work varied actuator geometries such as electrode shapes, number of electrodes, and their placement on an aerodynamic surface. However, most of the fundamental EHD studies were performed in a quiescent environment; these actuators have not been well studied in the presence of an external freestream flow, especially at low to moderate velocity relevant to flow separation control [77, 78]. Several airfoil and small-scale unmanned aerial vehicle (UAV) studies have explored the ability of DBD actuators to manipulate lift and drag forces; however, these studies did not provide insight into the fluid flowfield and the underlying physics responsible for the lift and drag changes [79, 80]. The two traditional external flow conditions include co-flow, when the jet direction is the same as the external flow, and counter-flow when the momentum injection is opposite to the external flow. Pereira et al. reported force measurements in both co- and counter-flow DBD injection. They found that the total EHD thrust (or the difference between EHD thrust and shear stress at the surface) was identical for both co-flow and counter-flow [81]. However, the range of freestream velocities (\(10-60\) m/s) in increments of \(10\) m/s did not address the regime where the EHD velocity equals the external flow (\(0-10\) m/s) [81]. Probing the underlying fluid dynamics requires measurement of the velocity profiles near the momentum injection location. Bernard et al. 
reported on the velocity profiles of DBD actuators in co-flow and found that the effect of the DBD injection diminished at higher external velocities; however, the authors did not investigate counter-flow conditions [82]. The literature does not report experimental work characterizing velocity profiles in the counter-flow momentum injection by DBDs. Over the past decade, several studies have been conducted on DBD mounted to various airfoils [14, 83, 84], multi-element airfoils [85], flaps [86, 87], and full-scale or near full-scale aircraft [17, 21]. In all studies, the DBD actuator demonstrated its ability to change aerodynamic performance by increasing airfoil lift, decreasing drag, or changing pitching moment. Gu et al. employed a simplified ion injection model to explore the effects of DBD actuators on Gurney flaps by modeling actuators on the pressure and suction side of the trailing edge of an airfoil [36]. Multiple DBD array systems have been tested with moderate success; many of these studies found the need to balance the simultaneous interactions between jets acting in opposite directions [77, 88]. Counter-flow momentum injection potentially manipulates the flow more efficiently by increasing drag, decreasing lift, and changing the pitching moment on an aerodynamic surface. This study explores the performance and effect of EHD momentum injection by the DBD actuator at U\({}_{\infty}\)\(=5\) m/s and U\({}_{\infty}\)\(=11\) m/s in co-flow and counter-flow. The AC voltage was varied in the range V\({}_{\mathrm{AC}}\)\(=12\) kV - \(19.5\) kV, and the frequency was constant at 2 kHz. This study is the first to explore the fluid characteristics of DBD-driven flow in counter-flow and quantify the onset separation due to an adverse pressure gradient. Finally, a simple momentum injection model based on the empirically derived DBD discharge in a quiescent environment was tested in external flow computational fluid dynamics (CFD) simulation. This work provides insight into the interaction between the DBD flow and a freestream flow over a flat plate, informing the potential placement of actuators on airfoils and providing validation for numerical studies. ## 2 Experimental Setup and Diagnostics Traditional metrics to characterize plasma actuators' performance include current, power consumption, forces on the surfaces, and DBD jet velocity. A current measurement shows a superposition of capacitive current, discharge current, and noise. The capacitive current is filtered out or ignored because it corresponds to the transiently stored energy in the dielectric or air, not the energy used to accelerate the fluid [5]. The discharge current indicates the amount of charged species that can participate in the energy transfer to fluid motion. The discharge current comprises of numerous peaks in the positive-going cycle due to streamer propagation with the addition of glow discharge during the negative-going cycle [89]. Recent research characterized the relationship between DBD discharge, capacitance, power consumption, and DBD actuator performance [90, 91]. High-resolution measurements have shown that both positive and negative half-cycles contribute to EHD force, and their relevant contributions are the topic of active scientific discussions [5, 92, 93]. Velocity measurements can be obtained by pitot tubes or particle imaging velocimetry (PIV); these measurements characterize momentum transferred from charged species to neutral molecules. 
While PIV measurements can capture an entire fluid field, integrating a PIV system with DBD-induced flow can be challenging. In addition, there is a risk of tracer particles being charged and interacting with the electric field, i.e., not following the flow streamlines. DBD wall jet similarity analysis was recently proposed [94]; however, additional experimental data is needed to perform a robust analysis. ### Wind Tunnel Our measurements were conducted in an open-circuit wind tunnel with a 100mm x 100mm cross-sectional test section. The wind tunnel consists of an inlet section followed by a 1000 mm long section and a test section allowing for testing the DBD actuator. The DBD actuator plate surface rests parallel to the bottom wall of the wind tunnel, see Figure 1. The inlet section comprises four 120 mm x 120 mm fans with inlet cows, a large honeycomb screen, and a contraction cone. The contraction cone with a 9:1 contraction ratio results in a 100mm x 100mm wind tunnel section constructed of plexiglass. An aluminum extrusion frame supports the wind tunnel section. As described below, the velocity measurements were obtained using a custom glass pitot tube with a 0.4mm ID and 0.5mm OD. The boundary layer height (\(\delta_{99}\)) at U\({}_{\infty}\)\(=5\) m/s external flow was measured at the location of the actuator to be \(\sim\)8.0 mm. Using the height of the wind tunnel and the characteristic length; the Re is 35,000 at U\({}_{\infty}\)\(=5\) m/s and 75,000 at U\({}_{\infty}\)\(=11\) m/s. The turbulence intensity is \(\sim\)1% measured using a calibrated hot-wire anemometer. The hot-wire anemometer was not used for velocity measurements of the DBD actuator as the hot-wire wires and electronics would introduce a risk of arc discharge from the high-voltage electrode. ### DBD actuator The DBD actuator is comprised of two electrodes separated by a thin dielectric barrier, as shown in Figure 1, similar to previously published work [95]. When high voltage is applied to the active electrode, the electric field is strongest at the edge of the active electrode, resulting in plasma generation [92, 96, 97]. The electrodes' thickness and the dielectric media can impact the actuator's performance [97-101]. A straight-edged DBD actuator with a spanwise uniform electric field produces a two-dimensional forcing on the fluid, resulting in a planar jet. Other actuator designs have been considered, including serrated electrodes that produce a three-dimensional force onto the flow field [102-104]. The spanwise length or width of the electrodes serves as a nominal reference length in the analysis [5]. The DBD actuator was installed on the 6" by 8" acrylic plate for this study. The dielectric material used in this study is Kapton (~3.5 dielectric constant at 1 kHz). Each actuator has one layer of ~0.075mm Kapton-FN (FEP layered Kapton) and four layers of 1 mil Kapton-HN with a total thickness of ~ 0.4mm (including the adhesive and FEP layers). The ground electrode (copper, 50 \(\mathrm{\SIUnitSymbolMicro m}\) thick, 25 mm long, 110 mm wide) is flush-mounted on the acrylic plate. The upper electrode (copper, 50 \(\mathrm{\SIUnitSymbolMicro m}\) thick, 15 mm long, 110 mm wide) is glued onto the top of the Kapton dielectric layer. Both electrodes have straight edges producing a uniform spanwise discharge. The active and ground electrodes' edges are aligned with each other, i.e., there is no overlap or gap between the electrodes in the x-direction. 
The air-exposed HV electrode is connected to a Trek 615-10 (PM04014) power supply that provides up to 20 kV (peak-to-peak) AC voltage. ### Electrical Measurements The electric current in the DBD circuit is a superposition of a capacitive current and a discharge current. The discharge current is associated with plasma microdischarges, and they appear as a series of fast current pulses [105], as shown in Figure 2(a). DBD current is measured using a 200 MHz bandwidth non-intrusive Pearson 2877 current monitor with a rise time of 2 ns. The current monitor is connected to a Tektronix DPO 7054 oscilloscope with a sampling rate of 1 GS/s to resolve 500 MHz. These conditions are essential for accurately capturing individual discharges that have been shown to occur over a 30 ns duration on average [106]. The high bandwidth and the sampling rate minimize the noise during the current measurements and can be used to compute the time-averaged Figure 1: **Schematic of the experimental setup, DBD actuator is mounted on an acrylic glass plate** **flushed with the wind tunnel bottom. The blue region is the dielectric layer separating the electrodes.** electrical power [52]. To determine the currents associated with the plasma micro-discharges, determining the capacitive current through analytical methods has been explored [32, 107]; removing the capacitive current through signal processing methods, including low-pass filters or Fast-Fourier Transform (FFT) has also been attempted [93, 105, 106]. To determine the power consumed by the actuator, a charge-voltage Lissajous curve is created by introducing a capacitor between the grounded electrode and the ground [108, 109]. The Integrating charge-voltage relationship multiplied by the frequency yields the total power usage of the DBD system. The time-averaged electrical power consumed by the actuator is computed as \[W_{elec}=f_{AC}\int\limits_{t^{*}=0}^{t^{*}=1}VdQ, \tag{1}\] where \(f_{AC}\) is the frequency of the applied voltage in Hz, and \(V\) and \(Q\) are the voltage and charge at each point in the period. The normalized time (\(t^{*}\)) represents a single cycle. We compute the averaged resulting power from at least four separate periods to reduce the noise impact. In the wind tunnel study by Pereira et al., the DBD actuator in co-flow and counter-flow was found not to have significantly different electrical characteristics [81]. Figure 2(a) below shows a typical DBD current measurement with a voltage curve. Figure 2(b) shows the representative filtered Lissajous curve of four consecutive discharge cycles. These data were used to determine the power of the DBD actuator as a function of the operating condition. ### Wall jet and momentum displacement characterization The flowfield induced by EHD depends on the plasma actuators' configuration and operational conditions. We employed a custom-made glass pitot tube with a 0.4 mm inner diameter and 0.5 mm outside diameter to measure the time-averaged x - velocity profile. Compared to traditional stainless steel pitot tubes, the glass tube reduces electrical interaction with the discharge. This method has been previously used to characterize plasma actuators' performance [5, 95, 105, 110]. The pitot tube is mounted on an optical table and controlled on the x and y - axis by linear stages connected to an Ashcroft CXLdp differential pressure transmitter (0 - 25 Pa with 0.25% accuracy). 
The pressure transducer outputs a 4 - 20 mA current, linear in its pressure range, and it was placed in series with a 1.5 k\(\Omega\) resistor. The pressure within the pitot tube equilibrated nearly instantly after changing the flow condition. The voltage across the resistor is recorded for at least 30 seconds with a Hydra Data Logger II. With the time-averaged pressure (\(P\)), a time-averaged wind velocity (\(v\)) is calculated using Figure 2: (a) Typical DBD current with voltage signals at 18 kV (p-p) and 2 kHz applied frequency (b) 4 consecutive Q-V discharge cycles measured from a 100 nF capacitor. Bernoulli's equation with a calibration correction factor (\(\mathcal{C}\)) that is characteristic for a custom pitot tube expressed as \[\Delta P=\mathcal{C}\rho v^{2}, \tag{2}\] where \(\rho\) is the fluid density. In our experiments, the typical velocity measurements had a standard deviation \(<\) 0.02 m s\({}^{\text{-1}}\) over a 30 s sampling period. X-velocity measurements are taken at varying x and y positions downstream and upstream on the active electrode edge. At each x-position, the y-velocity profile is obtained from the surface to 10 mm above the plate at increments of 0.25 mm or 0.5 mm (at a higher location). The streamwise measurements were taken by holding a constant y-position and spanning in the x - direction at 0.5 mm intervals to complete the datasets over a regular measurement grid. Due to the pitot tube dimension, we could not capture velocity at y \(<\) 0.25 mm. We assume the velocity is linear between the no-slip condition at y = 0 mm and the data at y = 0.25 mm for plotting purposes. Considering a 2D control volume (with a spanwise unit length), integration of the streamwise velocity along a vertical profile in the wind tunnel with the DBD actuator provides the total mass flow rate per meter spanwise, \(Q\), of the total system: \[Q_{system}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U(y)dy, \tag{3}\] where \(U(y)\) is the velocity measured along the vertical profile at a constant x - location. Similarly, the system's total momentum can be found by integrating the square of the velocity along a vertical profile: \[M_{system}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U^{2}(y)dy. \tag{4}\] To identify the momentum produced by only the DBD actuator, the same measurements were taken with the DBD actuator ON and OFF. The difference in momentum integrated from the wall to the point where the two velocity profiles intersect is the momentum injected by the DBD. Above the intersection of the two profiles, there is entrainment due to mass conservation, and this entrainment is confirmed by taking the velocity profile of the entire wind tunnel with and without actuation. The resulting momentum is expressed as \[M_{DBD}\ =\ \rho\int_{y\ =\ 0}^{y\ =\ \text{h}}U_{DBD\ \mathit{ON}}^{2}(y)-\ \mathit{U}_{DBD\ \mathit{OFF}}^{2}(y)dy \tag{5}\] and this approach similarly holds for mass flow rate and mechanical power. The mechanical power of the system (\(W_{mech}\)) can be computed by \[W_{mech}=\ \frac{1}{2}\rho L\int_{y\ =\ 0}^{y\ =\ \infty}U^{3}(y)dy. \tag{6}\] The derived values of mass flow rate, momentum, and power of the DBD in external flow are compared to the results of similar measurements in quiescent flow. ## 3 Results and Discussion ### Power Consumption Figure 3 illustrates the power usage of the DBD for a range of operating voltages, external flow speed, and DBD actuator orientation. Integration of Lissajous Q-V curves yielded an average power consumption of the actuator. 
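As a worked illustration of Eq. (1), the following minimal sketch integrates one sampled Q-V loop numerically to obtain the cycle-averaged electrical power; the synthetic elliptical loop, its amplitudes, and the function name are placeholders for the measured waveforms rather than the processing code used in this study.

```python
import numpy as np

def lissajous_power(v: np.ndarray, q: np.ndarray, f_ac: float) -> float:
    """Cycle-averaged power W = f_AC * closed-loop integral of V dQ (Eq. 1).

    v [V] and q [C] must sample one closed period of the Q-V Lissajous figure.
    """
    dq = np.diff(np.append(q, q[0]))        # close the loop
    v_mid = 0.5 * (v + np.roll(v, -1))      # trapezoidal rule along the loop
    return f_ac * float(np.sum(v_mid * dq))

# Synthetic elliptical loop: V = V0*cos(theta), Q = Q0*sin(theta), for which
# the enclosed area (and hence the loop integral of V dQ) is exactly pi*V0*Q0.
f_ac = 2e3                 # 2 kHz driving frequency
v0, q0 = 9.0e3, 2.0e-7     # 18 kV peak-to-peak, ~0.2 uC charge amplitude (illustrative)
theta = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
v = v0 * np.cos(theta)
q = q0 * np.sin(theta)

w_elec = lissajous_power(v, q, f_ac)
print(f"cycle-averaged power: {w_elec:.2f} W (expected {f_ac * np.pi * v0 * q0:.2f} W)")
```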
The power usage is normalized to the spanwise length of the actuator (0.11 m). In general, the power consumption increases quadratically with applied voltage, which is consistent with previous reports for AC DBD [5, 92, 95, 111] and the EHD flows driven by corona discharge [26, 30, 112]. Power usage between the AC cycles for any configuration had a maximum variance of \(\sim 8\%\), see Figure 3. These data were taken for the DBD actuator in quiescent, counter-flow, and co-flow conditions at the two external flow speeds. Pereira et al. also found power usage to vary less than 10% between co-flow and counter-flow forcing [81]. The magnitude of normalized power usage measured for this study is slightly higher than Pereira et al., which used a thicker 3mm dielectric; however, other studies, such as Forte et al., have found similar power usage for thinner thicknesses and noted that the increase in dielectric thickness decreases DBD capacitance, thus decreasing power usage [5, 92, 95]. ### Operation in Quiescent Condition First, the momentum injection of the AC DBD actuator is considered in a quiescent environment. Figure 4 shows the velocity profile of the DBD jet at \(\mathrm{x=10\;mm}\) and \(\mathrm{x=25\;mm}\) downstream of the active electrode edge [95]. Although the velocity decay and the spreading of the jet are apparent, as expected in wall jets, the presence of Coulombic forces associated with the DBD and the fact that the fluid is being accelerated over the fluid volume rather than the point source make the parameterization of the flow challenging. The addition of co- and counter-flow further complicates the analysis. Figure 3: DBD power usage at U\({}_{\infty}\)= 5 and U\({}_{\infty}\)= 11 m/s external flow in co-flow and counter-flow ### Co-flow EHD Momentum Injection This section discusses the DBD actuator performance in co-flow over the range \(\mathrm{V_{AC}=14-19.5\ kV}\), a frequency of 2 kHz, \(\mathrm{U_{\infty}=5\ m/s}\) (Re=35,000) and \(\mathrm{U_{\infty}=11\ m/s}\) (Re-75,000). The virtual origin and coordinate system are defined in Figure 5. The velocity profiles of the EHD jet are measured at \(\mathrm{x=10\ mm}\) and \(\mathrm{x=25\ mm}\) downstream of the active electrode edge. Due to EHD momentum injection, the velocity in the boundary layer increases; however, the difference in momentum displacement is affected by the thickness of the freestream boundary layer. Figure 6 shows the velocity data versus DBD voltage at two downstream locations. The dotted line is the wind tunnel velocity profile without DBD actuation. Note that in quiescent conditions (Figure 4), the DBD wall jet has a maximum velocity, \(\mathrm{U_{max}\sim 4.8\ m/s}\) at \(\mathrm{x=10\ mm}\), \(\mathrm{y=0.25\ mm}\) when \(\mathrm{V_{AC}=19.5\ kV}\) and \(\mathrm{f_{AC}=2\ kHz}\), and the peak velocity of the jet decays quickly away, downstream and wall-normal, from this maximum. At \(\mathrm{U_{\infty}=5\ m/s}\), EHD-induced velocities are comparable to the freestream, and the effects of DBD actuation are dominant. The increase in boundary layer velocity of \(\sim 2\ m/s\) is nearly identical to that of Bernard et al. [82] for similar conditions. The \(\mathrm{U_{max}\sim 5.4\ m/s}\) is greater than in quiescent condition (\(\sim 4.8\ m/s\)) but located at \(\mathrm{y=0.75\ mm}\) for the same x-position, as viscous effects shift the location of \(\mathrm{U_{max}\ away}\) from the wall. 
Figure 4: DBD with no external flow at x = 10 mm (a) and x = 25 mm (b) downstream. The maximum velocity at 19.5 kV is ~4.7 m/s at y = 0.5 mm at x = 10 mm downstream.

Figure 5: DBD actuator in co-flow configuration; the first measurement is taken at x = 10 mm to avoid plasma region disruption with the pitot probe. The plasma region is colored purple.

At y = 0.25 mm, the U velocity is 5.3 m/s and is greater than that of the quiescent jet at y = 0.25 mm; thus, the viscous effects between the DBD jet and the freestream can be seen as mixing and entrainment of high-speed fluid from the external freestream into the boundary layer. In co-flow, the DBD-induced momentum does not diffuse into the outer flow as quickly as in the quiescent environment, as can be observed from the velocity profile at x = 25 mm. The interaction of the wall jet with the freestream means that its momentum does not diffuse into a quiescent environment but rather continues to mix with and entrain high-speed fluid from the freestream into the boundary layer. This entrainment can be seen as a slight decrease in the external flow with the energized DBD actuator starting at approximately y = 3.75 mm. Conservation of mass in the wind tunnel means that as the boundary layer is accelerated by the EHD-added momentum, the freestream velocity decreases slightly (the momentum thickness of the boundary layer is smaller). The complete velocity profile confirms that mass is conserved as the entrainment section eventually merges with the base wind tunnel profile.

Figure 7 shows the effect of the EHD wall jet in the boundary layer at U\({}_{\infty}\) = 11 m/s. In this case, the EHD velocity is about half the freestream for the highest DBD settings. The effect of momentum injection is reduced, as the enhanced mixing in this higher Reynolds number case is more effective at spreading the effect of the EHD momentum injection throughout a thinner boundary layer. Even at maximum DBD power, the velocity increase is less than 1 m/s at x = 10 mm. At x = 25 mm, the effect of the EHD momentum injection is almost negligible. These results agree with Bernard et al. [82] for U\({}_{\infty}\) = 10 m/s. At higher external flows, the EHD momentum addition results in a lower overall impact on the boundary layer, as the enhanced mixing in the thin boundary layer rapidly restores the boundary layer profile to the un-actuated shape.

Figure 6: DBD actuator in U\({}_{\infty}\) = 5 m/s co-flow at (a) x = 10 mm and (b) x = 25 mm downstream. The dashed line shows the freestream profile without plasma injection. The DBD voltage is varied in the 14 kV - 19.5 kV range; the AC frequency is set constant at 2 kHz.

The EHD momentum addition cannot be treated as a linear superposition of the EHD jet in a quiescent environment and the momentum associated with the boundary layer of the free stream. For external flows compatible with EHD wall jet velocities, the momentum injection into the co-flow leads to effective boundary layer thinning; this effect diminishes at higher freestream velocities (higher Reynolds numbers and thinner boundary layers). The wall jet mixing is influenced by (i) interaction with the freestream and (ii) the viscous wall shear increase in the viscous sublayer. This point is explored further in Figure 13.
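The momentum bookkeeping used above can be made concrete with a short numerical sketch of Eq. (5): given measured profiles with the actuator ON and OFF, the injected momentum per unit span is obtained by trapezoidal integration from the wall up to the height where the two profiles cross. The profile arrays below are illustrative shapes loosely mimicking Figure 6, not measured data, and the function and constant names are ours.

```python
import numpy as np

RHO = 1.2  # air density, kg/m^3 (assumed)

def dbd_momentum_difference(y: np.ndarray, u_on: np.ndarray, u_off: np.ndarray) -> float:
    """Momentum per unit span injected by the DBD (Eq. 5), integrated from the
    wall up to the first height where the ON and OFF profiles cross."""
    diff = u_on**2 - u_off**2
    crossing = np.argmax(diff <= 0.0) if np.any(diff <= 0.0) else len(y)
    yy, dd = y[:crossing], diff[:crossing]
    # Trapezoidal integration of (U_on^2 - U_off^2) over y.
    return RHO * float(np.sum(0.5 * (dd[1:] + dd[:-1]) * np.diff(yy)))

# Illustrative profiles: a 5 m/s boundary layer (OFF) and the same profile with a
# wall-jet-like excess decaying over ~2 mm plus a slight deficit above (ON).
y = np.linspace(0.0, 0.010, 41)                  # 0 - 10 mm, in metres
u_off = 5.0 * np.clip(y / 0.008, 0.0, 1.0)       # simple linear-to-freestream profile
u_on = u_off + 2.0 * np.exp(-y / 0.002) - 0.1    # excess near the wall, entrainment above

print(f"injected momentum: {dbd_momentum_difference(y, u_on, u_off) * 1e3:.1f} mN/m")
```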
### Counter-Flow EHD jet

This section describes the behavior of the counter-flow EHD jet at DBD V\({}_{\mathrm{AC}}\) = 14 kV - 19.5 kV at f\({}_{\mathrm{AC}}\) = 2 kHz and wind speeds of U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s. The virtual origin and coordinate system of the DBD in counter-flow are defined above in Figure 8. The datum for analysis is set at the plasma generation edge of the active electrode; however, the EHD momentum injection is now in the negative x-direction. Figure 9 and Figure 10 show the velocity profiles for the EHD momentum injection into the counter-flow. The dotted line is the measured wind tunnel velocity profile without DBD actuation. At U\({}_{\infty}\) = 5 m/s, the velocity of the EHD jet has a similar magnitude to the external flow, resulting in a significant adverse pressure gradient and the formation of a recirculation zone.

Figure 7: DBD actuator in U\({}_{\infty}\) = 11 m/s co-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14 kV - 19.5 kV range; the AC frequency is set constant at 2 kHz.

Figure 8: DBD actuator in counter-flow configuration. The plasma region is colored purple.

The exact boundaries of the separation region are difficult to determine experimentally in the plasma region (x < 0 mm, y < 2 mm), as the insertion of the pitot probe into the plasma interferes with the experiment, see Figure 8. However, to preserve continuity, the EHD jet must entrain fluid from above and behind the jet; thus, transects downstream and along x for constant y can be used to determine the boundaries of the separation bubble. First, we examine the y-scan at a fixed x-position. Figure 9 and Figure 10 show the profiles at x = 10 mm (above the active electrode) and x = 25 mm for U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s, respectively. As with the co-flow experiments, the EHD jet strength is varied by varying V\({}_{\mathrm{AC}}\) = 14 kV - 19.5 kV, while the AC frequency is 2 kHz. For all voltages, the DBD in counter-flow creates a more significant deficit in the boundary layer than its co-flow counterpart; e.g., the counter-flow EHD jet creates a \(\Delta\)U > 5 m/s at V\({}_{\mathrm{AC}}\) = 19.5 kV compared to the \(\Delta\)U ~ 2 m/s in the co-flow case. Figure 9 (a) also shows that in the counter-flow cases at 18 kV and 19.5 kV, the wall shear stress changes direction due to the increased strength of the EHD jet. In the co-flow scenario and lower-voltage counter-flow configurations, the wall shear stress remains opposite of the freestream direction. Note that the maximum negative velocity is likely located in the EHD momentum injection region (x = -10 mm to 0 mm). However, measurements could not be taken within the plasma region due to the plasma interactions with the pitot tube. Figure 9 shows that in the V\({}_{\mathrm{AC}}\) = 19.5 kV case, the separation bubble extends past x = 10 mm downstream of the active electrode edge, while other conditions exhibit flow reattachment. The flow is fully attached at x = 25 mm; however, the pressure gradient in the flow boundary layer has not yet recovered. For higher external flow, the effects of the DBD jet are less dramatic. Within the boundary layer, the largest decrease in velocity in counter-flow with U\({}_{\infty}\) = 11 m/s is approximately 2.0 m/s at y = 0.5 mm and x = 10 mm. No separation was observed in the U\({}_{\infty}\) = 11 m/s counter-flow cases, though the plasma region was not probed.
While the effects of the DBD in counter-flow at U\({}_{\infty}\) = 11 m/s are less dominant than for slower freestream experiments, the effects of the EHD wall jet are still more significant than in co-flow at the same U\({}_{\infty}\). Figure 9: DBD actuator in U\({}_{\infty}\) = 5m/s counter-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14kV-19.5kV range, the AC frequency is set constant at 2kHz. The x-scans were performed while holding the y position constant to determine the boundaries of the separation region. Figure 11 shows the velocity profiles for y = 0.5 mm and y = 1.0 mm, while the x position was varied from x = 0 mm (edge of the active electrode) to x = 15 mm. The DBD voltage was V\({}_{\mathrm{AC}}\) = 12, 14, 16, 18 kV at f = 2 kHz. The data for V\({}_{\mathrm{AC}}\) = 19.5 kV is not shown due to the limited range of the pressure gauge. At y = 0.5 mm, immediately above the dielectric layer, separation is observed at all voltages. The x-location, where the velocity direction changes from backward to forward, determines the separation bubble's edge. At VAC = 18 kV, the separation length is approximately 7.5 mm downstream. At y = 1.0 mm, there is no signature of the separation bubble for VAC = 12 kV; however, it exists for the higher voltages. Figure 11: DBD actuator in U\({}_{\infty}\)= 5 m/s counter-flow at y = 0.5 mm (a) and y = 1.0 mm (b). The DBD voltage is varied in the 14 kV-19.5 kV range, the AC frequency is set constant at 2 kHz. Figure 10: DBD actuator in U\({}_{\infty}\)= 11 m/s counter-flow at x = 10 mm (a) and x = 25 mm (b) downstream. The DBD voltage is varied in the 14kV-19.5kV range, the AC frequency is set constant at 2kHz. To better visualize the flow pattern at 5 m/s, additional x-scans were performed, and the 2D velocity fields were reconstructed. Figure 12 shows the velocity contour plots for the counter-flow EHD jet obtained by merging x and y scans at U\({}_{\infty}\) = 5 m/s. Each grid point in the figure is associated with a velocity measurement; the spatial resolution was 0.5 mm in both x and y directions, totaling approximately 400 measurements for each condition. All DBD actuations cause a separated region (for U\({}_{\infty}\) = 5 m/s, RE=35,000) with a negative x-velocity downstream of the plasma injection. With the increase in DBD voltage, the edge of the reversed flow region within the separation bubble extends from 3.0 mm (12 kV) to \(>\)10.0 mm (18 kV) in the x-direction from the edge of the active electrode and from 0.6 mm (12kV) to \(>\) 1.75 mm (18 kV) in the y-direction. This growth in the length and height of the reversed flow region of the separated bubble is nonlinear with increasing voltage. The size and shape of the recirculation bubble are determined by the competition of the EHD jet strength vs. the forward momentum in the boundary layer. As the EHD injects negative momentum at high DBD voltages, it can overtake the momentum in the boundary layer at greater heights. Without sampling within the plasma region, it is challenging to characterize the entire length of the separation regions. It can be expected that the separation region extends into the forcing plasma region. Multiphysics CFD simulations can potentially address this issue; however, robust models need to be developed and validated. Although multiphysics CFD simulations are not available, the separation region is not fully explored. 
Figure 12: **X-velocity contour plot for EHD jet in counter-flow for U\({}_{\infty}\) = 5 m/s for varying voltages. Gridlines correspond to recorded points spaced 0.5 mm apart in the x- and y-direction.**

While a full analysis is beyond the scope of this paper, preliminary results with a simplified momentum model proposed in [95] also suggest that the momentum injection in the counter-flow scenario triggers flow separation, though additional validation is required.

### Momentum Difference

This section discusses the momentum difference in the boundary layer due to the EHD momentum injection. The momentum difference is calculated at x = 10 mm downstream of the DBD wall jet by integrating the velocity profiles in the y-direction up to a height where mass and momentum are injected and not entrained. Note that the x = 10 mm location is in the direction of the external flow with the datum at the plasma generation edge of the DBD actuator. Thus, in the co-flow case, the x = 10 mm location is downstream, or in front of the plasma region, and above the dielectric and grounded electrode (Figure 5); however, in the counter-flow case, the x = 10 mm location is behind the plasma region and above the active electrode (Figure 8). Figure 13 compares the DBD with external flow cases against an EHD jet in a quiescent environment. The absolute value of the momentum difference is shown, as the momentum difference is calculated by subtracting the counter-flow actuation profile from the un-actuated boundary layer profile. The literature does not contain any momentum comparison between the co- and counter-flow DBD injections. In the co-flow scenarios, the momentum addition into the boundary layer is equal to or lower than the momentum injected by the EHD jet. The momentum difference appears linear with V\({}_{\mathrm{AC}}\); however, the change in total momentum is relatively flat, suggesting that momentum dissipation is driven primarily by the inner-layer wall jet interaction with the wall. At 11 m/s co-flow, lower momentum differences are found for all voltages, and an increase in turbulent dissipation can explain this. The increase in dissipation is shown in the velocity profiles in Figure 6 and Figure 7, as the effect of the DBD momentum can still be seen at x = 25 mm downstream when U\({}_{\infty}\) = 5 m/s (Re = 35,000), but the effect of the DBD momentum is almost unseen at x = 25 mm downstream when U\({}_{\infty}\) = 11 m/s (Re = 75,000). Unlike the experiments in quiescent conditions [95], the fluid momentum of the EHD jet in the co-flow injection is not conserved as it travels downstream. In the counter-flow configuration, the momentum difference is more significant due to the reversing flow near the wall. The largest difference is observed in counter-flow at U\({}_{\infty}\) = 5 m/s. The momentum difference is ~6.5x greater than its co-flow counterpart.

Figure 13: DBD momentum difference at U\({}_{\infty}\) = 5 m/s and U\({}_{\infty}\) = 11 m/s external flow

The ratio of momenta within the DBD jet heights, \(M^{*}\), is proposed as a nondimensional relation that could predict separation in the counter-flow injection. \(M^{*}\) represents the DBD momentum injection compared to the inertial force in the external flow.
\(M^{*}\) is defined as: \[M^{*}=\frac{M_{DBD}}{M_{External}}=\frac{\int_{0}^{h_{\rm jet}}{U_{quiescent~ {}DBD}}^{2}(\nu)dy}{\int_{0}^{h_{\rm jet}}{U_{external~{}flow}}^{2}(\nu)dy} \tag{7}\] The M\({}_{\rm DBD}\) value and the height of the jet (h\({}_{\rm jet}\)) in a quiescent environment can be directly measured or estimated from the empirical relationship as proposed by Tang et al. [95]. E.g., the height of the DBD jet in Figure 4 varies from 2 mm to 2.25 mm at the range of the voltages tested. The M\({}_{\rm External}\) value can be estimated analytically, numerically, or experimentally for a given value of the h\({}_{\rm jet}\). The ratio of the terms can be evaluated to determine flow separation criteria \(M^{*}\); the higher values are likely to result in flow separation. The values of \(M^{*}\) in the 5 m/s and 11 m/s cases are shown in Table 1. In this limited set of experiments, the separation was observed for cases with \(M^{*}>0.1\) (for \(M^{*}<0.1\), counter-flow DBD did not induce separation). Additional testing and numerical simulations at various DBD and external flow conditions could further define separation threshold criteria. Note that MDBD varies with DBD parameters and the x-position within the jet as it expands and loses momentum due to viscous dissipation. At the same time, the value MExternal depends on the external flow conditions and the x-position of the DBD jet as the jet thickness changes with the x-position. ## 4 Momentum Injection Model Numerical modeling of DBD has generally been categorized into three categories with increasing complexity: momentum injection models, simplified ion injection models, and species transport models. Momentum injection models such as that of Yoon _et al._[34] and Kriegseis _et al._[96] have been shown to accurately predict steady-state fluid flowfield of DBD actuators in a few configurations using empirically estimated forcing fields while remaining extremely computationally inexpensive. Simplified ion injection models such as the Orlov [113], Shyy [23], and the Suzen and Huang model [29] employ analytically or semi-empirically estimated charge density boundary conditions or charged density regions to model the transport of electrons and generalized ions. The electric potential and the charge density are then used to calculate the electrostatic Lorentz force with no magnetic field, and this EHD force is then coupled to the Navier-Stokes. In species transport models such as that of Bie _et al._[114] and Soloviev _et al._[115], the transport of the dominant chemical species, the resulting ions, and the radicals is modeled, and the distribution of ions is used to compute the electrostatic force at each time, similar to the simple ion models. Simplified ion and momentum injection models have been popular as they can readily be used for different applications while remaining relatively computationally lean. An early implementation of a momentum injection model based on previously published empirical measurements is tested in co-flow and counter-flow. No published DBD model has been tested in an external co-flow or counter-flow. This supplemental information further supports the previously proposed momentum injection model in an external flow while providing insights into the fluid interactions at the focus of this manuscript. The two-dimensional schematic is identical to Figure 5 and Figure 8. The domain height is set to match the experimental wind tunnel of 10cm. 
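Stepping back briefly to the separation criterion: Eq. (7) reduces to a pair of momentum integrals over the jet height and can be evaluated directly from measured profiles. The sketch below is illustrative only — the profile shapes and numbers are invented stand-ins for pitot data — but it shows the mechanics of the calculation and the \(M^{*}>0.1\) threshold observed in these experiments.

```python
import numpy as np

# Invented stand-in profiles (y in mm, u in m/s); real inputs would be the quiescent
# DBD jet profile and the undisturbed external-flow profile from the pitot scans.
h_jet = 2.25                                             # DBD jet height, ~2-2.25 mm (Figure 4)
y = np.linspace(0.0, h_jet, 46)
u_dbd = 3.5 * (y / 0.5) * np.exp(1.0 - y / 0.5)          # wall-jet-like shape, peak near y = 0.5 mm
u_ext = 5.0 * np.clip(y / 8.0, 0.0, 1.0)                 # linearised BL, height ~8 mm at Re = 35,000

def momentum_integral(u, y):
    """Trapezoidal integral of u^2 dy (density cancels in the ratio of Eq. (7))."""
    return float(np.sum(0.5 * (u[1:] ** 2 + u[:-1] ** 2) * np.diff(y)))

m_star = momentum_integral(u_dbd, y) / momentum_integral(u_ext, y)
print(f"M* = {m_star:.2f} -> separation {'expected' if m_star > 0.1 else 'not expected'}")
```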
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Reynolds** & **BL Height (mm)** & **DBD Momentum (mN/m)** & **M*** & **Separation** \\ \hline 35,000 & 8.0 & 6 – 22 & 0.14 – 0.52 & Yes \\ 75,000 & 2.5 & 6 – 22 & 0.02 – 0.07 & No \\ \hline \hline \end{tabular} \end{table} Table 1: **Conditions of Separation due to a DBD Jet in Counter-Flow**

The velocity profile of the wind tunnel is defined as a custom user-defined velocity profile. A mesh of 332,000 cells is employed with refinement near the forcing region, and coarser meshes were tested to ensure mesh independence. Since the DBD actuator does not introduce a fluid mass flux, the governing equations are the incompressible Navier-Stokes continuity and momentum equations with an added momentum source, defined as \[\nabla\cdot\mathbf{u}=0 \tag{8}\] \[\rho\frac{D\boldsymbol{u}}{Dt}=-\nabla P+\mu\nabla^{2}\boldsymbol{u}+\vec{f}_{EHD} \tag{9}\] The momentum injection is applied within a right-triangle region, similar to the approach of Shyy et al. [23], which assumes a linear relationship between plasma length and height. The authors' previous work shows that the plasma length approaches \(\sim 8\) mm for these electrical parameters, and the length-to-height ratio is approximately constant at L/H = 4. For all simulations, the steady-state k-\(\omega\) turbulence model is used. Steady-state DBD forcing is commonly assumed because the electrostatic and unsteady forcing timescales and variances are considered sufficiently small. High-temporal-resolution PIV data has shown that the time fluctuations of the DBD jet are often approximately 10% within a single voltage period; however, this variance depends on the applied voltage waveform [82, 116]. Yoon _et al._ used the one-equation Spalart-Allmaras turbulence model to model the DBD jet in quiescent flow; however, this significantly underpredicted the separation region in the counter-flow. The resulting velocity profiles and momentum difference calculations are presented below in Figure 14. In the co-flow configuration, the momentum injection model matches the maximum velocity within 10%. However, the location of the maximum velocity in the model is higher than experimentally measured, and the velocity difference appears more compact. This is believed to be due to the inability of a two-dimensional CFD simulation to capture turbulence effects well. In addition, the higher location of the maximum velocity suggests that the forcing distribution should likely be more concentrated in the near-wall region, possibly similar to the modified Gaussian used in Yoon et al. [34]. The total momentum difference matches the experimental measurement using the velocity profile integrated at x = 10 mm.

Figure 14: **Numerical and experimental DBD velocity profiles with 19.5 kV 2 kHz forcing (\(\sim\)22 mN/m) in 5 m/s external co- and counter-flow.**

In the counter-flow configurations, the strength of the separation region is overpredicted. This is believed to be due to the k-\(\omega\) turbulence model incorrectly predicting the separation region, which is generally in line with many other adverse-pressure-gradient studies, including one on the backward-facing step that over-predicted shear stress and reattachment locations [117]. The momentum deficit measured at x = 10 mm has an error of about 30%.
In addition, experimental results show mass entrainment at higher locations in the wind tunnel compared to the numerical results that show entrainment as low as \(\sim\)7mm above the plate. The downstream dissipation is not matched well, as the x = 25mm experimental profile shows higher dissipation than the numerical solution. Overall, the presence of a separation region in the counter-flow case was the most challenging case to match. A basic numerical simulation will unlikely accurately predict flow in a boundary layer with a strong adverse pressure gradient inducing separation. Thus, advanced numerical efforts are needed to predict this DBD-induced separation region accurately. The main strength, yet main limitation, of this model, is its simplicity. However, that potential can only be fulfilled by further understanding the dependencies on turbulent models, unsteady forcing, and plasma forcing volume. More advanced turbulent models such as Large Eddy Simulation (LES), Detached Eddy Simulation (DES), or Direct Numerical Simulation (DNS) may be needed to resolve viscous effects, which is especially important in counter-flow correctly. Time-averaged forcing with a triangular plasma force shape may be appropriate for simple cases such as in co-flow, but in the cases such as the counter-flow or crossflow, shear stress and turbulent effects over the momentum deficit volume are magnified, and proper time-resolution may be required. With these aspects tackled, this model can serve as a valuable DBD design tool while providing accurate results and shedding important insight into the DBD forcing. Figure 15: **Numerical and experimental DBD momentum difference with 19.5 kV 2 kHz forcing (\(\sim\)22 mN/m) in 5 m/s external co-flow and counter-flow** ## 5 Conclusion We have experimentally investigated the performance of a DBD plasma actuator over a range of voltages (12 kV - 19.5 kV) at 2 kHz in co-flow and counter-flow with freestream velocities of 5 m/s and 11 m/s. The power consumption associated with DBD discharge is measured through capacitive measurements, with high temporal resolution throughout several cycles. For all voltages and freestream conditions in this experiment, there was no significant difference in power expenditure between the co-flow, counter-flow, and quiescent conditions, consistent with previous results. The DBD jet increased boundary layer velocity by \(>2.0\) m/s in co-flow and decreased the boundary layer velocity \(>5\) m/s in counter-flow (leading to fully reversed flow near the wall). The momentum difference in counter-flow leads to flow separation; separation zone boundaries and velocity magnitudes were evaluated using velocity magnitude contour plots. At low freestream velocities, the EHD jet significantly influences boundary layer flow, and the dissipation is driven by the interaction of the DBD wall jet inner layer and the wall. However, at the higher freestream velocity, the external flow affects the outer layer of the EHD due to the more effective turbulent mixing. The counter-flow momentum difference is 6.5 times higher than its co-flow counterpart at U\({}_{\infty}\) = 5 m/s. The momentum difference in counter-flow offers promising results for active flow control applications. A non-dimensional flow separation criteria M* is proposed as the ratio of DBD jet momentum to integrated boundary layer momentum. This experimental data set can be used to develop models and validate multiphysics simulations for EHD flow. 
Future research should extend the understanding of the relationship between the unsteady forcing of the DBD and the turbulent characteristics of the external flow.

## 6 Acknowledgments

This work was funded by the Joint Center for Aerospace Technology Innovation (JCATI).

## Nomenclature

\begin{tabular}{|l|l|} \hline \(\mathcal{C}\) & Pitot tube correction factor \\ \hline \(E\) & Electric field \\ \hline \(f_{AC}\) & Frequency of the applied voltage \\ \hline \(\vec{f}_{EHD}\) & Electro-hydrodynamic force term \\ \hline \(i(t)\) & Current \\ \hline \(I_{dis}\) & Discharge current \\ \hline \(L\) & Spanwise length \\ \hline \(M\) & Momentum of the induced jet \\ \hline \(P\) & Pressure reading from the Pitot tube \\ \hline \(W\) & Discharge energy consumption \\ \hline \(W_{mech}\) & Mechanical power \\ \hline \(W_{elec}\) & Electrical power \\ \hline \(U(y)\) & Velocity at height y \\ \hline \(U_{max}\) & Maximum velocity of the wall jet \\ \hline \(U_{\infty}\) & External flow velocity \\ \hline \(V_{AC}\) & AC voltage applied to the DBD actuator \\ \hline \(v\) & Time-averaged velocity \\ \hline \(t^{*}\) & Normalized time value \\ \hline \(\rho\) & Density \\ \hline \(Q\) & Mass flow rate \\ \hline \end{tabular}
Through ionization and interaction with neutral molecules under an applied electric field, dielectric barrier discharge (DBD) plasma actuators can generate a wall jet with no moving parts. The interplay of electrohydrodynamic (EHD), turbulent, inertial, and viscous effects in the fluid boundary layer is still not well understood and requires investigation. In this paper, momentum injection by a DBD actuator is tested at Re = 35,000 and 75,000, in co-flow and counter-flow scenarios, at applied voltages of VAC = 12 kV-19.5 kV peak-to-peak and a frequency of 2 kHz. In the co-flow configuration, the DBD actuator injects momentum into the boundary layer. In the counter-flow case, the momentum injection
2309.07932
Flat origami is Turing Complete
Flat origami refers to the folding of flat, zero-curvature paper such that the finished object lies in a plane. Mathematically, flat origami consists of a continuous, piecewise isometric map $f:P\subseteq\mathbb{R}^2\to\mathbb{R}^2$ along with a layer ordering $\lambda_f:P\times P\to \{-1,1\}$ that tracks which points of $P$ are above/below others when folded. The set of crease lines that a flat origami makes (i.e., the set on which the mapping $f$ is non-differentiable) is called its \textit{crease pattern}. Flat origami mappings and their layer orderings can possess surprisingly intricate structure. For instance, determining whether or not a given straight-line planar graph drawn on $P$ is the crease pattern for some flat origami has been shown to be an NP-complete problem, and this result from 1996 led to numerous explorations in computational aspects of flat origami. In this paper we prove that flat origami, when viewed as a computational device, is Turing complete. We do this by showing that flat origami crease patterns with \textit{optional creases} (creases that might be folded or remain unfolded depending on constraints imposed by other creases or inputs) can be constructed to simulate Rule 110, a one-dimensional cellular automaton that was proven to be Turing complete by Matthew Cook in 2004.
Thomas C. Hull, Inna Zakharevich
2023-09-13T20:15:49
http://arxiv.org/abs/2309.07932v2
# Flat origami is Turing complete ###### Abstract. _Flat origami_ refers to the folding of flat, zero-curvature paper such that the finished object lies in a plane. Mathematically, flat origami consists of a continuous, piecewise isometric map \(f:P\subseteq\mathbb{R}^{2}\to\mathbb{R}^{2}\) along with a layer ordering \(\lambda_{f}:P\times P\to\{-1,1\}\) that tracks which points of \(P\) are above/below others when folded. The set of crease lines that a flat origami makes (i.e., the set on which the mapping \(f\) is non-differentiable) is called its _crease pattern_. Flat origami mappings and their layer orderings can possess surprisingly intricate structure. For instance, determining whether or not a given straight-line planar graph drawn on \(P\) is the crease pattern for some flat origami has been shown to be an NP-complete problem, and this result from 1996 led to numerous explorations in computational aspects of flat origami. In this paper we prove that flat origami, when viewed as a computational device, is Turing complete. We do this by showing that flat origami crease patterns with _optional creases_ (creases that might be folded or remain unfolded depending on constraints imposed by other creases or inputs) can be constructed to simulate Rule 110, a one-dimensional cellular automaton that was proven to be Turing complete by Matthew Cook in 2004. ###### Contents * 1 Introduction * 2 Conventions and preliminaries * 3 Logic gates and other gadgets * 4 Folding Rule 110 * 5 Conclusion ## 1. Introduction Origami, the art of paper folding, has lately been a source of inspiration for applications in mechanical engineering [11, 12], materials science [13, 14], and architecture [15]. Helping this interest has been the rise of _computational origami_, which studies computational questions that arise in the folding of paper, as a field in computational and combinatorial geometry [16]. Of particular interest has been _flat origami_, where a two-dimensional sheet of paper, or all of \(\mathbb{R}^{2}\), is folded into a flat object, back into the plane without stretching, tearing, or self-intersecting the paper. For example, in 1996 Bern and Hayes proved that the decidability question of whether a given crease pattern can fold flat is NP-hard [1]. However, because of the difficulty in rigorously modeling flat origami, a hole in their proof remained undetected for 20 years until Akitaya et al. repaired and strengthened their proof in 2016 [1]. In this paper we prove that flat origami is also Turing complete. That is, it is theoretically possible to design origami crease patterns that encode logical inputs and then, in the process of folding flat, preform computations equivalent to a Turing machine. We do this by proving that flat origami can simulate the Rule 110 cellular automaton, which has been proven to be Turing complete [16]. Our approach is to make use of _optional creases_ in our crease patterns to help encode Boolean variables and design logic gates, an approach that has been used in prior work to explore the complexity of origami [1]. In Section 2 we will formally define our model of flat-foldable origami, define Rule 110, and establish conventions in our approach. In Section 3 we will define and prove the correctness of the origami gadgets we will use to transmit Boolean signals and simulate logic gates. In Section 4 we will put our gadgets together to simulate cellular automata, in particular Rule 110. ### Acknowledgements The second author is supported in part by NSF DMS-1846767. 
## 2. Conventions and preliminaries ### Background We follow a model and terminology for planar, two-dimensional flat origami as presented in [1] and [1].1 A flat-folded piece of paper may be modeled using two structures: an isometric folding map and a layer ordering. An _isometric folding map_ is a continuous, piecewise isometry \(f:P\subseteq\mathbb{R}^{2}\to\mathbb{R}^{2}\) where \(P\) is closed. The _crease pattern_ of \(f\), denoted \(X_{f}\), is the set of points on \(P\) on which \(f\) is non-differentiable, union with the boundary of \(P\). One can prove [11, 12] that Footnote 1: The flat-folding of manifolds in general dimension is also possible and follows many of the properties of the flat, 2D case presented here. See [12] and [11] for more details. * \(X_{f}\) is a plane graph on \(P\) whose interior edges, which we call _creases_, are straight line segments, * every interior vertex of \(X_{f}\) has even degree, * the faces defined by the embedding of \(X_{f}\) on \(P\) are \(2\)-colorable, where one color class is made of regions of \(P\) whose orientation are preserved under \(f\) and the other color class faces are orientation-reversed under \(f\), and * around each interior vertex \(v\) of \(X_{f}\) the alternating sum of the sector angles between the creases at \(v\), say going in order counterclockwise, equals zero (this is called _Kawasaki's Theorem_). We will use Kawasaki's Theorem throughout our proofs, and so we formalize what it states for a single vertex in a crease pattern: **Theorem 1** (Kawasaki's Theorem [11, Theorem 5.37]).: _A collection of line segments or rays that share a common endpoint \(v\in\mathbb{R}^{2}\) and whose consecutive sector angles are \(\alpha_{1},\dots,\alpha_{2n}\) will be flat-foldable (meaning they are part of a crease pattern \(X_{f}\) for some isometric folding map \(f\)) if and only if \(\sum(-1)^{k}\alpha_{k}=0\). Since our crease patterns exist in a flat plane, this is equivalent to_ \[\alpha_{1}+\alpha_{3}+\dots+\alpha_{2n-1}=\alpha_{2}+\alpha_{4}+\dots+\alpha_ {2n}=\pi.\] Modeling flat-folded origami also requires the concepts of layer ordering and mountain-valley creases, which require additional structure be added to an isometric folding map. First, we introduce some terminology. A simply connected subset of \(U\subset P\) is called _uncreased_ under \(f\) if \(f\) restricted to \(U\) is injective. Two simply connected subsets \(U_{1},U_{2}\subset P\)_overlap_ under \(f\) if \(f(U_{1})\cap f(U_{2})\neq\emptyset\), and we say that \(U_{1}\) and \(U_{2}\)_strictly overlap_ under \(f\) if \(f(U_{1})=f(U_{2})\). A _global layer ordering_ for an isometric folding map \(f\) is a function \(\lambda_{f}:A\subset P\times P\to\{-1,1\}\) that records which points of \(P\) are above/below which others when folded under \(f\), with \(\lambda_{f}(p,q)=1\) meaning that \(p\) is below \(q\) and \(\lambda_{f}(p,q)=-1\) meaning \(p\) is above \(q\). Specifically, \(\lambda_{f}\) is a global layer ordering if the following six properties are satisfied (adopted from [1]): * **Existence:** The domain \(A\) is defined as all \((p,q)\in P\times P\) such that \(f(p)=f(q)\). That is, the layer ordering \(\lambda_{f}\) only exists between points that overlap in the folding. * **Antisymmetry:**\(\lambda_{f}(p,q)=-\lambda_{f}(q,p)\) for all \((p,q)\in A\). That is, if \(p\) is above \(q\) then \(q\) is below \(p\). * **Transitivity:** If \(\lambda_{f}(p,q)=\lambda_{f}(q,r)\) then \(\lambda_{f}(p,r)=\lambda_{f}(p,q)\). 
That is, if \(q\) is above \(p\) and \(r\) is above \(q\), then \(r\) is above \(p\). * **Tortilla-Tortilla Property (Consistency):** For any two uncreased, simply connected subsets \(U_{1},U_{2}\subset P\) that strictly overlap under \(f\), \(\lambda_{f}\) has the same value for all \((p,q)\in(U_{1}\times U_{2})\cap A\). I.e., Figure 1. (a) The tortilla-tortilla condition being satisfied. (b) The taco-tortilla condition _not_ being satisfied. (c) The taco-taco condition _not_ being satisfied. if two regions in \(P\) completely overlap under \(f\), then one must be entirely above the other. This is illustrated in Figure 1(a). * **Taco-Tortilla Property (Face-Crease Non-crossing):** For any three uncreased, simply connected subsets \(U_{1},U_{2},U_{3}\subset P\) such that (a) \(U_{1}\) and \(U_{3}\) are separated by an edge \(e\) in \(X_{f}\) (i.e., adjacent regions in \(X_{f}\)) and strictly overlap under \(f\) and (b) \(U_{2}\) overlaps the edge \(e\) under \(f\), then \(\lambda_{f}(p,q)=-\lambda(q,r)\) for any points \((p,q,r)\in(U_{1}\times U_{2}\times U_{3})\cap A\). I.e., if a region overlaps a nonadjacent internal crease, the region cannot lie between the regions adjacent to the crease in the folding. This is illustrated in Figure 1(b). * **Taco-Taco Property (Crease-Crease Non-crossing):** If we have uncreased, simply connected adjacent subsets \(U_{1}\) and \(V_{1}\) of \(P\) separated by a crease \(e_{1}\) in \(X_{f}\) and \(U_{2}\) and \(V_{2}\) separated by a crease \(e_{2}\) such that the subsets all strictly overlap under \(f\) and the creases \(e_{1}\) and \(e_{2}\) strictly overlap under \(f\), then for any point \((p,q,r,s)\in(U_{1}\times V_{1}\times U_{2}\times V_{2})\cap A\) either \(\{\lambda_{f}(p,r),\lambda_{f}(p,s),\lambda_{f}(q,r),\lambda_{f}(q,s)\}\) are all the same or half are \(+1\) and half are \(-1\). I.e., if two creases overlap in the folding, either the regions of paper adjacent to one crease lie entirely above the regions of paper adjacent to the other crease, or the regions of one nest inside the regions of the other. This is illustrated in Figure 1(c). A global layer ordering ensures that if an actual piece of paper \(P\) is to be folded according to an isometric folding map \(f\) as determined by its crease pattern \(X_{f}\), then this can be done without \(P\) intersecting itself. This is the generally-accepted definition of what it means for a crease pattern to be _globally flat-foldable_[1, 1, 1, 2]. An isometric folding map \(f\) and global layer ordering \(\lambda_{f}\) determine a dichotomy for the creases of \(X_{f}\), called the _mountain-valley (MV) assignment for \((f,\lambda_{f})\)_. Specifically, if a crease \(e\) of \(X_{f}\) is bordered by faces \(U_{1}\) and \(U_{2}\) and \(p\in U_{1}\), \(q\in U_{2}\) are close to \(e\) with \(f(p)=f(q)\), then * if the orientation of \(U_{1}\) is preserved under \(f\) and \(\lambda_{f}(p,q)=1\), then \(e\) is a _valley_ crease, and * if the orientation of \(U_{1}\) is preserved under \(f\) and \(\lambda_{f}(p,q)=-1\), then \(e\) is a _mountain_ crease. Mountain and valley creases correspond to what we see in physically folded paper, where paper bends in either the \(\vee\) (mountain) or \(\wedge\) (valley) direction. A fundamental result about the mountains and valleys that meet at a flat-folded vertex, which we will often use in our proofs, is _Maekawa's Theorem_: **Theorem 2** (Maekawa's Theorem [1, p. 
81]).: _If a crease pattern flat-folds around a vertex, the difference between the number of mountain folds and the number of valley folds meeting at that vertex must be \(2\)._

A generalization of Maekawa and Kawasaki's Theorems that we will need in the proof of Proposition 6 below is the following:

**Theorem 3** (Justin's Theorem [2, 16]).: _Let \(\gamma\) be a simple, closed, vertex-avoiding curve on a flat-foldable crease pattern. Let \(\alpha_{i}\) be the signed angles, in order, between the consecutive creases that \(\gamma\) crosses, for \(1\leq i\leq 2n\). Also let \(M\) and \(V\) be the number of mountain and valley creases, respectively, that \(\gamma\) crosses. Then,_ \[\alpha_{1}+\alpha_{3}+\cdots+\alpha_{2n-1}\equiv\alpha_{2}+\alpha_{4}+\cdots+\alpha_{2n}\equiv\frac{M-V}{2}\pi\mod 2\pi. \tag{2.1}\]

Computing a global layer ordering, or determining that none exists, for a given isometric folding map is computationally intensive and the main reason why the global flat-foldability problem is NP-hard [1]. A useful tool that we will employ in the proof of Lemma 15 is the _superposition net_ (or _s-net_ for short) that Justin introduced in [2]. The s-net is a superset of the crease pattern \(X_{f}\) of an isometric folding map given by \(S_{f}=f^{-1}(f(X_{f}))\). That is, \(S_{f}\) is the pre-image of the folded image of the crease pattern. This is helpful because the points of \(S_{f}\) are places where the tortilla-tortilla, taco-tortilla, or taco-taco properties might fail.

### Rule 110 and our conventions

Rule 110 is an elementary (1-dimensional) cellular automaton using the rule table shown in Figure 2. We model a 1 as TRUE and a 0 as FALSE, or black and white pixels, as in Figure 2, where each 1-dimensional state of the cellular automaton is stacked vertically to show the step-by-step evolution of the system. Note that if all inputs are set to 0 the automaton stays constant at 0. We will simulate Rule 110 in an origami crease pattern by establishing conventions by which the creases can be interpreted as storing and manipulating Boolean variables. In a similar strategy to prior work on origami complexity [1, 10, 11, 12, 13], we use _directed pleats_ (sequences of parallel mountain/valley crease lines) to send TRUE/FALSE signals across the folded paper; we call such directed pleats _wires_. Our wires are triplets of parallel creases, with a mandatory mountain in the middle and optional valleys to the left and right. We will orient our crease patterns so that the direction of all wires is in the "downward," decreasing \(y\) direction in \(\mathbb{R}^{2}\). The value of a wire is decided as follows: if the pleat is folded to the right relative to its direction then it is FALSE; if it is folded to the left then it is TRUE. The information in a wire consists of the choice of which valley fold is used. The labeling conventions we will use in the crease pattern figures in this paper are shown in Figure 3, with solid lines depicting mountain creases and dashed lines being valleys, which is standard in much origami literature. Also, black creases will be mandatory and blue will be optional. An illustration of a wire and its direction is also shown in Figure 3. In this paper we do not prove that the given crease patterns will fold flat in accordance with the stated operation; that can be verified directly via folding. In our proofs we will simply verify that the given crease patterns will _not_ fold in other ways.
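Kawasaki's and Maekawa's Theorems are used throughout the proofs below, and at a single vertex both reduce to elementary arithmetic. As a quick illustration (the vertex, angles, and mountain-valley string here are hypothetical, and the two checks are necessary rather than sufficient conditions for flat-foldability), they can be tested as follows.

```python
import math

def kawasaki_ok(sector_angles, tol=1e-9):
    """Kawasaki's Theorem (necessary condition): an even number of creases meet the
    vertex and the alternating sum of consecutive sector angles is zero."""
    alternating = sum((-1) ** k * a for k, a in enumerate(sector_angles))
    return len(sector_angles) % 2 == 0 and abs(alternating) < tol

def maekawa_ok(mv_string):
    """Maekawa's Theorem (necessary condition): |#mountains - #valleys| = 2."""
    return abs(mv_string.count("M") - mv_string.count("V")) == 2

# A hypothetical degree-4 vertex with sector angles 90, 45, 90, 135 degrees and
# creases assigned mountain, valley, mountain, mountain (in the same cyclic order).
angles = [math.pi / 2, math.pi / 4, math.pi / 2, 3 * math.pi / 4]
print(kawasaki_ok(angles), maekawa_ok("MVMM"))   # both True
```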
### Fundamental results **Definition 4**.: In a crease pattern with optional creases, a crease is _active_ in a flat-folding if it is used. **Definition 5**.: In this paper we will be working on an infinite triangular grid of triangles with side-length 1. For any line on the grid, the _next line_ in some direction is the closest line in that direction which is parallel to it; if \(\ell,\ell^{\prime}\) are two lines and \(\ell^{\prime}\) is the next line then \(\ell\) and \(\ell^{\prime}\) are _consecutive_. A _wire_ is three consecutive creases, where the middle crease is a mountain crease, and the two outer creases are optional valley creases. A _gadget_ is a subset of a crease pattern in a region of the plane bounded by a simple closed curve such that the only creases that intersect the boundary are wires. Note that in order for a wire to have a well-defined Boolean value, we need exactly one of its optional valley creases to be active. However, by themselves any wire could have all three of its creases folded, and thus we want the gadgets that wires enter and exit to force the wires to have one, and only one active crease. The below Proposition will help ensure this. Figure 3. A guide to the mountain/valley labeling conventions uses in our figures and an example of a wire. Figure 2. The table for Rule 110 and ten rows of its evolution from a single TRUE pixel. **Proposition 6**.: _Let \(G\) be a gadget with angles \(\theta_{1},\ldots,\theta_{k}\) between consecutive boundary wires (calculated as one transverses the boundary counter-clockwise). Suppose that for all nonempty proper subsets \(J\subsetneq\{1,\ldots,k\}\) we have_ \[\sum_{j\in J}\theta_{j}\neq 0\pmod{\pi}.\] _If we pick optional creases to make a sub-crease pattern \(G^{\prime}\subset G\) be flat-foldable then each wire in \(G^{\prime}\) contains exactly one active valley fold._ Proof.: The angle between consecutive creases in a wire is \(0\). In addition, each wire contains at least one active crease (the mountain crease). Applying Justin's Theorem to our sub-crease pattern \(G^{\prime}\) with \(\gamma\) the gadget's boundary and \(\alpha_{1},\ldots,\alpha_{n}\) the angles in counterclockwise order between the creases in \(G^{\prime}\) that \(\gamma\) crosses,we have \(\alpha_{1}+\alpha_{3}+\cdots+\alpha_{n-1}\equiv\alpha_{2}+\alpha_{4}+\cdots+ \alpha_{n}\equiv m\pi\pmod{2\pi}\) for some constant \(m\). Now, each \(\alpha_{i}\) equals some \(\theta_{j}\) or is zero, so our assumption that no proper subset of the \(\theta_{j}\)'s adds up to a multiple of \(\pi\) must also be true for the non-zero \(\alpha_{i}\)'s. Therefore, the non-zero \(\alpha_{i}\)'s all have either \(i\) being odd or all have \(i\) being even. But if both (or neither) of the valley folds in a wire of \(G\) were active in \(G^{\prime}\), then the angles \(\theta_{j}\) and \(\theta_{j+1}\) surrounding the wire would appear among the \(\alpha_{i}\) angles with one having an odd \(i\) and one an even \(i\), which is a contradiction. Thus every wire in \(G^{\prime}\) contains exactly one active valley fold. As a corollary we get: **Corollary 7**.: _Let \(G\) be a gadget with two input wires and two output wires, with consecutive angles between wires adding up to \(\pi\). If we know that the input wires each contain exactly one active valley fold, the output wires must each contain exactly one active valley fold._ ## 3. 
Logic gates and other gadgets In this section we show how to construct logical gates (AND, OR, NAND, NOR, NOT) as well as inter-sector, twist, and eater gadgets via origami with optional creases. These will form the building blocks of our Rule 110 flat origami simulation. Our over-all scheme is to build our crease pattern on a triangle lattice, and so most of our gadgets will possess triangle or hexagonal symmetry. We also include gadgets for some logic gates (AND and NOT) merely for completeness, as they are not used in the final construction. For the two-input logic gates we assume that we are given two wires at an angle of \(2\pi/3\) with information coming "in" to the gate from the positive \(y\)-direction. The output is a crease at an angle of \(2\pi/3\) with each of the inputs. The NOT gate (Section 3.3) is somewhat strange, as it requires an "auxilliary" pleat which is not affected by the gate. In addition, in Section 3.4, we show that it is possible to "intersect" two wires which meet at an angle of \(\pi/3\) without affecting their values. Sections 3.5 and 3.6 contain the twist and eater gadgets. In the below Propositions, we say that a logic gadget _works_ if the values of the input wires force the correct output wire value in accordance to the desired logic gate. ### NOR and NAND **Proposition 8**.: _The NOR gate (Figure 4) works._ Proof.: First, suppose that the upper-right input is TRUE. Thus the crease at \(\pi/6\) from \(X\) is active, and therefore so is the crease at \(-5\pi/6\) from \(X\), since only one of the optional valley creases through \(X\) may be used by Kawasaki's Theorem. That is, the crease at \(-\pi/2\) cannot be active, and thus the output is FALSE. Before considering the other two cases, a basic observation. Consider the following crease pattern around a point: (3.1) This will not fold flat, since the wedge which is \(\pi/6\) wide is too narrow: the wedges on either side of it will collide if we try to flat-fold it. Thus such a configuration is impossible. Now suppose that both inputs are FALSE. By Kawasaki's Theorem around \(Y\), the fold \(YY^{\prime}\) must be active while none of the others can be (since the fold at \(\pi/6\) is active and the fold at \(5\pi/6\) is not). By Maekawa's Theorem neither of the other two valley folds through \(Y^{\prime}\) can be active. This means that the folds at \(\pi/3\), \(2\pi/3\) and \(5\pi/6\) through \(O\) are not active. From this it follows that none of the folds through \(O\) can be active, since otherwise they must all be and then they exactly form the impossible configuration discussed in (3.1). Thus the folds at \(0\), \(\pi/3\) and \(\pi/2\) through \(Z\) are not active, and therefore the fold at \(-\pi/2\) through \(Z\) cannot be active. Thus the output is TRUE, as desired. Lastly, suppose that the left-hand input is TRUE and the right-hand input is FALSE. Then the folds at \(\pi/6\) and \(5\pi/6\) through \(Y\) are active, and therefore, for Kawasaki's and Maekawa's Theorems to hold around \(Y\), the folds at \(-\pi/3\) and \(-2\pi/3\) must be active and \(YY^{\prime}\) and \(YX^{\prime}\) not active. Thus the folds at \(\pi/3\) and \(2\pi/3\) through \(O\) are active. In order for Kawasaki's Theorem to hold around \(O\), we must have the fold at \(-\pi/2\) active, and exactly one of the two folds \(OY^{\prime}\) and \(OX^{\prime}\) active. As the configuration in (3.1) is impossible, the fold \(OY^{\prime}\) cannot be the one that is active, and thus \(OX^{\prime}\) must be the one that is active. 
By Maekawa's Theorem the fold \(Y^{\prime}Z\) must therefore be active, and thus also the fold at \(-\pi/2\) through \(Z\), showing that the output is FALSE, as desired. **Proposition 9**.: _The NAND gate (Figure 5) works._ Proof.: The NAND gate is a reflection in a vertical line of the NOR gate; since reflection is orientation-reversing in the horizontal direction and orientation-preserving in the vertical direction, by Proposition 8, the output of the NAND gate with inputs \(A\) and \(B\) is \[\neg(\neg A\ \textsc{nor}\ \neg B)=\neg A\ \textsc{or}\ \neg B=A\ \textsc{nand}\ B,\] as desired. ### OR and AND **Proposition 10**.: _The OR gate (Figure 6) works._ Proof.: As before, by Proposition 6 exactly one of the output valley folds must be active. First, suppose that the left-hand input is FALSE. Then the crease at \(5\pi/6\) from \(Y\) is not active, so the only creases out of \(Y\) that can be active are \(YY^{\prime}\) and the crease at \(\pi/6\), which are either both active or not. Thus if the right-hand input is TRUE then none of the creases through \(Y\) are active. Since none of the creases through \(Y\) are active, \(YY^{\prime}\) is not active, and thus neither is \(Y^{\prime}Z\). Thus the crease at \(-\pi/2\) through Figure 4. NOR gate Figure 5. NAND gate \(Z\) cannot be active, and the output is TRUE. On the other hand, if the right-hand input is FALSE then \(YY^{\prime}\) must be active, and thus therefore so must \(Y^{\prime}Z\). Thus the crease at \(-\pi/2\) through \(Z\) must be active (since the crease at \(5\pi/6\) through \(Z\) is also active), and the input must be FALSE. Now suppose that the left-hand input is TRUE. Then the crease at \(5\pi/6\) through \(Z\) is not active. Thus \(ZO\) and \(ZZ^{\prime}\) cannot be active, and the crease at \(-\pi/2\) through \(Z\) is active if and only if \(ZY^{\prime}\) is active, which is active if and only if \(YY^{\prime}\) is active. Thus to show that the output is TRUE it suffices to check that \(YY^{\prime}\) is not active. If the right-hand input is TRUE then the crease at \(\pi/6\) through \(Y\) is not active; by Kawasaki's theorem \(YY^{\prime}\) must not be active in this case, as desired. If the right-hand input is FALSE then the crease at \(\pi/6\) through \(Y\) is active. If \(YY^{\prime}\) were active then by Kawasaki's theorem \(YO\) must be active, and none of the other creases through \(Y\) can be. But this violates Maekawa's theorem, since it has 4 valley and no mountain creases meeting at a point. Thus \(YY^{\prime}\) must not be active in this case either, as desired. **Proposition 11**.: _The AND gate (Figure 7) works._ Proof.: The AND gate is a reflection of the OR gate. By the same logic as in the proof of Proposition 9, since the OR gate works, so does the AND. ### NOT gate This NOT gate is not used in our origami construction of Rule 110, but we include it for completeness. **Proposition 12**.: _The NOT gate (Figure 8) works._ Proof.: Suppose the input is TRUE. Then the crease \(L_{4}\) above the point \(D\) is not used, which implies that all of \(L_{2}\) to the left of point \(X\) is used and \(XA\) is not used (in order to make \(X\) flat-foldable). Thus, since the creases \(L_{1}\) above and \(L_{2}\) to the right of \(A\) are both used, we have that both of the short diagonals below \(A\) are used. 
This implies that only the lower-left-to-upper-right longer diagonal between \(A\) and \(B\) is used (this diagonal is forced by the short diagonals below \(A\); the other diagonal between \(A\) and \(B\) cannot also be used because we cannot have a degree-4 vertex made of only valley creases). This implies that the two short diagonals above point \(B\) are not used, which means the crease \(L_{3}\) to the right of point \(Y\) is used and \(L_{1}\) below \(B\) is not used. Therefore the output is FALSE. By the left-right mirror symmetry of this crease pattern, if the input is FALSE the output must be TRUE. ### Intersector Intersector gadgets will be placed wherever two wires need to cross each other on our sheet of paper. They will ensure that the Boolean signals of the wires will be preserved through the intersection. **Proposition 13**.: _The intersectors (Figures 9 and 10) work._ Figure 6. OR gate Figure 7. AND gate Proof.: It suffices to check the claim for the \(\pi/3\) intersector; the analysis for the \(2\pi/3\) intersector is completely analogous. By Proposition 7, exactly one of the valley folds in each of the output wires will be active. First, suppose that both inputs are FALSE. Then at point \(Z\) one of the angles between active creases is \(\pi\), and thus both creases \(ZY^{\prime}\) and \(ZA\) are not active, and \(ZZ^{\prime}\) is active. Since \(ZY^{\prime}\) is not active, neither is \(Y^{\prime}Y\), and since the crease at \(2\pi/3\) from \(Y\) is not active either, none of the creases through \(Y\) are active. Since \(YX^{\prime}\) is not active, \(X^{\prime}A\) must be active; thus crease \(AZ^{\prime}\) must be active. Since \(AZ^{\prime}\) and \(ZZ^{\prime}\) are active, crease \(Z^{\prime}B\) must not be active, and crease \(Z^{\prime}W\) must be active. Since \(Z^{\prime}W\) and the crease at \(\pi\) through \(W\) are active, \(WB\) must be active and the crease at \(-\pi/3\) through \(W\) must be active. Thus both outputs are FALSE, as desired. Figure 8. NOT gate Figure 10. \(2\pi/3\) intersector Figure 9. \(\pi/3\) Intersector Now suppose that the left-hand input is TRUE and the top input is FALSE. Then creases \(ZY^{\prime}\) and \(ZZ^{\prime}\) must also be active, as well as crease \(Y^{\prime}Y\). Since \(Y^{\prime}Y\) is active but the crease at \(2\pi/3\) from \(Y\) is not, the crease at \(0\) from \(Y\) must be, and the right-hand output is TRUE. Since \(ZZ^{\prime}\) is active, \(Z^{\prime}B\) must not be, but all other valley folds through \(Z^{\prime}\) will be. Since \(Z^{\prime}W\) is active but the crease at \(\pi\) from \(W\) is not, the crease at \(-\pi/3\) at \(W\) must be; thus the bottom output is FALSE and the gadget folds as claimed. Now suppose that the left-hand input is FALSE and the top input is TRUE. Then neither the crease at \(2\pi/3\) nor the crease at \(\pi\) through \(Z\) are active, and thus none of the creases through \(Z\) are active. Since \(ZA\) is not active, crease \(AY\) cannot be active either; thus crease \(YX^{\prime}\) is active, but the crease at \(0\) through \(Y\) is not active; therefore, the right-hand output is FALSE. Since \(YX^{\prime}\) is active, \(X^{\prime}A\) cannot be active, and thus \(X^{\prime}B\) and \(X^{\prime}X\) must be active. But \(AZ^{\prime}\) is not active, adm thus \(Z^{\prime}B\) must be active, while \(Z^{\prime}W\) must not be. 
Considering point \(W\) we see that since the crease at \(\pi\) through \(W\) must be active, crease \(WW^{\prime}\) must also be active, and therefore so is \(W^{\prime}X\). Since \(X^{\prime}X\) and \(W^{\prime}X\) are active, the crease at \(0\) through \(X\) must also be active. Now there are two possible configurations that seem to work: just the crease at \(-\pi/6\) through \(X\) active (giving the desired output), or else creases \(XB\), \(BW\), and the crease at \(-\pi/6\) through \(W\) active--giving the incorrect output. However, the latter of these is exactly the configuration considered in Lemma 15, which shows that it cannot flat-fold. Thus the only possible configuration that flat-folds is the correct one. Lastly, suppose that both inputs are TRUE. Since the crease at \(\pi\) through \(Z\) is active but the one at \(2\pi/3\) is not, crease \(ZA\) must be active, and therefore so must \(AY\). Since crease \(AY\) is active, three of the valley folds through \(Y\) must be active, and one of them must be the crease at \(0\) through \(Y\); thus the right-hand output is TRUE. We claim that \(YY^{\prime}\) cannot be active. Indeed, suppose that it is. Then \(Y^{\prime}Z\) must also be, and therefore so is \(ZZ^{\prime}\). Then \(Z^{\prime}A\) and \(Z^{\prime}W\) must be active, but \(Z^{\prime}B\) not. Since \(Z^{\prime}W\) is active but the crease at \(\pi\) through \(W\) is not, the crease at \(-\pi/3\) must be, while \(WB\) and \(WW^{\prime}\) (and thus \(W^{\prime}X\)) are not. Since \(Z^{\prime}B\) and \(WB\) are not active, \(BX^{\prime}\) and \(BX\) must not be, either. Since four of the creases at \(Y\) are active, crease \(YX^{\prime}\) must not be; thus \(X^{\prime}A\) must be active, and \(X^{\prime}X\) is not. Thus none of the creases at \(X\) can be active. This is exactly the problematic configuration discussed in Lemma 15, which cannot flat-fold. Thus \(YY^{\prime}\) is not active. Then neither are \(Y^{\prime}Z\) or \(ZZ^{\prime}\). Since \(ZZ^{\prime}\) is not active, \(Z^{\prime}B\) must be active and \(Z^{\prime}A\) and \(Z^{\prime}W\) not active. Since \(Z^{\prime}B\) is active, \(BX^{\prime}\) must be active and \(AX^{\prime}\) must not be. Thus both \(YX^{\prime}\) and \(X^{\prime}X\) are active. Since the crease at \(0\) through \(X\) is not active, the crease at \(-\pi/3\) must be, and the bottom output is TRUE, as desired. Write \(x<y\) to mean that layer \(y\) is above layer \(x\). **Lemma 14**.: _The only flat-foldings of Figure 11 in which region \(e\) is upward-facing have layer orderings (from top to bottom) \(c>d>a>f>e>b\) and \(a>f>c>d>e>b\)._ Proof.: Since the crease pattern is symmetric with respect to a vertical reflection, we can assume without loss of generality that \(c>a\). Now, since \(e\) is upward-facing, so is \(c\) and \(a\), and thus the MV assignment in Figure 11 implies that any flat-folding of this vertex must have \(c>d>e\), \(a>f>e\), and \(c,a>b\); since we assumed \(c>a\), this implies that \(c>a>b\). In order to avoid self-intersections, we note that after folding all of the orange creases will be lined up, and the yellow creases will be lined up. To figure out possible orderings, we build up the layer orderings in stages. Figure 11. Hexagonal folder The first stage is that \(c>d>e\) coming from the mountain-valley pattern. To build the second stage, consider the possibilities for how \(a>f>e\) can interleave with this ordering. 
Suborderings of the form \(a>d>f>e\) and \(c>f>d>e\) are not allowed, by the taco-taco property (see Section 2.1) around the yellow and orange edges, respectively. Thus \(f\) cannot go between \(c\) and \(d\); since \(a>f\), we must therefore have \(c>f\), and thus the only possible ordering is \(c>d>a>f>e\). For the third stage, consider where \(b\) can be inserted in this ordering. Since \(a>b\) the only possibilities are \[c>d>a>b>f>e\qquad c>d>a>f>b>e\qquad c>d>a>f>e>b.\] The first of these is forbidden by the taco-taco property around the yellow edges; the second is forbidden by the taco-taco property around the orange edges. Thus the only possibility is the last ordering (which is possible by first folding the top half behind the bottom half, then folding the right half down, and then the left). **Lemma 15**.: _The problematic configuration (Figure 12) will not fold flat._ Proof.: This proof will refer to the annotated version, Figure 13. The colored regions in the diagram are upward-facing; the others are downward-facing. The graph on the nodes shows ordering relations enforced on the regions in the graph. (The green and blue lines show a part of the s-net (defined in Section 2.1) consisting of the parts of the s-net which overlap region \(c\) after folding; a flat-folding must give a well-defined ordering on these regions.) The directions in the graph points from a layer that must be lower to one that must be higher. A cycle in the graph thus exhibits a contradiction. The black edges in the graph are drawn from local mountain-valley conditions: an upward-facing layer must be above a neighboring layer if they differ by a mountain fold, and below if they differ by a valley fold. The blue edge in the graph follows from Lemma 14 applied to regions \(a\),\(b\),\(c\),\(d\),\(e\),\(f\). Note that there is a path from \(c\) to \(a\); thus if we wish to avoid cycles, the layer ordering must have \(c<a\) Thus by Lemma 14 the ordering on these layers must be \(a>f>c>d>e>b\), giving the red arrow. Analyzing the black graph with this data, we see that we must have \[g<b<e<d<c<h<i<j<k<f<a.\] However, at the top edge of \(i\) there is a taco-tortilla condition which is violated. Layer \(i\) is between layers \(a\) and \(b\), but does not contain a crease line at the blue line. The blue line is lined up in the folding map with the crease between \(a\) and \(b\), giving the contradiction. Thus the crease pattern cannot flat-fold. Figure 12. Problematic Intersector configuration Figure 13. Problematic Intersector configuration with annotations ### Twists A _twist fold_ is a crease pattern where \(n\) pairs of parallel mountain and valley creases (pleats) meet at an \(n\)-gon such that folding the pleats flat results in the \(n\)-gon rotating when folded flat. In addition, standard twist folds require the pleats to fold in the same rotational direction, either all clockwise (with the pleats folding mountain-then-valley in the clockwise direction) or all counterclockwise (valley-then-mountain in the clockwise direction). Twist folds are ubiquitous in origami tessellations; see, e.g., [10]. Triangle and hexagon twists were pioneered by Fujimoto in the 1970s [11]. Such twists with optional valley creases so as to allow the triangles/hexagons to rotate in either the clockwise or counterclockwise direction are shown in Figures 14 and 15. We will use these triangle and hexagon twists as gadgets to propagate wire signals in various directions, while also negating them. 
Their forced rotational nature proves the following Proposition; they are simple enough to analyze that we omit the details of the proofs. **Proposition 16**.: _The pure hexagonal and triangle twists (Figures 14 and 15) must flat-fold in rotationally-symmetric ways. In particular, given any designated wire in the top half of the gadget as the input, both twists negate and duplicate the input value in the other, output wires._ ### The eater Sometimes twist folds will produce extraneous wires, or _noise_ wires that are not needed for our construction of Rule 110. In order to eliminate these we have _eater gadgets_ that accept any combination of Boolean wire values. In particular, if it can be arranged that all of the noise wires in adjacent cells either cancel one another out (by colliding in an eater) or match up, then the cells can be tessellated. A modified triangle twist, shown in Figure 16 does the job nicely. **Proposition 17**.: _The values of the three wires entering the eater (Figure 16) are independent. In other words, the eater will flat-fold regardless of the values of the three wires._ Proof.: Since the eater crease pattern is rotationally-symmetric as well as reflectively-symmetric about each mountain input axis, all one needs to do to show that the eater will fold flat for any set of inputs is to check when the inputs are all TRUE or have two TRUE and one FALSE inputs. When all are TRUE the eater turns into a triangle twist and thus can fold flat. The TRUE, TRUE, FALSE case requires using the short optional mountain crease (and its collinear optional valley) that is between the adjacent optional valleys between one of the TRUE and the FALSE inputs. It can be readily checked that this, too, is flat-foldable. ## 4. Folding Rule 110 ### The Main Theorem Figure 17 shows the schematic of a crease pattern that logically simulates Rule 110. This crease pattern is overlaid on the triangle lattice for clarity; the underlying triangle lattice is not part of the crease pattern. The vertical wires labeled A, B, and C along the top of the Figure are the inputs and the center-most vertical wire at the bottom, labeled OUT, is the output. The wires and gadgets drawn Figure 14. Hexagonal twist Figure 15. Triangle twist in color in the Figure control the logical workings of this crease pattern. The wires and gadgets drawn in grey absorb and direct the noise wires generated by the crease pattern. Also note that the numerous gadgets detailed in the previous Section are simply labeled in Figure 17 by their names, like AND, OR, E (for the eater gadget), and so on rather than drawing all the individual creases. The pale yellow hexagons and triangles are hexagon and triangle twists, respectively. 
**Theorem 18**.: _If the crease pattern in Figure 17 is given mandatory creases for the input wires A, B, and C, then optional creases can be chosen from the rest to make the crease pattern fold flat, and the result will force truth value of the output wire to follow Rule 110 from the inputs A, B, and C._ Proof.: In the crease pattern, the splitter and crossover gadgets lead the three original inputs, or their negatins into a NOR and a pair of NAND clauses and then pass them into triangle twists to produce the following three signals (labeled in Figure 17): \[X=A\vee\neg B\qquad Y=\neg B\wedge C\qquad Z=B\wedge\neg C.\] The outputs of these are then led into a NAND and an OR clause to produce the two signals \[P=\operatorname{TRUE}\wedge X,Q=\neg(Y\lor Z).\] Finally, the values of \(P\) and \(Q\) are led to a NAND gadget, so that the final output signal is \[\neg(P\wedge Q).\] In other words, the output of this crease pattern is: \[\neg((A\vee\neg B)\wedge\neg((\neg B\wedge C)\vee(B\wedge\neg C))).\] This simplifies to \[(\neg A\wedge B)\vee((\neg B\wedge C)\vee(B\wedge\neg C)). \tag{4.1}\] This is exactly what Rule 110 performs. That is, the output is TRUE if \(B\neq C\) (which makes the second clause in (4.1) TRUE), and if \(B=C\) then this second clause will be FALSE and the output will be the value of \((\neg A\wedge B)\). That's Rule 110. We remark that all of the gadgets used in the construction of the flat-folding simulation of Rule 110 generate a unique MV assignment (and thus a unique set of output wire values) for a given set of input wire values. Thus the same is true for our Rule 110 crease pattern (Figure 17) _except_ for the places where two eater gadgets share a wire. Such a shared wire between eater gadgets could be either TRUE or FALSE and not affect the logical constraints of the rest of the crease pattern. Nonetheless, we have demonstrated that _some_ flat-folding of the crease pattern will exist to perform Rule 110 computations, which is all that is required to simulate Rule 110. Figure 16. Eater ### On finiteness, flat-foldable origami tessellations, and Turing machines The goal of this paper is to prove that flat-folding is Turing-complete, but what does it mean to make this statement? A Turing machine is a finitely-defined object (i.e. a machine with a finite number of states) working with infinite storage space (the tape) on which is recorded a finite starting input. In order to translate this into origami, we form a crease pattern made of of finite-state cells that are tessellated onto the infinite plane. Consider the following variation of the global flat-foldability problem: We are given an infinite tessellation of a finite straight-line planar graph \(G\) drawn on our paper \(P=\mathbb{R}^{2}\); the tessellation must have a finite fundamental region, which is the cell in question. All the edges of \(G\) are labeled as either mandatory mountains, mandatory valleys, optional mountains, or optional valleys. \(G\) is equipped with a subgraph \(G_{\mathrm{init}}\), which contains all of the mandatory creases and a subset of the original creases, and which also forms a tessellation (i.e. the chosen subgraph is the same in every cell), such that \(G_{\mathrm{init}}\) is the crease pattern \(X_{f}\) of some isometric folding map \(f:\mathbb{R}^{2}\to\mathbb{R}^{2}\) with a global layer ordering \(\lambda_{f}\) whose MV assignment corresponds to the mountain and valley labeling of \(G_{\mathrm{init}}\). 
This is the "ground state" setup of our flat-folding Turing machine: the blank tape. To specify a more general problem, we take a graph \(G^{\prime}\) which contains \(G\) and which differs from \(G\) by only (a) the addition of finitely many creases, or (b) the modification of an optional crease into a mandatory crease. The question becomes: is there a subset \(H\) of \(G^{\prime}\), which which contains all of the mandatory creases and a subset of the original creases, such that \(H\) is the crease pattern \(X_{f}\) of some isometric folding map \(f:\mathbb{R}^{2}\to\mathbb{R}^{2}\) with a global layer ordering \(\lambda_{f}\) whose MV assignment corresponds to the mountain and valley labeling of \(H\). The main result of this paper is that this problem is Turing-complete. Theorem 18 shows that Rule 110, a finite cellular automaton which is known to be Turing-complete [10], can be modeled using a tessellating crease pattern satisfying the above conditions. The above-mentioned crease pattern \(H\) is the output of the Turing machine from a given input. The connections between Turing machine computations, Rule 110, and in our flat-foldable crease pattern might seem mysterious to the reader, especially the requirement that such computations be performed with a finite amount of material. Thus, we provide a few details on this in the remainder of this Section. **Definition 19**.: Consider an elementary cellular automaton with input values encoded as a row of cells, colored black for \(1\) and white for \(0\). The computation of the automaton is recorded as a grid with this input row as the top, and each consecutive row determined by the computational rule of the automaton. This gives a coloring of the plane. A _spaceship_ is a self-replicating tile: a finite sequence of cells that, if repeated infinitely, will produce the original pattern back in a finite number of computations. Computation on Rule 110 is done by observing perturbations in a sequence of standard spaceships, which are themselves a tessellation \(14\) cells wide and \(7\) cells tall. (See [10] for more details.) Take a \(14\times 7\) repetition of the basic set of cells as the basis of the origami tessellation; since the spaceships tessellate infinitely, this will fold flat. To compute with Rule 110 a finite number of perturbations are made to the spaceship pattern; this is encoded in the crease pattern by setting the direction of input wires to be TRUE or FALSE by modifying the appropriate optional crease to be a mandatory crease, as desired, and altering the NAND gate above each input wire to be an eater (by adding a finite number of optional creases; the extra creases present in the NAND will not affect the flat-foldability of the gadget as they are all optional). This gives the input value to the computation below it. The part of the pattern below the given inputs will fold flat with finitely many modifications to the original spaceship pattern if and only if the original input reverts to the standard spaceships after finitely many steps. We conclude that since we need a finite amount of our paper to replicate a Rule 110 spaceship, and by Theorem 18, we have that our generalized flat-foldable origami problem is Turing-complete. ## 5. Conclusion We have shown that folding origami crease patterns with optional creases into a flat state can emulate the behavior of the one-dimensional cellular automaton Rule 110, and can therefore perform the tasks of a universal Turing machine. 
Actually folding a piece of paper to simulate, say, multiple rows of an instance of Rule 110 using the crease pattern presented here would be a gargantuan task, even for expert origami artists. Using these methods to perform the computations of a Turing machine using flat origami would be even more arduous, so this is by no means meant to be a practical way to perform computation via origami. By way of comparison, we note the existence of prior work from the physics and engineering community on using origami for actual computation, e.g. [1, 10, 11]. These studies build logic gates using _rigid origami_, where a stiff material is folded in a continuous motion so that the creases act like hinges and the regions of material between the creases remain planar, or rigid, during the folding process. Determining whether a crease pattern can be rigidly folded in this way has also been proven to be NP-hard [1]. While it is likely that rigid origami is also Turing complete as a computational device, to our knowledge no one has proven this. The crease patterns and gadgets in the present work are not rigidly foldable and therefore could not be used as-is in such a proof. Rather, computation performed by flat origami should be viewed discretely, where only the fully flat-folded state provides the desired computational information. The logic gadgets presented in this paper may be used to simulate other one-dimensional cellular automata. For example, a crease pattern to produce a Sierpinski triangle modulo 2 would be given by iteration of the cell in Figure 18; to make this tessellate it is necessary to reflect consecutive units in each row, and the cells shift half a cell-width in each row. The basic cell, as above, is in a dark green box. To see the Sierpinski effect, one could color the "true" side of the input/output wires blue and the "false" red. Figure 17: Origami crease pattern with optional creases that simulates a Rule 110 cell.
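For readers who would rather check the logic than fold it, the following short Python sketch (our addition, not part of the original construction) verifies that the simplified expression (4.1) of Theorem 18 agrees with the standard Rule 110 lookup table on all eight neighborhoods, under the assumption that \(A\), \(B\), \(C\) are read as the left, center, and right cells. It also prints a few rows of the XOR-of-neighbors automaton (Rule 90), which we take to be the rule behind the Sierpinski-triangle-modulo-2 remark above.

```python
# Sanity check (ours, not from the paper): expression (4.1) vs. the Rule 110
# lookup table, plus a tiny Rule 90 run for the Sierpinski-mod-2 remark.
from itertools import product

def rule110(a, b, c):
    # Standard Rule 110: read the neighborhood (a, b, c) as a 3-bit number and
    # take that bit of 110 = 0b01101110.
    return (110 >> (4 * a + 2 * b + c)) & 1

def expr_4_1(a, b, c):
    # (not A and B) or ((not B and C) or (B and not C)), reading A, B, C as the
    # left, center, and right cells (our assumption about the wiring).
    return int(((not a) and b) or (((not b) and c) or (b and (not c))))

assert all(rule110(a, b, c) == expr_4_1(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
print("Expression (4.1) agrees with Rule 110 on all 8 neighborhoods.")

# Rule 90 (XOR of the two neighbors): produces the Sierpinski triangle mod 2.
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join("#" if cell else "." for cell in row))
    row = [row[i - 1] ^ row[(i + 1) % len(row)] for i in range(len(row))]
```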
Flat origami refers to folding a flat, zero-curvature piece of paper. Mathematically speaking, a flat origami consists of a continuous, piecewise isometric map $f:P\subseteq\mathbb{R}^2\to\mathbb{R}^2$ together with a layer ordering $\lambda_f:P\times P\to \{-1,1\}$, which records whether, in the folded state, each point of $P$ lies above or below the other points. The crease lines made by a flat origami (that is, the lines along which the map $f$ fails to be differentiable) are called its crease pattern. Flat origami maps and layer orderings possess a surprisingly complex structure. For example, determining whether a straight-line planar graph drawn on $P$ is the crease pattern of some flat origami is an NP-complete problem, and
2309.00091
On the local aspect of valleytronics
Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal materials. Traditionally, based on studies of quantum states in homogeneous bulks, it is widely believed that only materials with broken structural inversion symmetry can exhibit nonvanishing valley magnetic moments. Such constraint excludes from relevant applications those with inversion symmetry, as specifically exemplified by gapless monolayer graphene despite its technological advantage in routine growth and production. This work revisits valley-derived magnetic moments in a broad context covering inhomogeneous structures as well. It generalizes the notion of valley magnetic moment for a state from an integrated total quantity to the local field called "local valley magnetic moment" with space-varying distribution. In suitable inversion-symmetric structures with inhomogeneity, e.g., zigzag nanoribbons of gapless monolayer graphene, it is shown that the local moment of a state can be nonvanishing with sizable magnitude, while the corresponding total moment is subject to the broken symmetry constraint. Moreover, it is demonstrated that such local moment can interact with space-dependent electric and magnetic fields manifesting pronounced field effects and making possible a local valley control with external fields. Overall, a path to "local valleytronics" is illustrated which exploits local valley magnetic moments for device applications, relaxes the broken symmetry constraint on materials, and expands flexibility in the implementation of valleytronics.
Zheng-Han Huang, Feng-Wu Chen, Yu-Shu G. Wu
2023-08-31T19:17:49
http://arxiv.org/abs/2309.00091v1
# On the local aspect of valleytronics ###### Abstract Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal materials. Traditionally, based on studies of quantum states in homogeneous bulks, it is widely believed that only materials with broken structural inversion symmetry can exhibit nonvanishing valley magnetic moments. Such constraint excludes from relevant applications those with inversion symmetry, as specifically exemplified by gapless monolayer graphene despite its technological advantage in routine growth and production. This work revisits valley-derived magnetic moments in a broad context covering inhomogeneous structures as well. It generalizes the notion of valley magnetic moment for a state from an integrated total quantity to the local field called 'local valley magnetic moment' with space-varying distribution. In suitable inversion-symmetric structures with inhomogeneity, e.g., zigzag nanoribbons of gapless monolayer graphene, it is shown that the local moment of a state can be nonvanishing with sizable magnitude, while the corresponding total moment is subject to the broken symmetry constraint. Moreover, it is demonstrated that such local moment can interact with space-dependent electric and magnetic fields manifesting pronounced field effects and making possible a local valley control with external fields. Overall, a path to 'local valleytronics' is illustrated which exploits local valley magnetic moments for device applications, relaxes the broken symmetry constraint on materials, and expands flexibility in the implementation of valleytronics. ## I Introduction After pioneering studies on the quantum Hall effect in graphene layers [1, 2, 3], atomically thin 2D hexagonal crystals with broken inversion symmetry, e.g. gapped graphene [4, 5, 6, 7] and monolayer transition metal dichalcogenides [8] have been recognized [9, 10] to form a crucial class of topological materials with significant impacts, due to the presence of two degenerate and inequivalent band structure valleys generally designated by K and K', respectively. The valley degree of freedom has important technological implications for binary information processing and, as such, has inspired the emergence of valleytronics [11]. In addition, extensive research efforts have led to the exciting discovery of a diverse range of novel valley phenomena including valley magnetic moments [9, 10] and those connected with the moments [12, 13, 14], such as robust valley topological currents [15, 16, 17, 18, 19, 20, 21], valley-polarized interface states [15], valley-orbit [22, 23] and valley Zeeman interactions [9, 23], with the findings having also motivated important device proposals for valleytronic applications such as valley filters/valves [11, 18, 24, 25], qubits [23, 26, 27, 28, 29], and FETs [30]. Traditionally, studies of valley magnetic moments have been performed from a homogeneous perspective, with important deductions specifically drawn from investigating moments of homogeneous bulk states as topological quantities [9, 10]. Such a perspective has long guided the field with important influence. For instance, studies have skipped any potential nontrivial spatial dependence in the valley magnetic moment and have been focused primarily on its integrated total value. 
Constraints such as breaking of the structural inversion symmetry [9] have been established as rules for nonvanishing total moments and widely applied to the selection of materials in experiments and applications, with gapped AB stacked bilayer graphene [12, 13, 14] and monolayer transition metal dichalcogenides [31, 32, 33] being well-known options for experiments. Moreover, when external field-valley magnetic moment interactions are explored, primarily those between homogeneous fields and total moments have been investigated. Within the above perspective, a restricted description of the spatial dependence can in principle be provided, though. In the limit of weak, slowly varying structural inhomogeneity, for example, such description would consist of a suitable partition of the space and application to each region the deduction drawn from the homogeneous case. However, rigorously speaking, the quasihomogeneous treatment of spatial dependence may overall under-describe the spectrum of valley physics, specifically that in the limit of strong inhomogeneity. From the scientific standpoint, the under-description may have overlooked interesting hidden aspects beyond the homogeneous perspective which are worthy of exploration. From the application standpoint, the under-description may raise the issue of validity concerning taking broken inversion symmetry as a universal material constraint, and clarifying such issue is critical as it impacts material options and opportunities for applications with valley magnetic moments. Inspired by both foregoing prospects, this work revisits valley-derived magnetic moments across the spectrum of inhomogeneity covering both weak and strong limits. It generalizes the notion of valley magnetic moment for a quantum state from an integrated total quantity to a local field with space-varying distribution called 'local valley magnetic moment' in the work. In suitable inversion-symmetric structures, e.g., zigzag nanoribbons of gapless graphene, where abrupt boundaries induce strong inhomogeneity, it is shown that even though the total moment of a state vanishes due to inversion symmetry, the state can nevertheless exhibit a sizable, nonvanishing local moment distribution. Moreover, it is demonstrated that such local moment can interact with space-dependent electric or magnetic fields, manifesting pronounced field effects and making possible a local valley control with external fields. Altogether, a path to 'local valleytronics' is opened up with advantages including expanded material options, among which an important one is gapless monolayer graphene. In particular, in view of available routine production with exfoliation or state-of-the-art 2D crystal growth [34, 35, 36, 37, 38, 39] for such graphene, the path considerably relaxes valley-derived magnetic moment based experiments and applications. The presentation is organized as follows. **Sec. II** discusses the notion of local valley magnetic moments in an analytical way. Specifically, it develops a current density formulation for both notional and quantitative discussions of local valley magnetic moments. In addition, it provides a compatibility discussion from the symmetry stand point, for the existence of nonvanishing local valley magnetic moments in inversion-symmetric structures. Last, interactions between local moments and magnetic and electric fields - local valley-orbit interactions, respectively, are presented near the end of **Sec. II**. **Sec. 
III** performs numerical studies and presents results in connection with and validating analytical discussions in **Sec. II**. **Sec. IV** gives conclusion and outlook. **Appendix A** provides a derivation of the current density used in **Sec. II**. **Appendix B** applies the formulation developed in **Sec. II** to the calculation of local valley magnetic moments in the homogeneous bulk case. **Appendix C** provides a supplement to the compatibility discussion in **Sec. II**. **II. LOCAL VALLEY MAGNETIC Moments** For clarity, we start the discussion with graphene serving as an example, describe the notion of local valley magnetic moments in terms of an intuitive picture, and then support the picture by deriving an analytical expression of local moments in the Dirac model of graphene with inhomogeneity, which also provides in the weak inhomogeneous limit a connection with the current theoretical understanding of valley magnetic moments, as well as goes beyond the limit with an important clue given for the likely existence of nonvanishing local moments in the case of an inversion-symmetric structure. Following it, an exact, symmetry-based argument is presented to explicitly support the compatibility between foregoing existence likelihood and inversion symmetry. Last, built on these foregoing discussions, a generic, operationally defined expression of local valley magnetic moments is developed independent of materials and structures for numerical calculations. **Figure** 1 shows a monolayer graphene crystal structure, where each unit cell consists of two carbon sites denoted by A (red) and B (blue) throughout the work. It also depicts a representative graphene electron state, in the tight-binding model including only nearest neighbor hopping and carbon atomic 2p\({}_{x}\) orbitals [40] with on-site energy \(\varepsilon_{{}_{A}}=\Delta\) for the orbital on A and \(\mathcal{E}_{B}=-\Delta\) for that on B. \(\Delta\) is also the gap parameter characterizing the corresponding graphene band structure, with 2 \(\Delta\) being the gap between conduction and valence bands. In gapless graphene, \(\Delta=0\) and \(\varepsilon_{{}_{A}}=\varepsilon_{{}_{B}}\) giving inversion symmetry between A and B sites and, thus, to the structure, too. As illustrated in **Figure 1**, the local valley magnetic moment of an electron arises out of a spin-like, local electron orbital rotation, as explained in the following. Take a near-K electron state \(\phi_{K}\) as an example. Write the state as \(\phi_{K,A}+\phi_{K,B}\) with \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) B site orbitals. For a conduction (valence) band electron, the component \(\phi_{K,A}\) ( \(\phi_{K,B}\) ) dominates over the other. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry, respectively, in the group-theoretical representation language [41], which means they carry opposite phase increments \(\pm\)2/3\(\pi\), respectively, when moving among sites. Such phase variations lead to corresponding loop currents of opposite senses (orange and blue circles, respectively) that compete with each other. Since each current is weighted by electron probability on the corresponding site, i.e., \(\rho_{A}\) or \(\rho_{B}\) ( \(\rho_{A(B)}=\) local probability on site A (B)), the competition yields a net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and gives a 'local pseudospin' with a local magnetic moment \(\propto\rho_{A}-\rho_{B}\). 
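To make the competition between the A- and B-sublattice weights concrete, the following symbolic sketch (our illustration, not part of the paper) takes the homogeneous-bulk spinor written later in Appendix B, Eqn. (14), feeds it into the local-moment expression \(m=-\tau\rho_{\mathit{diff}}/(2E)\) of Eqn. (3) below, and confirms that it reduces to the bulk result \(\mu_{\mathit{bulk}}=-\tau\Delta_{0}/(2E^{2})\) of Eqn. (4).

```python
# Symbolic cross-check (ours, not from the paper): substituting the bulk spinor
# of Appendix B, Eqn. (14), into m = -tau*(rho_A - rho_B)/(2E) from Eqn. (3)
# should reproduce mu_bulk = -tau*Delta0/(2E^2) of Eqn. (4).
import sympy as sp

E, Delta0, rho = sp.symbols('E Delta_0 rho', positive=True)
tau = sp.symbols('tau', real=True)          # valley index, +1 or -1

k2 = E**2 - Delta0**2                       # from the dispersion E = sqrt(k^2 + Delta0^2)
norm2 = k2 + (E - Delta0)**2                # spinor normalization in Eqn. (14)

rho_A = rho * k2 / norm2                    # |F_A|^2
rho_B = rho * (Delta0 - E)**2 / norm2       # |F_B|^2
m = -tau * (rho_A - rho_B) / (2 * E)        # local moment, Eqn. (3)

mu_bulk = -tau * Delta0 / (2 * E**2)        # bulk moment, Eqn. (4)
print(sp.simplify(m - rho * mu_bulk))       # prints 0
```

The cancellation relies only on the dispersion \(E=(k^{2}+\Delta_{0}^{2})^{1/2}\), which is why the check is phrased directly in terms of \(E\) and \(\Delta_{0}\) rather than \(k\).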
To facilitate the discussion, we further introduce the term 'probability based inversion symmetry breaking' for an electron state. Irrespective of the actual situation in structural inversion symmetry, when \(\rho_{A}=\rho_{B}\) ( \(\rho_{A}\neq\rho_{B}\) ), the probability based inversion symmetry is said to exist (be broken) locally in the state. Then, the local magnetic moment actually correlates with the local degree of probability based inversion symmetry breaking. For example, when \(\rho_{A}=\rho_{B}\)( \(\rho_{A}\neq\rho_{B}\) ) the moment is zero (nonvanishing) reflecting the existence (breaking) of probability based inversion symmetry. \(\varepsilon_{{}_{B}}=-\Delta\) for the orbital on B site (blue atom), with \(\Delta=0\) in gapless graphene. Overall, the electron performs a spin-like, local orbital rotation (light green circle) while executing a global translation (grey dashed line). Consider a near-K state, for example. Generally, it consists of the two components - \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) composed of B site orbitals. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry in the group-theoretical representation language, with phase increments \(\pm\)2/3\(\pi\), respectively, as well as corresponding loop currents of opposite senses (orange and blue circles, respectively), resulting in the net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and corresponding local valley magnetic moment (green, out-of-plane arrow). Last, we make a note about how probability based inversion symmetry breaking may arise in the presence of structural inversion symmetry. Take a zigzag nanoribbon of gapless graphene for example. While the structure is invariant under structural inversion, it is well known that the bounded structure terminates on A sites on one edge and B sites on the other. Therefore, through boundary conditions on the electron state, an edge-induced AB asymmetry enters the corresponding electron probability distribution, giving distinct \(\rho_{A}\) and \(\rho_{B}\) and resulting in probability based inversion symmetry breaking. Further discussions about zigzag nanoribbons will be given below in this section as well as in **Sec. III**. ## 1 The Dirac model A discussion in the Dirac model of graphene with inhomogeneity is now performed to illustrate the foregoing picture. Consider a simple Q1D inhomogeneous structure in the absence of external fields, with the inhomogeneity derived from a spatial variation in the gap parameter of the model. For simplicity, the varying gap parameter is taken to be \(\Delta(y)\) which preserves translational symmetry in the \(x\) direction. Moreover, we take \(\Delta(y)\) to be a regular function free of singularities and, thus, avoid complications such as those due to abrupt boundaries. Let \(F\) be the Dirac two-component wave amplitude on carbon A and B sites ( \(F=\left(F_{A},\ \ \ F_{B}\right)^{\dagger}\), '\(t^{\prime}=\) transpose), valley index \(\tau=1\) (-1) for valley K (K'), \(E=\) electron energy relative to the mid-gap point, \(k_{x}\)\(=\) wave vector relative to the Dirac point, and (\(x\),\(y\)) \(=\) cell position. \(F\) satisfies the following Dirac equation ( \(h=1\), \(e=\) -1 (electron charge), Figure 1: **Cell-orbital magnetic moment** Monolayer graphene is used for illustration. Each unit cell consists of two carbon sites (A and B). 
In the tight-binding model used here, on-site energy \(\varepsilon_{{}_{A}}=\Delta\) for the 2p\({}_{x}\) orbital on A site (red atom) and \(\Delta=0\) in gapless graphene. Overall, the electron performs a spin-like, local orbital rotation (light green circle) while executing a global translation (grey dashed line). Consider a near-K state, for example. Generally, it consists of the two components - \(\phi_{K,A}\) composed of A site orbitals and \(\phi_{K,B}\) composed of B site orbitals. The two components carry E\({}^{\prime\prime}\) and A\({}^{\prime\prime}\) symmetry in the group-theoretical representation language, with phase increments \(\pm\)2/3\(\pi\), respectively, as well as corresponding loop currents of opposite senses (orange and blue circles, respectively), resulting in the net loop current \(\propto\rho_{A}-\rho_{B}\) (light green circle) and corresponding local valley magnetic moment (green, out-of-plane arrow). and \(v_{F}=1\) ( Fermi velocity) throughout the work) [3, 23, 42]: \[H_{{}_{\mathit{hom}}}F=EF,\] \[H_{{}_{\mathit{hom}}}=\begin{pmatrix}\Delta(y)&k_{x}-\vec{w}_{y} \\ k_{x}+\vec{w}_{y}&-\Delta(y)\end{pmatrix}. \tag{1}\] Note that the usual \(k_{x}\pm ik_{y}\) in the off-diagonal matrix elements of Dirac Hamiltonian for bulk graphene is now replaced by \(k_{x}\pm\vec{w}_{y}\), with the substitution \(k_{y}\rightarrow-i\partial_{y}\) for the structure considered here following the standard effective mass theory [43]. Eqn. (1) is a generalization of those given in References [23] and [42] for graphene ribbons to the case where \(\Delta(y)\) is space varying. From Eqn. (1), the current density operator '\(j_{x}\)' in the \(x\) direction is easily constructed, giving \[j_{x}=-\frac{k_{x}\rho}{E}-\tau\frac{\partial_{y}\rho_{{}_{\mathit{diff}}}}{2E}\,, \tag{2}\] where \(\rho(x,y)=\rho_{A}(x,y)+\rho_{B}(x,y)\),, \(\rho_{{}_{\mathit{diff}}}(x,y)=\rho_{A}(x,y)-\rho_{B}(x,y)\), and \(\rho_{A,(B)}(x,y)\equiv\mid F_{{}_{A(B)}}(x,y)\mid^{2}\). Details of the derivation of Eqn. (2) are given in **Appendix A**. Note that both \(\rho_{A}\) and \(\rho_{B}\) are actually independent of \(x\) due to the translational symmetry in the \(x\) direction. So is \(j_{x}\). Following the standard theory of magnetostatics [44], where the current density \(\stackrel{{\rightarrow}}{{j}}\) in the presence of a magnetization distribution \(\stackrel{{\rightarrow}}{{m}}\) is written as \(\stackrel{{\rightarrow}}{{j}}=\stackrel{{ \rightarrow}}{{j}}_{{}_{\mathit{pw}}}+\nabla\times\stackrel{{ \rightarrow}}{{m}}\), we identify the first term in Eqn. (2) with \((\stackrel{{\rightarrow}}{{j}}_{{}_{\mathit{pw}}})_{x}\) - a free charge- composed translational current and the second term \((\nabla\times\stackrel{{\rightarrow}}{{m}})_{x}\) - a magnetization current, with the corresponding magnetization distribution \(\stackrel{{\rightarrow}}{{m}}\) given by \[\stackrel{{\rightarrow}}{{m}}=-\frac{\tau\rho_{{}_{\mathit{ diff}}}}{2E}\stackrel{{\rightarrow}}{{z}}. \tag{3}\] Important implications follow Eqn. (3) as given below. 1. As \(\stackrel{{\rightarrow}}{{m}}\propto\rho_{{}_{\mathit{diff}}}\), it confirms the picture of local valley magnetic moments depicted in '\(m\)', with \(m=\stackrel{{\rightarrow}}{{m}}\cdot\stackrel{{ \rightarrow}}{{z}}=-\frac{\tau\rho_{{}_{\mathit{diff}}}}{2E}\). 2. For a homogeneous bulk, where \(\Delta(y)=\Delta_{a}\), Eqn. 
(3) gives \[m=\rho\ \mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau),\] \[\mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau)\equiv-\frac{\tau \Delta_{0}}{2E^{2}}\,,\] (4) where \(\mu_{{}_{\mathit{bulk}}}(E,\Delta_{0};\tau)=\) total valley magnetic moment of the bulk state. Derivation of Eqn. (4) is given in **Appendix B**. Eqn. (4) agrees exactly with that obtained in the traditional, homogeneous perspective with a topological, valley Berry curvature-based approach [9]. Importantly, it shows the two notable features of \(\mu_{{}_{\mathit{bulk}}}\) traditionally established within the perspective, namely, the one-to-one correspondence between \(\tau\) and \(\mathrm{sgn}(\mu_{{}_{\mathit{bulk}}})\), and vanishing \(\mu_{{}_{\mathit{bulk}}}\) in the presence of structural inversion symmetry (\(\Delta_{0}=0\)) [9]. Such features constitute what we call 'homogeneous perspective-based expectations or constraints. 3. The expression of \(m\) given in Eqn. (4) takes the form of a \(\rho\) -weighted distribution of \(\mu_{{}_{\mathit{bulk}}}\) in the (\(x\),\(y\)) space, which suggests a simple extension to the weak, slowly varying inhomogeneous case, that is, \(m(x,y)=\rho(x,y)\)\(\mu_{{}_{\mathit{bulk}}}(E,\Delta(x,y);\tau)\). Such a quasi-homogeneous extension would, however, subject the local moment to the rule of broken structural inversion symmetry, that is, when \(\Delta(x,y)=0\), \(\mu_{{}_{\mathit{bulk}}}(E,\Delta(x,y);\tau)=0\) and so \(m(x,y)=0\). In contrast, the expression of \(m\) given in Eqn. (3) which is suited to general inhomogeneity is less restricted. It predicts, on the contrary, the likely existence of nonvanishing \(m\) when \(\rho_{{}_{\mathit{diff}}}(x,y)\) is finite, irrespective of the actual situation in structural inversion symmetry. 4. **Local valley magnetic moments and inversion symmetry** The likely existence of nonvanishing local Figure 1: Moreover, a quantitative expression of local valley magnetic moment is provided by the corresponding projection ‘\(m\)’, with \(m=\stackrel{{\rightarrow}}{{m}}\cdot\stackrel{{ \rightarrow}}{{z}}=-\frac{\tau\rho_{{}_{\mathit{diff}}}}{2E}\). moments, even in the presence of structural inversion symmetry, marks an important deviation of the present local valleytronics from the traditional, homogeneous perspective based valleytronics. The likelihood can generally be argued from the standpoint of symmetry, regardless of singularities such as abrupt boundaries in structures, as follows. Inversion-symmetric structures with translational symmetry in the \(x\) direction are considered. Let \(m_{t}(y)\) be the local moment distribution in the transverse (\(y\)) dimension, for a quantum state near one of the two Dirac points with valley index \(\tau\). Note that \(m_{\tau}\) is uniform in the \(x\) direction, given the translational symmetry in the direction. Firstly, we briefly apply the traditional symmetry argument [9] to the total valley magnetic moment \(\int m_{\tau}\left(y\right)dy\) and show that it vanishes in the structure considered here. Denote \(\int m_{\tau}\left(y\right)dy\) by M. Then an apparent conflict would come up when applying the inversion operation (Inv) as follows. 1) With the inversion being a symmetry of the structure, it follows that M remains invariant under Inv. 2) On the other hand, Inv flips the wave vector of the state and, hence, valley index, too, giving \(\mathrm{Inv}(\tau)=\) -\(\tau\). 
Since valleys \(\tau\) and -\(\tau\) are also time reversal (TR) transforms of each other, i.e., \(\mathrm{TR}(\tau)=\) -\(\tau\), it follows that the corresponding current loop of M reverses sense when going from \(\tau\) to -\(\tau\) thus leading to a sign flip in M, in conflict with the earlier conclusion of M being invariant. The conflict can only be resolved by putting \(\mathrm{M}=0\). However, the above symmetry argument does not forbid the existence of a nonvanishing \(m_{t}(y)\). For example, an oscillating, antisymmetric \(m_{t}(y)\), i.e., \(m_{t}(\)-\(y)\) -\(m_{t}(y)\) with nonvanishing amplitude would not violate the conclusion of vanishing M. Below, we show the compatibility between an antisymmetric \(m_{t}(y)\) and structural inversion symmetry. In **Figure 2**, such \(m_{t}(y)\) is depicted in the middle graph. Applying Inv changes \(m_{t}(y)\) to '\(m_{t}(\)-\(y)\)', with the transformed distribution shown in the left graph. On the other hand, as Inv flips the valley index as TR, it therefore changes \(m_{t}(y)\) to '\(m_{t}(y)\)' or '\(-m_{t}(y)\)', with the transformed distribution shown in the right graph. The agreement between the transformed \(\mathrm{Inv}(m_{t}(y))\) and \(\mathrm{TR}(m_{t}(y))\) demonstrates a consistency in the case of an antisymmetric \(m_{t}(y)\) and, hence, concludes compatibility between such \(m_{t}(y)\) and inversion symmetry. A more detailed argument is presented in **Appendix C**, in the case of zigzag nanoribbons in gapless graphene, where it provides a brief overview of state transformation under Inv and TR, and applies it to \(m_{t}(y)\), as a supplement to the above discussion. It would be worthwhile to note about the role of translational symmetry in the two examples, namely, homogeneous bulks and zigzag nanoribbons, both in gapless graphene. As already concluded, \(\int m_{\tau}\left(y\right)dy=0\) in both cases. But, concerning \(m_{t}(y)\), a distinction resulting from the symmetry may exist between the cases, as follows. In the homogeneous bulk case, with \(m_{t}\) a uniform distribution due to the translational symmetry, it is obvious that only a trivial antisymmetric distribution, i.e., \(m_{t}(y)=0\) everywhere can concur with the vanishing \(\int m_{\tau}\left(y\right)dy\). In contrast, for inhomogeneous structures such as zigzag nanoribbons, \(m_{t}(y)\) is likely space varying due to the lack of translational symmetry in the \(y\) direction. Therefore, even though \(\int m_{\tau}\left(y\right)dy=0\), it leaves plenty of room for \(m_{t}(y)\) to dodge the trivial destiny if it is antisymmetric. As will be illustrated explicitly with numerical results in **Sec. III**, \(m_{t}(y)\) in zigzag nanoribbons indeed oscillates with a nonvanishing amplitude as graphed in **Figure 2**. ## 3 Generic definition A both model- and material- independent, functional derivative expression is given below to define the local valley magnetic moment in terms of the local Zeeman response to a weak probing magnetic field, as follows: \[\begin{split}& m(\vec{r})=-\frac{\delta E_{Z_{zeman\_unit}}[B_{z}^{ (probe)}(\vec{r})]}{\delta B_{z}^{(probe)}(\vec{r})}\Bigg{|}_{B_{z}^{(probe)}( \vec{r})=0},\\ & E_{Z_{zeman\_unit}}[B_{z}^{(probe)}(\vec{r})]=-\int m(\vec{r})B_ {z}^{(probe)}(\vec{r})d^{2}r\end{split} \tag{5}\] Figure 2: **Local valley magnetic moment** in a zigzag graphene nanoribbon of gapless graphene. Middle – antisymmetric distribution \(m_{t}(y)\); left – transformed \(\mathrm{Inv}(m_{t}(y))\); right – transformed \(\mathrm{TR}(m_{t}(y))\)). 
Thin, short horizontal arrows indicate corresponding wave vectors (\(k\)) of quantum states. ( \(E_{Zeeman\_valley}=\) valley Zeeman energy, \(B_{z}^{(probe)}=\) probing magnetic field). Eqn. (5) exploits the physics of local Zeeman interaction \({}^{*}-m(\vec{r})B_{z}^{(probe)}\)\(\cdot\)\(\cdot\)\(\cdot\) to operationally define \(m(\vec{r})\). Without going into details, we state that it can be shown that such definition when applied to the Q1D inhomogeneous structure earlier considered in the Dirac model reproduces the same expression of \(m\) derived there. Eqn. (5) can be applied to numerical studies, including those with abrupt boundaries. In the graphene case, we perform such studies with the same tight-binding model used in **Figure 1**, with the magnetic field included in the model through the Peierls substitution method [45]. In the case of a Q1D structure, \(B_{z}^{(probe)}\) is taken to be a strip of flux as shown in **Figure 3**. Usage of the strip flux results in \(m(y)\) independent of \(x\), consistent with translational symmetry in the \(x\)-direction in the structure. **Figure 3. B.(probe) A strip of local, vertical magnetic field is used in the case of a Q1D structure.** ## 4 Effects of external fields Interactions between local valley magnetic moments and space-dependent electric and magnetic fields are discussed below. Because derivations of the interactions are somewhat involved, the presentation below takes the following strategy. Previous results in the homogeneous bulk case are briefly mentioned, followed by conjectures for extensions to the inhomogeneous case based on the results. Rigorous results are stated at the end, with derivations given and accessible elsewhere [46]. In the homogeneous bulk case, it is known that for a bulk state the corresponding valley magnetic moment \(\mu_{bulk}\) can interact with a uniform, out-of-plane magnetic field, e.g., \(B_{z}\), shifting the state energy by the valley Zeeman term \(\cdot\)\(-\mu_{bulk}\,B_{z}\), [9, 23]. In the inhomogeneous case, the foregoing result is replaced by the local expression \({}^{*}-m(\vec{r})B_{z}(\vec{r})\), following the earlier discussion in **Sec. II** that defines the local valley magnetic moment. Similarly, it is known that \(\mu_{bulk}\) can also couple with an electric field giving rise to the valley-orbit interaction. For a bulk graphene state with wave vector \(k_{x}\), the corresponding interaction energy is given by \(\cdot\)\(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}\mu_{bulk}\), [23], in the case where \(\varepsilon_{y}\) is a uniform, in-plane electric field in the y direction. This result leads, in the Q1D case, to the conjecture of a corresponding local expression given by \(\cdot\)\(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}(y)m(y)\), for the interaction between the local moment \(m(\mathrm{y})\) and a space-dependent electric field \(\varepsilon_{y}(y)\). In the following, we restrict the attention to Q1D structures and present a rigorous statement in the linear response regime for local valley \(-\) external field interactions. Consider a quantum state with wave vector \(k_{x}\). Let \(m^{(0)}(y)\) and \(E^{(0)}\) be the corresponding field-free local valley magnetic moment and electron state energy, respectively. 
In the linear response regime, the local valley \(-\) external field interaction energy is given by \[\begin{split} E_{valley-fold}&=\int\limits_{-\infty}^ {\infty}\frac{k_{x}}{E^{(0)}}\varepsilon_{y}(y)m^{(0)}(y)dy\\ &-\int\limits_{-\infty}^{\infty}B_{z}(y)m^{(0)}(y)dy\end{split} \tag{6}\] with \[\frac{k_{x}}{E^{(0)}}\varepsilon_{y}(y)m^{(0)}(y) \tag{7}\] being the _local valley-orbit interaction_ due to the electric field \(\varepsilon_{y}(y)\) and \[-B_{z}(y)m^{(0)}(y) \tag{8}\] the _local valley Zeeman interaction_ due to the magnetic field \(B_{z}(y)\). Both interactions can serve as useful _mechanisms_ for _local valley control_ with space-dependent electric / magnetic fields, in analogy to their bulk counterparts which have already been demonstrated to be useful for valleytronic device applications [34]. Note that in the low energy limit where \(E^{(0)}\rightarrow\Delta_{0}\), Eqn. (7) reduces to the earlier conjecture - \(\frac{k_{x}}{\Delta_{0}}\varepsilon_{y}(y)m(y)\)'based on the homogeneous bulk result. ## III Numerical results This section carries out numerical studies to illustrate i) nonvanishing local valley magnetic moments in the presence of inversion symmetry and ii) local magnetic and electric effects. As the local magnetic moment is dominated by the difference \(\rho_{diff}\), we look for structures with strong modulation on the atomic scale in order to create a pronounced contrast between A and B sites for \(\rho_{diff}\) to be nonvanishing. This leads to the consideration of zigzag graphene nanoribbons, where one boundary abruptly terminates at A sites and the other at B sites thus creating a strong asymmetry between A and B sites. In all figures presented below, the same nanoribbon is studied, with the gap parameter \(\Delta\) = 0 eV and ribbon width W = 65.8 \(a\) (\(a\) = 1.42 A being the bulk lattice constant). Throughout presentations, magnetic moments are expressed in units of the Bohr magneton (\(\mu_{B}\) = 5.79\(\times\)10\({}^{5}\) eV/Tesla). **Figure 4** presents local valley magnetic moments of nanoribbon subband states. **(a)** shows a few valence and conduction subbands in the ribbon. States of opposite Dirac valleys are located near \(k_{x}\) \(\sim\) -2.10 \(a^{\shortmid}\) and \(k_{x}\) \(\sim\) 2.10 \(a^{\shortmid}\), respectively. **(b)** shows VMM\({}_{1/2}\) - local valley magnetic moments accumulated over half width of the ribbon (i.e., VMM\({}_{1/2}\) = \(\int\limits_{0}^{W/2}m(y)dy\) ), for subband states already presented in **(a)**, using the same color index scheme used in **(a)**. Note that for each subband, VMM\({}_{1/2}\) flips in sign for states of opposite valleys (near \(k_{x}\)\(\sim\) -2.10 \(a^{\shortmid}\) and \(k_{x}\)\(\sim\) 2.10 \(a^{\shortmid}\), respectively). Moreover, for each \(k_{x}\), VMM\({}_{1/2}\) flips in sign, too, for corresponding conduction and valence band states related by electron-hole symmetry, for example, second conduction and valence subband states. Both flips can be attributed to the underlying time reversal symmetry and will play a role in **Figures 5** and **6** below when field effects are considered. Note that VMM\({}_{1/2}\) in **(b)** is sizable and can sometimes exceed 10 \(\mu_{B}\) in the nanribbon considered. 
**(c)** illustrates local valley magnetic moments (LVMMs) of second valence subband states (in the red curve shown in **(a)**) at a few selected \(k_{x}\)'s (-1.88 \(a^{\shortmid}\), -2.10 \(a^{\shortmid}\), and -2.31 \(a^{\shortmid}\)) near the Dirac point at \(k_{x}\) \(\sim\) -2.10 \(a^{\shortmid}\). All LVMMs shown here exhibit antisymmetry in the \(y\)-direction, giving vanishing total valley magnetic moments irrespective of \(k_{x}\). **(d)** presents \(\rho_{A}(y)\) and \(\rho_{B}(y)\) of the second valence subband state at \(k_{x}\) = -2.10 \(a^{\shortmid}\), which implies a sign oscillation in \(\rho_{diff}(y)\) and, hence, in the corresponding LVMM as well in agreement with the oscillation shown by the red curve in **(c)**. **Gapless zigzag graphene nanoribbon** **Figure 5** illustrates effects of the local valley Zeeman interaction given in Eqn. (6) and highlights the usefulness of local valley magnetic moments when total moments vanish. Introduce the magnetic field strength parameter B\({}_{\rm{z0}}\) with \(\mu_{B}\)B\({}_{\rm{z0}}=1\) meV, which corresponds to B\({}_{\rm{z0}}\sim 17\) Tesla. **(a)** compares the field-free subbands (black) with those when a locally varying, step-like magnetic field B\({}_{\rm{z}}(y)\) is applied, with the field flux confined exclusively to the lower half ribbon, i.e., B\({}_{\rm{z}}(y<\) W/2) = B\({}_{\rm{z0}}\) and B\({}_{\rm{z}}(y>\) W/2) = 0 (red). In order to interpret the graph, we apply the expression of local valley Zeeman interaction energy \({}^{*-}\int\limits_{0}^{W/2}B_{z0}m^{(0)}(y)dy\), which yields the product '-B\({}_{\rm{z0}}\) VMM\({}_{1/2}\)'. Since VMM\({}_{1/2}\) carries opposite signs for opposite valleys, as noted earlier in **Figure 4**, the interaction lifts the valley degeneracy resulting in the valley Zeeman splitting \(\sim\) 17 meV shown in the second conduction or valence subband. **(b)** compares the field-free subbands (black) with those when the whole ribbon is immersed in the uniform magnetic field given by B\({}_{\rm{z}}(y)=\) B\({}_{\rm{z0}}\) (blue). The valley Zeeman interaction energy in this case is proportional to the total valley magnetic moment and thus vanishes. As the result, the magnetic field only induces a Landau magnetic energy shift common to both valleys without breaking the valley degeneracy. **Local magnetic effects** In **Figure 6**, we illustrate effects of the Figure 4: **Local valley magnetic moments** in a zigzag nanoribbon of gapless graphene. **(a)** shows a few nanoribbon subbands, with each subband indexed by a corresponding color. The two Dirac points are located near \(k_{x}=-2.10\)\(a^{-1}\) and \(k_{x}=2.10\)\(a^{-1}\), respectively. **(b)** shows local valley magnetic moments integrated over half width of the ribbon (VMM\({}_{1/2}\)) for subband states already presented in **(a)**, using the same color index scheme given in **(a)**. **(c)** Nontrivial, antisymmetric local valley magnetic moments (LVMMs) are obtained for second valence subband states (in the red curve shown in **(a)**) at a few selected \(k_{x}\)’s near the Dirac point with \(k_{x}\)\(\sim-2.10\)\(a^{-1}\). **(d)** depicts the density distributions, \(\rho_{A}(y)\) and \(\rho_{B}(y)\), of the second valence subband state at \(k_{x}=-2.10\)\(a^{-1}\). Figure 5: **Local magnetic effects** in the same zigzag nanoribbon used in **Figure 4**. **(a)** compares field-free subbands (black) with those when a locally varying, step-like magnetic field (B\({}_{\rm{z}}(y)\)) is applied to the lower half ribbon (red). 
The valley degeneracy is lifted by B\({}_{\rm{z}}(y)\) leading to a valley Zeeman splitting \(\sim\) 17 meV. **(b)** compares field-free subbands (black) with those when a uniform magnetic field is applied (blue). It only introduces a Landau magnetic energy shift without breaking the valley degeneracy. local valley-orbit interaction given in Eqn. (6). We take \(\varepsilon_{y}(y)\) to be generated by a symmetric, piecewise linear, electrical potential \(V(y)\) with the slope given by \(\pm\,\varepsilon_{y0}\) of opposite signs for \(y<\) W/2 and \(y>\) W/2 (i.e., \(\varepsilon_{y}(y)\) = - \(\partial_{y}V\) = \(\varepsilon_{y0}\,\mathrm{sgn}(y\,\)-\(\,W\,/\,2\,\)) ). The figure compares field-free subbands (black) with those in the presence of \(V(y)\) (red). In the field-free case, the subband structure has a direct gap between the second conduction and second valence bands. But in the presence of \(V(y)\), band edge states of each subband shift in \(k_{x}\) in opposite directions, for opposite valleys (near \(k_{x}=\pm 2.10\)\(a\) -1, respectively), giving a 'valley Rashba splitting'. Moreover, band edge states of the two subbands shift in \(k_{x}\) in opposite directions creating a relative wave vector difference \(\delta k_{x}\sim 0.02\)\(a^{-1}\) between the two subbands' edges and, correspondingly, an indirect gap between the two subbands. Both foregoing \(V(y)\)-induced shifts can be explained in terms of the local valley-orbit interaction energy \(\int\limits_{-\infty}^{\infty}\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y}(y)m^{(0) }(y)dy\), as follows. When applied to the present case, the expression reduces to \(\,\,\,\,\,2\,\frac{k_{x}}{E^{(0)}}\,\varepsilon_{y0}\)\(VMM_{1/2}\) ', giving a linear-in-\(k_{x}\) energy shift in the subband dispersion with a sign dependent on both the state energy \(E^{(0)}\) and corresponding \(VMM_{1/2}\). As noted earlier in **Figure 4**, \(VMM_{1/2}\) carries opposite signs for opposite valleys as well as for second conduction and valence subbands, thus resulting in the shifts observed above. In passing, we note that due to the antisymmetry in local valley magnetic moment, a relatively simple linear potential \(V(y)\) corresponding to a uniform \(\varepsilon_{y}\) would not produce the splitting, while it would in the homogeneous bulk case with broken inversion symmetry, where it interacts with a nonvanishing \(\mu_{bulk}\) and produces the splitting. **Local electric effects** In closing the section, we briefly remark on a flexibility brought by local valley - external field interactions for valley control. In the case where the local moment exhibits a sign variation in space, as illustrated in **Figure 4**, alternative local magnetic (electrical) fields with signs and distributions locally correlated to those of the local moment may produce the same valley Zeeman splitting (valley Rashba splitting) and effect the same magnetic (electric) valley control. For example, in **Figure 5 (a)**, an alternative magnetic field given by B\({}_{x}(y<\) W/2) = 0 and B\({}_{x}(y>\) W/2) = - B\({}_{x0}\) would produce the same valley Zeeman splitting, as can verified using the Zeeman term in Eqn. (6). **IV. CONCLUSION** In conclusion, while being valley topological quantities, valley magnetic moments also carry a hidden dimension of local physics. As this work has shown, the presence of inhomogeneity in space makes possible distinct local valley phenomena which are not dictated by the traditional perspective developed from studies of homogeneous bulks. 
In order to explore the dimension, the notion of local valley magnetic moments has been introduced as a vehicle to address degrees of freedom beyond total valley magnetic moments. An operational definition Figure 6: **Local electric effects** in the same zigzag nanoribbon used in **Figure 4**. We compare field-free subbands (black) with those in the presence of a symmetric \(V(y)\) (or antisymmetric \(\varepsilon_{y}(y)\)) that varies linearly between \(\pm 0.05\)\(eV\) with a piecewise constant slope \(\pm\,\varepsilon_{y0}\). In the field-free case, it shows a direct gap between the second conduction and second valence subbands. However, effects of the electric field shift various subband edges, resulting in a valley Rashba splitting as well as an indirect gap between the two subbands, with the latter characterized by a relative conduction-valence band edge wave vector difference \(\delta k_{x}\sim 0.02\)\(a^{-1}\). is given to such moments in terms of local magnetic response. Both analytical and numerical analysis have been performed for local valley magnetic moments giving interesting findings as summarized below. In graphene, for example, the local valley magnetic moment is shown to be tied to the local site probability difference'\(\rho_{A}-\rho_{s}\) ', which suggests the breaking of local probability-based inversion symmetry, in place of structural inversion symmetry, as the condition for existence of and applications based on valley-derived magnetic moments. By relaxing the structural inversion symmetry constraint on materials, the study has expanded the family of valleytronic materials. In particular, it adds to the list gapless monolayer graphene, an important material which is relatively accessible, for magnetic moment-based experiments and applications. In addition, the local valley magnetic moment variable introduced is also application-suited, as it is directly linked to local valley- external field interactions. Specifically, where total valley magnetic moments vanish, local valley Zeeman and local valley-orbit interactions have been shown to exist and manifest pronounced magnetic and electric effects, respectively. Such effects can be exploited for local valley control and provide a conduit to 'local valleytronics' for the implementation of valleytronics. Last but not the least, the novel local valley phenomena revealed suggest the exciting direction of _valley engineering_ - design and search for inhomogeneous structures to tailor local valley physics for applications. ## Acknowledgment We thank Yen-Ju Lin for technical support in numerical calculations. We acknowledge the financial support of MoST, ROC through Contract No. MOST 110-2112-M-007-038. F.-W. C. acknowledges partial support from Academia Sinica. \({}^{\dagger}\) Corresponding author. Email: yswu@ee.nthub.edu.tw ## Appendix A Current density Consider a simple Q1D inhomogeneous structure exhibiting translational symmetry in the \(x\) direction. The Dirac Eqn. 
(1) is expanded giving rise to the following: \[\begin{split}\Delta(\gamma)F_{A}+(k_{x}-\widehat{\nu}_{y})F_{B}& =EF_{A}\\ -\Delta(\gamma)F_{A}^{{}^{\ast}}-\Delta(\gamma)F_{B}& =EF_{B}\\ \Delta(\gamma)F_{B}^{{}^{\ast}}-\Delta(\gamma)F_{A}^{{}^{\ast}}- \Delta(\gamma)F_{B}&=EF_{B}\\ \Delta(\gamma)F_{A}^{{}^{\ast}}+F_{B}(k_{x}-\widehat{\nu}_{y})F_ {B}^{{}^{\ast}}&=EF_{B}F_{A}^{{}^{\ast}}\\ -\Delta(\gamma)F_{A}F_{B}^{{}^{\ast}}+F_{A}(k_{x}+\widehat{\nu}_{ y})F_{A}^{{}^{\ast}}&=EF_{A}F_{B}^{{}^{\ast}}\end{split} \tag{10}\] Combining the four wave equations in Eqn. (10), we obtain \[2k_{x}\rho+\widehat{\nu}_{y}\rho_{diff}=2Ej_{x}^{particle}\, \tag{11}\] where \(j_{x}^{particle}=F_{A}^{{}^{\ast}}F_{B}+F_{B}^{{}^{\ast}}F_{A}\) is the particle current density in the Dirac model [3]. This gives the charge current density \[j_{x}=-\frac{k_{x}\rho}{E}-\tau\frac{\partial_{x}\rho_{diff}}{2E} \tag{12}\] shown in Eqn. (2). ## Appendix B Valley magnetic moment in the bulk In the homogeneous bulk case, the Dirac equation is given by [3] \[\begin{split} H_{{}_{\rm max}}F&=EF,\\ H_{{}_{\rm max}}&=\left(\begin{array}{cc}\Delta_{ 0}&k_{x}-i\tau k_{y}\\ k_{x}+i\tau k_{y}&-\Delta_{0}\end{array}\right),\end{split} \tag{13}\] with the following solution \[\begin{split} F_{A}&=\rho^{V2}\ (k_{x}-i\tau k_{y})\,/\left[k^{2} +(E-\Delta_{0})^{2}\right]^{V2},\\ F_{B}&=-\rho^{V2}\ (\Delta_{0}-{\rm E})\,/\left[k^{2}+(E- \Delta_{0})^{2}\right]^{V2}.\end{split} \tag{14}\] where \(E\) is the electron energy given by \((\,k^{2}+\Delta_{0}^{\ 2}\,)^{1/2}\). The substitution of Eqn. (14) into the expression of \(m\) in Eqn. (3) yields \[m=-\frac{\tau\rho_{diff}}{2E}=-\frac{\tau\rho[k^{2}-(\Delta_{0}-{\rm E})^{2}]} {2E[k^{2}+(\Delta_{0}-{\rm E})^{2}]}, \tag{15}\] which can be transformed by straightforward mathematics into the form of Eqn. (4) using the energy dispersion \(E=(\,k^{2}+\Delta_{0}^{\ 2}\,)^{1/2}\) to express \(k^{2}\) in terms of \(E\) and \(\Delta_{0}\). ## Appendix C State transformation under Inv and TR We give an overview of state transformation under Inv and TR for zigzag nanoribbons in gapless graphene with translational symmetry in the \(x\) direction, and then apply it to the local moment. For valley \(\tau\), a nanoribbon state satisfies the following Dirac equation \[\begin{split}& H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\leftleftleft.}}}}\right)}}}}}}}}}}F=EF,\\ & H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\leftleft.}}}}}}}}}}}}}}}}= \begin{pmatrix}0&k_{x}-\widehat{w}_{y}\\ k_{x}+\widehat{w}_{y},&0\end{pmatrix},\\ & F_{{}_{A}}(W\,/\,2)=F_{{}_{B}}(-W\,/\,2)=0.\end{split} \tag{10}\] The last line in Eqn. (10) provides the boundary condition on \(F\)[42]. For the discussion below, we shall denote the corresponding local valley moment of \(F\) by \(m_{\text{r}}(y)\). Under Inv, \(k_{x}\rightarrow\) -\(k_{x}\) and \(\tau\rightarrow\) -\(\tau\), so Eqn. 
(10) transforms to the one below \[\begin{split}& H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\left.}}}}}}}}}}}}}}(lm)=EF^{(lm)},\\ & H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\left.}}}}}}}}}}}}}}}(lm)=\begin{pmatrix}0&-k_{x}+ \widehat{w}_{y}\\ -k_{x}-\widehat{w}_{y}&0\end{pmatrix},\\ & F_{{}_{A}}({}^{lm})(W\,/\,2)=F_{{}_{B}}^{(lm)}(W\,/\,2)=0,\end{split} \tag{11}\] with \(F^{(lm)}\) given by \[\begin{split}& F_{{}_{A}}^{(lm)}(y)=-F_{{}_{B}}(-y),\\ & F_{{}_{B}}^{(lm)}(y)=F_{{}_{A}}(-y),\end{split} \tag{12}\] \(F^{(lm)}\) given above satisfies both the transformed Dirac equation and the boundary condition as can be easily verified. As indicated by \(F^{(lm)}\) in Eqn. (12), Inv switches A and B sites and at the same time induces the mirror reflection \(y\rightarrow\) -\(y\). The site switch effectively flips the valley index of the state and offsets the previous valley flip in the Dirac Hamiltonian. Overall, with only the reflection in effect, it results in \[\begin{split}&\text{Inv}(m_{\text{r}}(y))=m_{\text{r}}(-y). \end{split} \tag{13}\] Under TR, again \(k_{x}\rightarrow\) -\(k_{x}\) and \(\tau\rightarrow\) -\(\tau\), so Eqn. (10) becomes also one for valley '\(-\tau\)' given by \[\begin{split}& H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\leftleft.}}}}}\right)}}}}}}}}(TR)F^{(TR)}=EF^{(TR)},\\ & H_{{}_{\text{{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{\tiny{ \tiny{\tiny{\tiny{\tiny{\leftleft.}}}}}}}}\right)}}}}}}}}(TR)=\begin{pmatrix}0&-k_{x}+ \widehat{w}_{y}\\ -k_{x}-\widehat{w}_{y},&0\end{pmatrix},\\ & F_{{}_{A}}^{(TR)}(W\,/\,2)=F_{{}_{B}}^{(TR)}(-W\,/\,2)=0,\end{split} \tag{14}\] with the solution given by \[\begin{split}& F_{{}_{A}}^{(TR)}(y)=F_{{}_{A}}^{{}^{*}}(y)=F_{{} _{A}}(y),\\ & F_{{}_{B}}^{(TR)}(y)=-F_{{}_{B}}^{{}^{*}}(y)=-F_{{}_{B}}(y). \end{split} \tag{15}\] Above, we have used the fact that for the zigzag nanoribbon bound state, \(F_{A}\) and \(F_{B}\) can be taken to be real. As TR produces only a valley flip here, we obtain \[\text{TR}(m_{\text{r}}(y))=m_{\text{r}}(y)=-m_{\text{r}}(y). \tag{16}\] Last, as Eqns. (11) and (14) are identical, with the assumption that the solutions for a given \(k_{x}\) are nondegenerate, indeed as numerical shown in **Figure 4 (a)**, we conclude that \(F^{(lm)}\) and \(F^{(TR)}\) describe the same state leading to Inv(\(m_{\text{r}}(y)\)) = TR(\(m_{\text{r}}(y)\)), that is, \(m_{\text{r}}(-y)\) = -\(m_{\text{r}}(y)\). We thus conclude that \(m_{\text{r}}(y)\) is antisymmetric in zigzag nanoribbons of gapless graphene. ## References * [1] K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature **438**, 197 (2005). * [2] Y. Zhang, Y.-W. Tan, H. L. Stormer, and P. Kim, Nature **438**, 201 (2005). * [3] A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. **81**, 109 (2009). * [4] E. McCann and V. I. Fal'ko, Physical Review Letters **96**, 086805 (2006). * [5] E. V. Castro, K. S. Novoselov, S. V. Morozov, N. M. R. Peres, J. M. B. L. dos Santos, J. Nilsson, F. Guinea, A. K. Geim, and A. H. C. Neto, Physical Review Letters **99**, 216802 (2007). * [6] G. Giovannetti, P. A. Khomyakov, G. Brocks, P. J. Kelly, and J. van den Brink, Physical Review B **76**, 073103 (2007). * [7] B. Sachs, T. O. Wehling, M. I. Katsnelson, and A. I. Lichtenstein, Physical Review B **84**, 195414 (2011). * [8] K. F. Mak, C. 
Lee, J. Hone, J. Shan, and T. F. Heinz, Physical Review Letters **105**, 136805 (2010). * [9] D. Xiao, W. Yao, and Q. Niu, Phys. Rev. Lett. **99**, 236809 (2007). * [10] D. Xiao, G.-B. Liu, W. Feng, X. Xu, and W. Yao, Physical Review Letters **108**, 196802 (2012). * [11] A. Rycerz, J. Tworzydlo, and C. W. J. Beenakker, Nature Physics **3**, 172 (2007). * [12] R. V. Gorbachev, J. C. W. Song, G. L. Yu, A. V. Kretinin, F. Withers, Y. Cao, A. Mishchenko, I. V. Grigorieva, K. S. Novoselov, L. S. Levitov, and A. K. Geim, Science **346**, 448 (2014). * [13] Y. Shimazaki, M. Yamamoto, I. V. Borzenets, K. Watanabe, T. Taniguchi, and S. Tarucha, Nature Physics **11**, 1032 (2015). * [14] M. Sui, G. Chen, L. Ma, W.-Y. Shan, D. Tian, K. Watanabe, T. Taniguchi, X. Jin, W. Yao, D. Xiao, and Y. Zhang, Nature Physics **11**, 1027 (2015). * [15] F. Zhang, A. H. MacDonald, and E. J. Mele, Proceedings of the National Academy of Sciences **110**, 10546 (2013). * [16] I. Martin, Ya. M. Blanter, and A. F. Morpurgo, Physical Review Letters **100**, 036804 (2008). * [17] L. Ju, Z. Shi, N. Nair, Y. Lv, C. Jin, J. Velasco, C. Ojeda-Aristizabal, H. A. Bechtel, M. C. Martin, A. Zettl, J. Analytis, and F. Wang, Nature **520**, 650 (2015). * [18] J. Li, R.-X. Zhang, Z. Yin, J. Zhang, K. Watanabe, T. Taniguchi, C. Liu, and J. Zhu, Science **362**, 1149 (2018). * [19] P. San-Jose and E. Prada, Physical Review B **88**, 121408 (2013). * [20] S. Huang, K. Kim, D. K. Efimkin, T. Lovorn, T. Taniguchi, K. Watanabe, A. H. MacDonald, E. Tutuc, and B. J. LeRoy, Physical Review Letters **121**, 037702 (2018). * [21] P. Rickhaus, J. Wallbank, S. Slizovskiy, R. Pisoni, H. Overweg, Y. Lee, M. Eich, M.-H. Liu, K. Watanabe, T. Taniguchi, T. Ihn, and K. Ensslin, Nano Letters **18**, 6725 (2018). * [22] P. Gosselin, A. Berard, H. Mohrbach, and S. Ghosh, The European Physical Journal C **59**, 883 (2009). * [23] G. Y. Wu, N.-Y. Lue, and L. Chang, Phys. Rev. B **84**, 195463 (2011). * [24] C. Gold, A. Knothe, A. Kurzmann, A. Garcia-Ruiz, K. Watanabe, T. Taniguchi, V. Fal'ko, K. Ensslin, and T. Ihn, Physical Review Letters **127**, 046801 (2021). * [25] J. Pereira, F. Peeters, R. Costa Filho, and G. Farias, Journal of Physics: Condensed Matter **21**, 045301 (2009). * [26] N. Rohling and G. Burkard, New Journal of Physics **14**, 083008 (2012). * [27] Y. Wu, Q. Tong, G.-B. Liu, H. Yu, and W. Yao, Physical Review B **93**, 045313 (2016). * [28] G. Szechenyi, L. Chirolli, and A. Palyi, 2D Materials **5**, 035004 (2018). * [29] J. Pawlowski, D. Zebrowski, and S. Bednarek, Physical Review B **97**, 155412 (2018). * [30] M.-K. Lee, N.-Y. Lue, C.-K. Wen, and G. Y. Wu, Physical Review B **86**, 165411 (2012). * [31] K. F. Mak, K. He, J. Shan, and T. F. Heinz, Nature Nanotechnology **7**, 494 (2012). * [32] Z. Wu, B. T. Zhou, X. Cai, P. Cheung, G.-B. Liu, M. Huang, J. Lin, T. Han, L. An, Y. Wang, S. Xu, G. Long, C. Cheng, K. T. Law, F. Zhang, and N. Wang, Nature Communications **10**, 611 (2019). * [33] J. Lee, W. Heo, M. Cha, K. Watanabe, T. Taniguchi, J. Kim, S. Cha, D. Kim, M.-H. Jo, and H. Choi, Nature Communications **12**, 1635 (2021). * [34] X. Li, W. Cai, J. An, S. Kim, J. Nah, D. Yang, R. Piner, A. Velamakuni, I. Jung, E. Tutuc, S. K. Banerjee, L. Colombo, and R. S. Ruoff, Science **324**, 1312 (2009). * [35] T. Wu, X. Zhang, Q. Yuan, J. Xue, G. Lu, Z. Liu, H. Wang, H. Wang, F. Ding, Q. Yu, X. Xie, and M. Jiang, Nature Mater **15**, 43 (2016). * [36] Y. Kim, E. Moyen, H. Yi, J. Avila, C. Chen, M. C. Asensio, Y. H. Lee, and D. 
Pribat, 2D Mater. **5**, 035008 (2018). * [37] D. A. Boyd, W.-H. Lin, C.-C. Hsu, M. L. Teague, C.-C. Chen, Y.-Y. Lo, W.-Y. Chan, W.-B. Su, T.-C. Cheng, C.-S. Chang, C.-I. Wu, and N.-C. Yeh, Nat Commun **6**, 6620 (2015). * [38] M. Wang, M. Huang, D. Luo, Y. Li, M. Choe, W. K. Seong, M. Kim, S. Jin, M. Wang, S. Chatterjee, Y. Kwon, Z. Lee, and R. S. Ruoff, Nature **596**, 519 (2021). * [39] J. Li, M. Chen, A. Samad, H. Dong, A. Ray, J. Zhang, X. Jiang, U. Schwingenschlogl, J. Domke, C. Chen, Y. Han, T. Fritz, R. S. Ruoff, B. Tian, and X. Zhang, Nat. Mater. **21**, 740 (2022). * [40] P. R. Wallace, Phys. Rev. **71**, 622 (1947). * [41] M. S. Dresselhaus, G. Dresselhaus, and A. Jorio, _Group Theory: Application to the Physics of Condensed Matter_ (Springer-Verlag, Berlin, 2008). * [42] L. Brey and H. A. Fertig, Physical Review B **73**, 235411 (2006). * [43] J. M. Ziman, _Principles of the Theory of Solids_ (Cambridge university press, 1972). * [44] J. D. Jackson, _Classical Electrodynamics_, Third Edition (New York, 1998). * [45] R. Peierls, Zeitschrift fur Physik **80**, 763 (1933). * [46] F.-W. Chen, Z.-H. Huang, and Y.-S. G. Wu, "Valley field mechanics: a local perspective beyond valley flavor", arXiv:2208.02915 (2022).
Valley magnetic moments play a crucial role in valleytronics in 2D hexagonal materials. Traditionally, based on studies of quantum states in homogeneous bulks, it has been widely believed that only materials with broken structural inversion symmetry can exhibit nonvanishing valley magnetic moments. This constraint excludes inversion-symmetric materials from the relevant applications, as specifically exemplified by gapless monolayer graphene despite its technological advantages. This work revisits valley-derived magnetic moments in a broad context that also covers inhomogeneous structures. It generalizes the notion of the valley magnetic moment of a state from an integrated total quantity to a "local valley magnetic moment" with a space-varying distribution. In inhomogeneous structures with inversion symmetry, for example the zigzag nano
2309.10282
Constraining hybrid potential scalar field cosmological model in Lyra's geometry with recent observational data
In the current study, we investigate a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we presupposed a variable displacement vector as an element of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density in the hybrid function of redshift $z$, confirming the essential transition behavior of the universe from a decelerating era to the present accelerated scenario. We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, taking Markov Chain Monte Carlo (MCMC) analysis into account. For the proposed model, the best estimated values of parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are $ H_0 = 71.15\pm 0.26$ km/s/Mpc, $ \Omega_{m0}=0.2625\pm 0.0024$, $ \Omega_{\phi0} = 0.676\pm0.038$, $ \alpha=-0.22\pm0.13$, $n = 0.096\pm0.079$, and $k = 0.38\pm0.32$. The model exhibits a flipping nature, and the redshift transition occurs at $z_t = 0.756^{+0.005}_{-0.015}$. The current value of the decelerated parameter for the proposed model is calculated as $q_0 = -0.625^{+0.067}_{-0.085}$ for the combined dataset. Some dynamical properties of the model like energy density ($\rho_{\phi}$), scalar field pressure ($p_{\phi}$), EoS parameter of scalar field ($\omega_{\phi}$), and effective EoS parameter ($\omega_{eff}$) are analyzed and presented. Further, we have also examined the statefinder diagnosis and jerk parameters of the derived model. The total density parameter for the derived model is found to be unity which is in nice agreement with recent standard findings.
Vinod Kumar Bhardwaj, Anil Kumar Yadav, Lalit Kumar Gupta, Rajendra Prasad, Sudhir Kumar Srivastava
2023-09-19T03:11:07
http://arxiv.org/abs/2309.10282v2
###### Abstract
In the current study, we investigate a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we presupposed a variable displacement vector as an element of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density in the hybrid function of redshift \(z\), confirming the essential transition behavior of the universe from a decelerating era to the present accelerated scenario. We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon, taking Markov Chain Monte Carlo (MCMC) analysis into account. For the proposed model, the best estimated values of parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), \(\alpha=-0.22\pm 0.13\), \(n=0.096\pm 0.079\), and \(k=0.38\pm 0.32\). The model exhibits a flipping nature, and the redshift transition occurs at \(z_{t}=0.756^{+0.005}_{-0.015}\). The current value of the deceleration parameter for the proposed model is calculated as \(q_{0}=-0.625^{+0.067}_{-0.085}\) for the combined dataset. Some dynamical properties of the model like energy density (\(\rho_{\phi}\)), scalar field pressure (\(p_{\phi}\)), EoS parameter of the scalar field (\(\omega_{\phi}\)), and effective EoS parameter (\(\omega_{eff}\)) are analyzed and presented. Further, we have also examined the statefinder diagnostics and jerk parameter of the derived model. The total density parameter for the derived model is found to be unity, which is in good agreement with recent standard findings.

**Current Observation constraints on Hybrid potential scalar field cosmological model in Lyra Geometry** Vinod Kumar Bhardwaj Department of Mathematics, GLA University, Mathura-281 406, Uttar Pradesh, India E-mail: dr.vinodbhardwaj@gmail.com

## 1 Introduction
In order to explain how gravity interacts with space and time, Einstein developed the general relativity (GR) theory at the beginning of the 20th century. He made a clear connection between space-time geometry and matter and radiation. Relating energy and momentum, the fundamental characteristics of matter and radiation, to the curvature of space-time, Einstein presented the field equation of GR as \(R_{ij}-\frac{1}{2}Rg_{ij}=8\pi GT_{ij}\) [1]. Since the theoretical development of general relativity, which connects geometry with gravity, several approaches and theories have been proposed in search of additional geometrization models. The idea of geometrizing gravity together with the other fundamental forces has also been proposed. In this direction, Weyl suggested the idea of geometrizing gravity and electromagnetism together using Riemannian geometry, to unite them under the umbrella of a "unified field theory" [2]. Weyl suggested the use of Riemannian geometry and a vector potential to describe electromagnetic forces. In general, as a vector is carried across space, the net displacement is determined by the initial and final positions; in Weyl's construction, however, the dependence on the history of the path travelled results in the non-integrability of length transfer. Weyl's approach of using a vector potential complicated the issue and was hence discarded [2, 3].
In the subsequence, several additional theories have also been proposed, either to replace Einstein's theory of relativity or to reduce the complexity of Weyl's theory. Lyra proposed the use of the Gauge function with Riemannian geometry in the progress [4]. Lyra's idea resembles with Weyl's method and maintains the integrability of length transfer, as would be expected in Riemannian geometry. Since Lyra's geometrization behaves in accordance with Einstein's principle, it has frequently been used by many researchers to predict and explain cosmic phenomena [5, 6, 7, 8]. In Lyra's context, Sen suggested a static cosmic model behaving like Einstein's model under static conditions, though model suffers red shift [9]. On the other hand, Halford used Lyra's geometry to explore non-static cosmic events. Halford also pointed that a constant Gauge vector field developed when the Gauge function was introduced into Lyra's geometry and behaves similarly to the cosmological constant \(\Lambda\)[10]. Imposing the Lyra's suggestions, Soleng, additional explored the importance of Gauge vector field as a source of creation in cosmic theories [8]. Under the observational limits, several cosmological theories are developed on the basis of Lyra's geometry depicting similar consequences as anticipated in Einstein's theory of relativity [11, 12, 13]. Several astrophysical experiments have confirmed that the cosmos is expanding at an accelerated rate in present time [14, 15, 16, 17]. According to many recent studies [18, 19, 20], dark energy (DE) is anticipated to play a major role in the universe's expansion, whereas dark matter is anticipated to be a key component in the growth of large-scale structures (LSS). The mysterious kind of energy called as DE, which exerts a tremendous amount of repulsive (negative) pressure, is what causes the cosmos to expand. With the experimentally confirmations of present cosmic reality, theoretical researchers are motivated to create universe models in various frameworks. The cosmological term has been believed to be a suitable replacement of DE because of its repulsive behavior [21]. In literature, to explain the universe's present accelerated expansion, a variety of alternative hypotheses without the cosmological constant (CC) have been suggested. Each of these theories has a different prediction in describing the characteristics of DE and cosmic behaviour of the universe. In order to fit observational data, some of these theories also have extra parameters that can be adjusted. Although the cosmological constant matches the scientific results well and is validated by several experiments, it has failed to characterize the inflationary period of the cosmos. In addition to modified theories, several scalar field models are also introduced in theoretical cosmology, to address these issues of inflationary age and to describe the current expanding era of the cosmos [22, 23]. In these studies, the scalar field (\(\phi\)) is considered as an assumption for the dark energy component which produces the negative pressure along with a reducing potential (\(V(\phi)\)). In literature, various cosmological research depending on the scalar field is suggested to characterize the dynamics of the cosmos [22, 23, 24, 25]. The quintessence is an interesting scalar field model that precisely avoids the conventional issues of fine-tuning and cosmic coincidence and depicts the present cosmic reality [24, 25]. 
Johri [26] was the first to propose the idea of tracking, indicating a certain direction to explain the current cosmic scenario using the potential of the tracker. This idea was strongly supported by the observational estimates. Numerous quintessence models have been proposed in literary works. For a non-minimal relationship between dark matter and quintessence [27, 28, 29], these concepts include the potential for a scalar field to evolve under the influence of an unconventional kinetic term. The important applications of a variable EoS (Equation of State) parameter in the framework of scalar-tensor theory can be seen in Ref. [30, 31]. The existence of the scalar field in astrophysical investigations is also acknowledged by several fundamental theories. Numerous cosmic models have recently been developed in various scalar field theory frameworks [30, 31, 32, 33, 34]. Kamenshchik et al. examined the Chaplygin gas DE model dark with the aid of a special form of EoS [35]. In the current study, we investigated a scalar field cosmological model with Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, we assumed a variable displacement vector as an element of Lyra's geometry. The model parameters are extracted using the most recent observational data sets from OHD, BAO/CMB, and Pantheon. The manuscript is developed in the following manner. The model and its solutions taking hybrid scalar field density are described in section 2. Observational data and methodology for constraining the model parameters are mentioned in section 3. The features and dynamical characteristics of the model are discussed in section 4. A brief concluding summary of the proposed model is declared in section 5. ## 2 Field equations and its solution Following Sen [9], we consider the action proposed for gravitational field equations in Lyra's geometry. \[S_{\psi}=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\psi^{4}R+\mathcal{L}_{m} \bigg{]} \tag{1}\] where \(\mathcal{L}_{m}\) stands for matter Lagrangian and \(8\pi G=1=c\). The field equations in Lyra's geometry [3, 4, 5, 9, 10] are recast as \[R_{ij}-\frac{1}{2}Rg_{ij}+\frac{3}{4}\psi_{i}\psi_{j}-\frac{3}{2}\psi_{i} \psi^{i}=T_{ij} \tag{2}\] where, perfect fluid's energy-momentum tensor \(T_{ij}\) is described by \(T_{ij}=\frac{-2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{\delta g^{ ij}}\), \(R_{ij}\) represents the Ricci tensor, scalar curvature is denoted by R, and displacement vector \(\psi_{i}\) is the function of time and is defined as \(\psi_{i}=(\beta(t),0,0,0)\). For the purpose of modelling structure of cosmos and to study its evolutionary geometry, we consider a 4-dimensional FRW space-time universe which is flat, homogeneous, and isotropic in nature. \[ds^{2}=a(t)^{2}\left(dx^{2}+dy^{2}+dz^{2}\right)-dt^{2} \tag{3}\] where average scale factor \(a(t)\) is used to estimate the cosmic growth of the universe with time. For the above line element, the Ricci scalar can determined in the form \(R=-(\dot{H}+2H^{2})\), here \(H=\frac{\dot{a}}{a}\) is the Hubble parameter. In our study, we assumed that the universe has a flat geometry because this is a usual forecast of the inflationary model which is also confirmed by several experimental observations, such as the LSS surveys and the CMB measurements [16, 36, 37, 20]. 
The flat universe model receives more attention in present cosmological studies because it is simple and needs only a few extra parameters compared to the \(\Lambda\)CDM base model. For an ideal fluid, the energy-momentum tensor can be recast in terms of energy density, velocity, and fluid pressure as \(T_{ij}^{m}=(p_{m}+\rho_{m})u_{i}u_{j}-p_{m}g_{ij}\), where \(\rho_{m}\) and \(p_{m}\) are the energy density and pressure of the matter. In the co-moving coordinate system, the field equations (2) for the metric (3) take the form \[3H^{2}-\frac{3}{4}\beta^{2}=\rho_{m} \tag{4}\] \[2\dot{H}+3H^{2}+\frac{3}{4}\beta^{2}=-p_{m} \tag{5}\] The action of a scalar field is the mathematical statement of its interaction with gravity. A hypothetical field called the scalar field has been proposed to explain a variety of physical phenomena, including inflation, DE, and the Higgs mechanism. The action generalizes the theory of GR using basic concepts of scalar-tensor gravitational theories. In this situation, the scalar field plays a vital role in adjusting the strength of the gravitational force, which gives rise to a wide range of physical phenomena. Usually, for a scalar field, the action is expressed in terms of the scalar field and its derivatives as \[S_{\phi}=\int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\partial_{\nu}\phi\partial^{\nu}\phi-V(\phi)\bigg{]} \tag{6}\] where \(\phi\) and \(V(\phi)\) are the scalar field and scalar potential, respectively. In the scalar field background, the Klein-Gordon equation reads \[\frac{d^{2}\phi(t)}{dt^{2}}+3\frac{d\phi(t)}{dt}H+\frac{dV(\phi)}{d\phi}=0 \tag{7}\] For the scalar field, the energy-momentum tensor is \(T^{\phi}_{ij}=(\rho_{\phi}+p_{\phi})u_{i}u_{j}-p_{\phi}g_{ij}\). The pressure \(p_{\phi}\) and energy density \(\rho_{\phi}\) for the scalar field are expressed as [38, 39] \[p_{\phi}=\frac{1}{2}\dot{\phi}^{2}-V(\phi) \tag{8}\] \[\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}+V(\phi) \tag{9}\] where \(V(\phi)\) and \(\frac{\dot{\phi}^{2}}{2}\) respectively represent the potential and kinetic energies, both functions of \(\phi\). The scalar field is frequently coupled to the gravitational field through the scalar curvature, which connects the Ricci curvature to the scalar field. This coupling modifies the Einstein equations by effectively changing the gravitational constant. The action due to this coupling can be stated as follows: \[A = S_{\psi}+S_{\phi} \tag{10}\] \[= \int d^{4}x\sqrt{-g}\bigg{[}\frac{1}{2}\psi^{4}R+\mathcal{L}_{m}+\bigg{(}\frac{1}{2}\partial_{\nu}\phi\partial^{\nu}\phi-V(\phi)\bigg{)}\bigg{]}\] Thus, the Friedmann field equations, considering matter and the scalar field as the source, may be expressed as \[3H^{2}=\rho_{eff}=\rho_{m}+\rho_{\phi}+\rho_{\beta} \tag{11}\] \[2\dot{H}+3H^{2}=-p_{eff}=-(p_{m}+p_{\phi}+p_{\beta}) \tag{12}\] where \(p_{\beta}=\rho_{\beta}=\frac{3}{4}\beta^{2}\). Here \(p_{\beta}\) and \(\rho_{\beta}\) are the pressure and energy density due to the displacement vector \(\beta\). It is crucial to keep in mind that for a stiff fluid \(p_{\beta}=\rho_{\beta}\). Our universe has already experienced a time when the pressure and matter density were equal. Assuming the universe to be dust filled (\(p_{m}=0\)), the equations of energy conservation for the scalar field and matter are established as \[\frac{d\rho_{\phi}}{dt}+3(p_{\phi}+\rho_{\phi})H=0 \tag{13}\] \[\frac{d}{dt}\rho_{m}+3\rho_{m}H=0 \tag{14}\] From Eqs.
(13) and (14), we get the following solutions \[\omega_{\phi}=-\bigg{[}1+\frac{1}{3H}\bigg{(}\frac{\rho_{\phi}^{\cdot}}{\rho_{ \phi}}\bigg{)}\bigg{]} \tag{15}\] where \(\omega_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}\) denotes the scalar field's EoS parameter. \[\rho_{m}=\rho_{m0}\frac{1}{a^{3}} \tag{16}\] where integrating constant \(\rho_{m0}\) is referred as the present value of matter energy density. In general, it is difficult to solve the system of Eqs. (11) and (12) involving \(\rho_{\phi},\ \ \rho_{\beta},\ p_{\phi},\) and \(H\) as unknows. Thus, we need some more variable or parametrized relations to provide a solution to the system. It is crucial to investigate cosmic models other than the cosmological constant because it is insufficient to fully explain the universe's accelerated expansion. Although various physical explanations are mentioned for choosing such constraints, model-independent approaches are well-known choices that are based on the specific parametrization of the EoS parameter, deceleration parameter, and energy density [40]. In the case of DE cosmic models, the EoS parameter expresses a relation between energy density and pressure. The model with the cosmological constant is assumed as the standard DE model where the EoS parameter remains constant and its value is found as -1. However, the study of variable EoS parameters provides information on the underlying physics of DE. A simplest two-parameter model known as Chevallier-Polarski-Linder (CPL) parametrization has the potential to detect variations from a fixed EoS value [41, 42]. To examine DE possibilities beyond the cosmological constant, parametrizations that are considerably more complex can also be utilized, including the Jassal-Bagla-Padmanabhan (JBP) [43, 44], the hybrid [45], and the BA [46]. Another approach is to parametrize the DE energy density as a cosmic time t (or, alternatively, redshift z) function. Polynomial expansions and principal component analysis are two techniques that can be used to achieve this [47, 48, 49, 45]. These approaches can provide insight into how DE behaved over different cosmic eras. Here, the scalar field's energy density is considered as the source of dark energy and suitably parametrized in the following form \[\rho_{\phi}=\rho_{\phi 0}(1+z)^{\alpha}e^{nz} \tag{17}\] where \(\alpha\) and \(n\) are constants, and \(\rho_{\phi 0}\) is the present value of universe's critical density. These model parameters will be constrained from observational datasets. In the background of scalar fields theory, several cosmological models and high energy theories are explored using hybrid energy density as an explicit choice [45]. In the present study, our focus is to utilize the hybrid parametrization of the potential as a phenomenological method to examine the evolutionary behavior of the cosmos in the framework of scalar-tensor theory. It is crucial to emphasize that our goal is not to describe certain high-energy theories of physics that anticipate the specific form of the hybrid potential mentioned in Eq. (17). We instead use this parametrization as a phenomenological technique to study the behaviour and implications of the scalar field dark energy hypothesis. For redshift transformation, we can utilize the relation \(a=\frac{a_{0}}{1+z}\), where \(a\) is the average value of scale factor and present value \(a_{0}\) is assumed to be 1. 
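As a quick symbolic cross-check of this parametrization (a minimal sketch in Python/sympy, not part of the original derivation), one can insert Eq. (17) into Eq. (15), using \(\dot{z}=-(1+z)H\), and recover the scalar-field EoS parameter quoted as Eq. (22) below:

```python
# Symbolic cross-check (sympy): insert the hybrid parametrization Eq. (17) into
# Eq. (15), using dz/dt = -(1+z)H, and recover the EoS parameter of Eq. (22).
import sympy as sp

z, alpha, n, rho0 = sp.symbols('z alpha n rho0')

rho_phi = rho0 * (1 + z)**alpha * sp.exp(n*z)                       # Eq. (17)
omega_phi = -(1 - (1 + z)/3 * sp.diff(rho_phi, z)/rho_phi)          # Eq. (15) rewritten in z

print(sp.simplify(omega_phi))                                       # equals (alpha + n*z + n - 3)/3
print(sp.simplify(omega_phi - sp.Rational(1, 3)*(alpha + n*z + n - 3)))  # -> 0
```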
Thus, using Eq.(16), the matter energy density in terms of redshift \(z\) can be calculated as \[\rho_{m}=\left(1+z\right)^{3}\rho_{m0} \tag{18}\] Now, from Eqs. (17), (18), and (11), we get \[3H^{2}=\left(1+z\right)^{3}\rho_{m0}+\rho_{\phi 0}(1+z)^{\alpha}e^{nz}+\rho_{\beta} \tag{19}\] Using the gauge function \(\rho_{\beta}=\beta_{0}a^{-2k}\), Eq.(19) can be recast as \[H(z)=H_{0}\sqrt{\left(1+z\right)^{3}\Omega_{m0}+(1+z)^{\alpha}e^{nz}\,\Omega_{\phi 0}+\Omega_{k0}(1+z)^{2k}} \tag{20}\] where \(\Omega_{m}=\frac{\rho_{m}}{\rho_{c}}\), \(\Omega_{\phi}=\frac{\rho_{\phi}}{\rho_{c}}\), and \(\Omega_{\beta}=\frac{\rho_{\beta}}{\rho_{c}}\) (with \(\Omega_{k0}\equiv\Omega_{\beta}(z=0)\)) are the dimensionless density parameters for the proposed model. These dimensionless parameters play a key role in explaining the whole content of the cosmos. Here, \(\rho_{c}=3H^{2}\) is the critical density of the universe, \(H_{0}\) denotes the present value of the Hubble constant, and the subscripted \(\Omega_{i0}\) represent the values of the density parameters at \(z=0\). Thus, Eq.(20) can be reduced as follows for \(z=0\). \[\Omega_{m0}+\Omega_{\phi 0}+\Omega_{k0}=1 \tag{21}\] From Eq. (15), the EoS parameter of the scalar field can be derived as \[\omega_{\phi}=\frac{1}{3}(\alpha+nz+n-3) \tag{22}\] So, for the proposed model the effective EoS parameter reads \[\omega_{eff} = \frac{p_{eff}}{\rho_{eff}}=\frac{p_{\phi}+p_{\beta}}{\rho_{m}+\rho_{\phi}+\rho_{\beta}}\] \[= \frac{\left(2k-3\right)\left(1-\Omega_{m0}-\Omega_{\phi 0}\right)\left(z+1\right)^{2k}+\Omega_{\phi 0}e^{nz}(\alpha+nz+n-3)(z+1)^{\alpha}}{3\bigg{[}\left(1-\Omega_{m0}-\Omega_{\phi 0}\right)\left(z+1\right)^{2k}+\Omega_{m0}(z+1)^{3}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\bigg{]}} \tag{23}\] Now, for the proposed model, the expressions of the density parameters for matter, the scalar field, and the Lyra factor, respectively, are given in the following forms. \[\Omega_{m}(z)=\frac{\rho_{m}}{3H^{2}}=\frac{(z+1)^{3}\Omega_{m0}}{\left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)(z+1)^{2k}+\Omega_{m0}(z+1)^{3}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}} \tag{24}\] \[\Omega_{\phi}(z)=\frac{\rho_{\phi}}{3H^{2}}=\frac{e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}}{\left(z+1\right)^{2k}\left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}} \tag{25}\] \[\Omega_{\beta}(z)=\frac{\rho_{\beta}}{3H^{2}}=\frac{(z+1)^{2k}\left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)}{\left(z+1\right)^{2k}\left(-\Omega_{m0}-\Omega_{\phi 0}+1\right)+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}} \tag{26}\] From Eqs. (8), (9), (13), and (14), the kinetic and potential energies of the scalar field are given through the following expressions. \[\frac{\dot{\phi}^{2}}{2} = -\frac{1}{2}H_{0}^{2}\bigg{[}(2k-3)\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)(z+1)^{2k}-e^{nz}\Omega_{\phi 0}(z+1)^{\alpha}(\alpha+nz+n)\bigg{]} \tag{27}\] \[V(\phi) = \frac{1}{2}H_{0}^{2}\bigg{[}(2k-3)\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)(z+1)^{2k}-e^{nz}\Omega_{\phi 0}(z+1)^{\alpha}(\alpha+nz+n-6)\bigg{]} \tag{28}\] The parameters (\(H_{0}\), \(\Omega_{m0}\), \(\Omega_{\phi 0}\), \(\alpha\), \(n\), \(k\)) have a significant impact on the model presented in Eq. (20) and determine its behavior and cosmological characteristics. In the next segment, our aim is to analyze current experimental data to better recognize the significance of the present model.
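As an illustration of how these parameters shape the background evolution, the following minimal numerical sketch (our own illustration, assuming numpy and the joint best-fit values quoted in this work) evaluates Eqs. (20) and (23)-(26):

```python
# Numerical sketch (our own illustration): background quantities of Eqs. (20)
# and (23)-(26) at the joint best-fit values quoted in this work.
import numpy as np

H0, Om0, Op0 = 71.15, 0.2625, 0.676            # km/s/Mpc, matter, scalar field
alpha, n, k = -0.22, 0.096, 0.38
Ob0 = 1.0 - Om0 - Op0                          # Lyra (displacement-vector) term, Eq. (21)

def E2(z):
    """(H/H0)^2 from Eq. (20)."""
    return Om0*(1+z)**3 + Op0*(1+z)**alpha*np.exp(n*z) + Ob0*(1+z)**(2*k)

def omega_eff(z):
    """Effective EoS parameter, Eq. (23)."""
    num = (2*k - 3)*Ob0*(1+z)**(2*k) + Op0*np.exp(n*z)*(alpha + n*z + n - 3)*(1+z)**alpha
    return num / (3.0*E2(z))

for z in (0.0, 0.5, 1.0, 2.0):
    e2 = E2(z)
    Om = Om0*(1+z)**3 / e2                      # Eq. (24)
    Op = Op0*(1+z)**alpha*np.exp(n*z) / e2      # Eq. (25)
    Ob = Ob0*(1+z)**(2*k) / e2                  # Eq. (26)
    print(z, H0*np.sqrt(e2), Om, Op, Ob, omega_eff(z))
```

At \(z=0\) this reproduces \(\Omega_{m0}+\Omega_{\phi 0}+\Omega_{\beta 0}=1\) and \(\omega_{eff}\approx-0.75\), consistent with the values reported below.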
We specifically intend to study how the behavior of the cosmological parameters is affected by constraining the values of the important parameters (\(H_{0}\), \(\Omega_{m0}\), \(\Omega_{\phi 0}\), \(\alpha\), \(n\), \(k\)).

## 3 Datasets and Cosmological Constraints Methods

### Supernovae type Ia
We have taken into consideration the sample consisting of 1048 points of the Pantheon compilation with redshifts between 0.01 and 2.26. 276 SNIa points from the PanSTARRS1 Medium Deep Survey, Low-\(z\), and HST samples are included in this sample [50, 51]. For the sample of Pantheon predictions, the \(\chi^{2}\) measure is defined by the following relation. \[\chi^{2}_{SN}=(\mu_{obs}-\mu_{th})^{T}\left(C^{-1}_{SN}\right)(\mu_{obs}-\mu_{th}) \tag{29}\] where \(\mu_{th}=5\log_{10}\frac{c\,D_{L}}{H_{0}\,\mathrm{Mpc}}+25\), \(\mu_{obs}\) is the observed distance modulus, and, for the Pantheon sample, \(C_{SN}\) denotes the covariance matrix [50]. \(H_{0}\) indicates the Hubble rate while \(c\) corresponds to the speed of light. For a flat FRW universe, the luminosity distance is expressed as \(D_{L}(z)=(1+z)H_{0}\int_{0}^{z}\frac{dx^{\prime}}{H(x^{\prime})}\). To limit the parameters of the proposed model for the Pantheon compilation sample, we use the following statistical measure. \[\chi^{2}_{Pantheon}=\Delta\mu^{T}\cdot C^{-1}_{Pantheon}\cdot\Delta\mu \tag{30}\] in which \(\Delta\mu=\mu_{obs}-\mu_{th}-M\) and \(M\) corresponds to a nuisance parameter. The entire collection of full and binned Pantheon supernova data is available online [52].

### BAO/CMB data
To determine the restrictions on the parameters of the model, we took into account the BAO [53, 54, 55] and CMB [56, 57] measurements dataset. Six BAO/CMB data points have been considered (Table 1). For the BAO sample, the predictions from a sample of galaxy surveys like SDSS DR7, 6dF, and WiggleZ have been utilized [53, 54, 55]. However, the CMB measurement under consideration is based on WMAP7 observations [56]. A similar explanation of the given sample can be seen in [58, 45], but [58] provides more information on the approach used and the sample to constrain the parameters. The angular diameter distance for the sample is defined as \(D_{A}=\frac{D_{L}}{(1+z)^{2}}\), where \(D_{L}\) indicates the luminosity distance [58], and the dilation scale is described by \(D_{V}(z)=\left[D_{A}^{2}(z)(1+z)^{2}\frac{cz}{H(z)}\right]^{1/3}\). \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline \multicolumn{7}{|c|}{Values of \(\Upsilon(z)\) for different points of \(z_{BAO}\)} \\ \hline \(z_{BAO}\) & \(0.106\) & \(0.2\) & \(0.35\) & \(0.44\) & \(0.6\) & \(0.73\) \\ \hline \(\Upsilon(z)\) & \(30.95\pm 1.46\) & \(17.55\pm 0.60\) & \(10.11\pm 0.37\) & \(8.44\pm 0.67\) & \(6.69\pm 0.33\) & \(5.45\pm 0.31\) \\ \hline \hline \end{tabular} Here, \(\Upsilon(z)=d_{A}(z_{*})/D_{V}(z_{BAO})\) and \(z_{*}\approx 1091\). For limiting the parameters of the model, the chi-square estimator for the BAO sample is described in the following form [58, 59, 45].
\[\chi^{2}_{BAO/CMB}=X^{T}C^{-1}X \tag{31}\] where \[X=\begin{pmatrix}\frac{d_{A}(z_{*})}{D_{V}(0.106)}-30.95\\ \frac{d_{A}(z_{*})}{D_{V}(0.20)}-17.55\\ \frac{d_{A}(z_{*})}{D_{V}(0.35)}-10.11\\ \frac{d_{A}(z_{*})}{D_{V}(0.44)}-8.44\\ \frac{d_{A}(z_{*})}{D_{V}(0.60)}-6.69\\ \frac{d_{A}(z_{*})}{D_{V}(0.73)}-5.45\\ \end{pmatrix}\] and \(C^{-1}\) is given by [58] \[C^{-1}=\begin{pmatrix}0.48435&-0.101383&-0.164945&-0.0305703&-0.097874&-0.106738\\ -0.101383&3.2882&-2.45497&-0.787898&-0.252254&-0.2751\\ -0.164945&-2.454987&9.55916&-0.128187&-0.410404&-0.447574\\ -0.0305703&-0.0787898&-0.128187&2.78728&-2.75632&1.16437\\ -0.097874&-0.252254&-0.410404&-2.75632&14.9245&-7.32441\\ -0.106738&-0.2751&-0.447574&1.16437&-7.32441&14.5022\\ \end{pmatrix}.\]

### Observational Hubble Data (OHD)
We have taken 57 \(H(z)\) data points, with \(z\) ranging between 0.07 and 2.36, obtained from the cosmic chronometric technique, galaxy clusters [32], and the differential age procedure. The Hubble parameter can then be expressed in terms of redshift as \((1+z)H(z)=-\frac{dz}{dt}\). Now, the \(\chi^{2}\) estimator is used to constrain the model's parameters by comparing the theoretical predictions (\(E_{th}\)) with the experimental values (\(E_{obs}\)). \[\chi^{2}_{OHD}=\sum_{i=1}^{57}\frac{\left[E_{th}(z_{i})-E_{obs}(z_{i})\right]^{2}}{\sigma_{i}^{2}} \tag{32}\] where \(\sigma_{i}\) is the error in the experimental estimations of \(H(z)\). Thus, for a combined sample including the BAO/CMB, OHD, and SNIa data, the joint statistic is defined in the following manner [45, 58, 59]. \[\chi^{2}_{tot}=\chi^{2}_{Pantheon}+\chi^{2}_{BAO/CMB}+\chi^{2}_{OHD} \tag{33}\] The \(\chi^{2}_{tot}\) statistic can be minimized to find the parameter values that best fit the combined sample of the SNIa, OHD, and BAO/CMB datasets. In the maximum likelihood approach, the total likelihood function \(\mathcal{L}_{tot}=\exp(-\chi^{2}_{tot}/2)\) is the product of the individual likelihood functions of each dataset, \(\mathcal{L}_{tot}=\mathcal{L}_{Pantheon}\,\mathcal{L}_{BAO/CMB}\,\mathcal{L}_{OHD}\). The likelihood function \(\mathcal{L}_{tot}(x^{*})\) is maximized or, alternatively, \(\chi^{2}_{tot}(x^{*})=-2\ln\mathcal{L}_{tot}(x^{*})\) is minimized to get the most plausible values of the parameters. For the set of cosmic parameters (pointed at \(x^{*}\)), the \(1\sigma\) and \(2\sigma\) contours are bounded respectively by \(\chi^{2}_{tot}(x)=\chi^{2}_{tot}(x^{*})+2.3\) and \(\chi^{2}_{tot}(x)=\chi^{2}_{tot}(x^{*})+6.17\). We get best-fit parameter values for the derived model by minimizing the \(\chi^{2}\) statistic. Figure 1: Confidence contour plot for the joint data set of OHD, Pantheon and BAO. Figure 1 displays the statistical results in confidence contours with \(1\sigma\) and \(2\sigma\) limits for the proposed model utilizing the joint dataset of SN, BAO/CMB, and OHD. The best plausible values of parameters estimated from the joint dataset are summarised in Table 1. The comparative behavior of the suggested model with the existing standard models and relevant datasets is plotted and presented in Fig.2. The Hubble rate (\(H(z)/(1+z)\)) as a function of \(z\) is plotted for the purpose.
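Returning to the joint statistic of Eq. (33), the following sketch outlines how the individual \(\chi^{2}\) terms and the total log-likelihood can be assembled for an MCMC analysis; it is illustrative only, and the data arrays are placeholders rather than the actual OHD/Pantheon/BAO compilations:

```python
# Illustrative sketch of the joint statistic of Eq. (33); the data arrays are
# placeholders, not the actual OHD/Pantheon/BAO compilations.
import numpy as np
from scipy.integrate import quad

c = 299792.458  # speed of light, km/s

def H_model(z, theta):
    H0, Om0, Op0, alpha, n, k = theta
    Ok0 = 1.0 - Om0 - Op0
    return H0 * np.sqrt(Om0*(1+z)**3 + Op0*(1+z)**alpha*np.exp(n*z) + Ok0*(1+z)**(2*k))

def chi2_ohd(theta, z_obs, H_obs, sigma):
    return np.sum(((H_model(z_obs, theta) - H_obs) / sigma)**2)          # Eq. (32)

def mu_model(z, theta):
    dc = np.array([quad(lambda x: c / H_model(x, theta), 0.0, zi)[0] for zi in z])
    return 5.0 * np.log10((1 + z) * dc) + 25.0                           # mu(z), d_L in Mpc

def chi2_sn(theta, z_sn, mu_obs, cov_inv, M=0.0):
    r = mu_obs - mu_model(z_sn, theta) - M
    return r @ cov_inv @ r                                               # Eq. (30)

def log_like(theta, data):
    chi2 = chi2_sn(theta, *data['sn']) + chi2_ohd(theta, *data['ohd'])
    # + a BAO/CMB term built from the X vector and C^{-1} above, Eq. (31)
    return -0.5 * chi2   # can be fed to any MCMC sampler to produce contours like Fig. 1
```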
In Figure 2(a), the solid red line depicts the behaviour of our suggested model utilizing the values of parameters obtained from the joint dataset, the 57 data points of OHD are represented by error bars of blue colour, and the dashed black line denotes the findings of the traditional \(\Lambda\)CDM model. Similarly, Figure 2(b) plots the distance modulus \(\mu(z)\) as a function of \(z\). For an observed supernova, the distance modulus is the difference between the apparent and absolute magnitudes and is determined by the luminosity distance, which can be expressed as \(d_{L}=a_{0}(1+z)r=(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime})}\). This distance parameter, \(\mu(z)\), is numerically equal to \(25+5\log_{10}(d_{L}/\mathrm{Mpc})\). Here, in Figure 2(b), the blue error bars show the points of the SN data that were taken into consideration as discussed earlier, the dashed black line shows the output of the traditional \(\Lambda\)CDM model, and the solid red line illustrates the characteristics of our derived model for the joint dataset. In our analysis, for the joint dataset, the estimated values of parameters are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\). These findings are consistent with the most recent results [60, 61, 63, 64, 65]. In relation to the Hubble parameter, the deceleration parameter (DP) can be realized as \(q=-\frac{\ddot{a}}{H^{2}a}=-1+\frac{1}{H}(1+z)\frac{d}{dz}H(z)\). Hence, for the proposed model the deceleration parameter can be derived as: \[q = -1+\frac{1}{H(z)}(1+z)\frac{dH}{dz} \tag{34}\] \[= \frac{2(k-1)(z+1)^{2k}\Omega_{k0}+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}(\alpha+nz+n-2)}{2\left((z+1)^{2k}\Omega_{k0}+(z+1)^{3}\Omega_{m0}+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\right)}\] The major evolutionary features of the expanding universe can be analyzed and explained by the study of the Hubble parameter in concert with the DP. The study of these parameters can accurately predict various cosmic features of the evolutionary universe, like its age and phase transition (deceleration to acceleration or vice versa). The inflationary behavior of the model can be judged from the sign of the DP \(q\). While a negative sign of \(q\) depicts the universe's current accelerated expansion, a positive sign of \(q\) corresponds to the universe's decelerated expansion. For a combined sample of SNIa, OHD, and BAO/CMB datasets, the trajectory of the deceleration parameter is plotted in Fig.4. The earlier universe is thought to evolve with deceleration dynamics due to matter domination, as clearly seen in Fig.4. Here, we see that the Universe in the derived model is currently going through an accelerated scenario of expansion. Additionally, for the derived model a signature flipping is observed at \(z_{t}=0.756^{+0.005}_{-0.015}\) and the model evolves in the acceleration phase in late time. At \(z=0\), the present value of the DP is obtained as \[q_{0} = \frac{2(k-1)\Omega_{k0}+\Omega_{m0}+\Omega_{\phi 0}(\alpha+n-2)}{2\left(\Omega_{k0}+\Omega_{m0}+\Omega_{\phi 0}\right)} \tag{35}\] From the above analysis, the present value of the DP (\(q_{0}\)) is observed to be \(-0.625^{+0.067}_{-0.085}\), which agrees well with recent findings. It is interesting to notice that \(\frac{dH}{dt}\big{|}_{t=t_{0}}=0\) for \(q_{0}=-1\), which predicts a rapid expansion of the universe and the largest value of the Hubble parameter. Therefore, the derived model can describe the dynamics of the late-time evolution of the observed Universe.
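A compact numerical sketch (same best-fit assumptions as above, not the authors' code) of Eq. (34), including a root-finding step for the transition redshift, is given below:

```python
# Numerical sketch: deceleration parameter of Eq. (34) and the transition
# redshift z_t where q changes sign, at the rounded joint best-fit values.
import numpy as np
from scipy.optimize import brentq

Om0, Op0, alpha, n, k = 0.2625, 0.676, -0.22, 0.096, 0.38
Ob0 = 1.0 - Om0 - Op0

def q(z):
    num = 2*(k - 1)*Ob0*(1+z)**(2*k) + Om0*(1+z)**3 \
          + Op0*np.exp(n*z)*(1+z)**alpha*(alpha + n*z + n - 2)
    den = 2*(Ob0*(1+z)**(2*k) + Om0*(1+z)**3 + Op0*np.exp(n*z)*(1+z)**alpha)
    return num / den

print("q0  =", q(0.0))               # about -0.62 at the rounded best-fit point
print("z_t =", brentq(q, 0.1, 2.0))  # close to the quoted z_t ~ 0.756
```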
The derived outcomes of our theoretical model are nicely matched with the recent experimental results [1, 19, 45, 52, 58, 59, 61, 62, 65]. Figure 3: Deceleration parameter \(q\) as a function of \(z\) for OHD + Pantheon + BAO data sets. The proposed model describes an expanding universe with a decrease in the densities of the scalar field and matter, as shown in Fig. 3. The density of the scalar field approaches its least value at late times while the matter density tends to zero. Energy conservation in GR accounts for the drop in the densities of the scalar field and matter with the expansion of the universe. The future evolution of the Universe will be significantly impacted by the assumption that in the late cosmos the density of matter becomes zero. The phenomenon known as "heat death" is when the universe grows progressively cold and dark, with no matter left to create new galaxies or stars. The second law of thermodynamics, which stipulates that disorder, or entropy, continuously increases over time, leads to this scenario. As the scalar field tends to have a smaller value of DE density in late time, the Universe can also keep expanding at an accelerating rate in the future. The scenario in which all the matter, including stars and galaxies, is pulled apart and the expansion of the universe grows very fast is recognized as the "big rip" scenario. In the same direction, the EoS parameter is another useful tool, which explains the evolutionary dynamics of the cosmos in terms of its rate of expansion. The EoS parameter (\(\omega\)) can be expressed as a relation between the cosmic fluid's energy density and pressure in the form \(p=\omega\rho\). Depending on the nature of the pressure, different values of the EoS parameter characterize different cosmic components. Dark matter is an example of non-relativistic matter, for which \(\omega=0\), while for relativistic matter like radiation \(\omega=1/3\). The accelerated or decelerated expanding nature of the cosmos can be characterized by distinct values of \(\omega\). The scenario of accelerated expansion of the cosmos can be categorized into different conceivable DE scenarios, which include (i) the quintessence scenario (\(-1<\omega<-1/3\)), (ii) the cosmological constant (\(\omega=-1\)), and (iii) the phantom scenario (\(\omega<-1\)). We have focused on both the effective EoS parameter and the EoS parameter of the scalar field in our model. According to GR, the only prerequisite for inflation in the cosmos is a component that endows the cosmos with repulsive energy and a positive jerk. In our investigation, we have validated the quintom behavior of the model by establishing the current values of the scalar field EoS parameter. The current value of the scalar field EoS parameter is estimated as \(\omega_{\phi 0}=-1.042\) for the best-constrained values of model parameters for a combined sample of datasets. This conclusion supports prior research in the field and lends substantial support to the quintom behavior of DE [66, 67, 68, 69, 45]. Figure 4: (a) Energy density, (b) EoS. Additionally, we have displayed the reconstructed evolution history of the effective EoS parameter \(\omega_{eff}\) for this model using the combined sample of SN+OHD+BAO/CMB datasets in Figure 4. From the figure, it has been noticed that the model does not suffer any kind of 'future singularity' because at low redshift \(\omega_{eff}\) attains a negative value (\(\omega_{eff}<-1\)) while at high \(z\) it approaches zero.
Thus, from the trajectory of \(\omega_{eff}\), it has been detected that the model proceeds in the quintessence era during the evolution of the cosmos and approaches a phantom scenario in late time. Hence, the profile of \(\omega_{eff}\) depicts a quintom-like behavior of the cosmos. Due to the dominance of non-relativistic matter like baryonic and dark matter in the early universe, the value of the scalar field's density parameter is low whereas the value of the matter density parameter is high, i.e., \(\Omega_{m}>\Omega_{\phi}\), which renders a strong physical background for the decelerated scenario of the early universe. But, with the evolutionary growth of the universe, the matter density parameter decreases due to the volumetric increase of the universe and tends to zero in late time, whereas the density parameter of the scalar field becomes dominant in late time and leads to the universe's accelerated expansion. The best-estimated values of the density parameters of the proposed model for the combined sample of observational datasets (SN, OHD, and BAO) are found as \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), and \(\Omega_{k0}=0.0615\pm 0.0404\). These outcomes are nicely matched with the findings of the Planck measurements [70, 34]. The total density parameter for the proposed model tends to unity in late time, i.e., at the current era. Thus, in the present study, a scalar field is taken as a substitute for DE to describe the accelerated expansion of the cosmos. The development of the scalar field's kinetic and potential energy is also depicted in Figures 8 and 9. From the graphical representation of the potential and kinetic energies, we observed that both are positive and decreasing, and the scalar field transits to a low energy scenario from a high energy era over time. Figure 5: Deceleration parameter \(q\) as a function of \(z\) for OHD + Pantheon + BAO data sets. The energy associated with a scalar field, which has a single value at each space point, is described by the theory of the scalar potential in physics [22, 23, 24]. The nature of the scalar field and its potential is highly influenced by the specific physical system taken into consideration. Obviously, a positive scalar potential is associated with stable structures in physics because negative energy can result in non-physical or unstable solutions. For the proposed model, the positive behavior of the scalar potential during the entire evolution can be seen in the figure. The nature of the kinetic energy is positive and decreasing in the evolution of the universe, as depicted in the figure. Thus, the derived model shows a quintessence-like behavior in which a massless scalar field \(\phi\) can be assumed with a potential \(V(\phi)\) that mimics the gravitational field and efficiently describes the current inflation of the universe [24, 25, 26]. **Statefinders:** Cosmological parameters like the Hubble and deceleration parameters are, in combination, sufficient to depict the evolutionary dynamics of the universe. The scale factor \(a\), its first order derivative (\(\dot{a}\)), and second order derivative (\(\ddot{a}\)) are used to describe both of these parameters. The accuracy characteristics of the proposed theoretical models are suppressed as a result of this dependency, which causes all the models to converge around the same value of \(q\) and other revealing parameters. Figure 6: (a) KE, (b) PE. Figure 7: (a) Trajectory in the \((r-s)\) plane, (b) trajectory in the \((r-q)\) plane.
Important information about the accuracy of theoretical model predictions is lost in the process. Two new parameters, the statefinders \(r, s\), are introduced to discriminate among various dark energy cosmic models. This pair of statefinders helps enhance the accuracy of model predictions by identifying the evolutionary trajectory in the \(r-s\) plane. Assuming various forms of dark energies, as mentioned in the literature [71, 72, 73], it is possible to distinguish between the suggested cosmological model and the \(\Lambda\)CDM model from the \((r-s)\) plot [71, 74]. In terms of \(q\) and \(z\), the parameters \(r\) and \(s\) for the currently suggested model can be elaborated as follows: \[r = q(2q+1)+(1+z)\frac{dq}{dz} \tag{36}\] \[= \left[2(\Omega_{m0}+\Omega_{\phi 0}-1)(2k^{2}-3k+1)(z+1)^{2k}-2(1+z)^{3}\Omega_{m0}+\Omega_{\phi 0}\bigg{(}2\right.\] \[- \left.e^{nz}(z+1)^{\alpha}\{\big{(}(\alpha-1)+n(z+1)\big{)}^{2}-(\alpha-1)\}\bigg{)}\right]\] \[/ 2\bigg{[}(\Omega_{\phi 0}+\Omega_{m0}-1)(z+1)^{2k}-\Omega_{\phi 0}e^{nz}(z+1)^{\alpha}-(z+1)^{3}\Omega_{m0}\bigg{]}\] \[s = \frac{r-1}{3(q-\frac{1}{2})} \tag{37}\] \[= \left[2k(3-2k)(z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\bigg{(}\{n(z+1)+(\alpha-1)\}^{2}-(\alpha+1)\bigg{)}\right]\] \[/ 3\bigg{[}(3-2k)(z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)+e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\{\alpha-3+n(z+1)\}\bigg{]}\] Figure 7(a) depicts the suggested model's evolutionary behavior in the \(r-s\) plane, utilizing the derived expressions for \(r\) and \(s\). For the suggested model, the present values of \((r,s)\) are calculated as (1.09673, -0.028773) by taking the combined sample of observational datasets into account. Thus, the given model starts from the matter-dominated era (decelerating scenario), crosses the \(\Lambda\)CDM point \((r=1,s=0)\), enters the quintessence era \((r<1,s>0)\), and finally approaches the Chaplygin gas model \((r>1,s<0)\) at late times. Therefore, the suggested model behaves like the standard \(\Lambda\)CDM in the present scenario. Using the best plausible values of parameters estimated from a combined sample of observational datasets, we have plotted the \((q,r)\) trajectory in Figure 7(b) for the proposed model. In the figure, the horizontal red line \((r=1)\) indicates the \(\Lambda\)CDM line, which divides the evolutionary plane into two regions, namely the Chaplygin gas \((r>1)\) and quintessence DE \((r<1)\) regions. The \((q,r)\) trajectory for our proposed model starts from SCDM (0.5, 1) and shows a Chaplygin gas-like behavior at late times. The model exhibits a flipping from a deceleration era to an acceleration scenario, as depicted by the trajectory in the \((q-r)\) plane. **Jerk Parameter** Another diagnostic that is widely utilized in astrophysical studies is the jerk parameter \((j)\). The basic idea behind the jerk parameter is the cosmic jolt (or jerk) that drives the transition of the universe from the decelerating era to an accelerating scenario. A physical jerk is the pace at which acceleration changes in relation to time. In cosmology, it is derived from the third-order term of the Taylor expansion of the scale factor about \(a_{0}\). This parameter offers us an additional edge in identifying kinematically degenerate cosmic models [75]. It provides greater accuracy in understanding the expansion of the cosmos in comparison to the Hubble parameter because of the involvement of the scale factor's third-order derivative.
It is possible to define \(j\) in the suggested model as [76, 77] \[j(z)=(2q+1)q+(z+1)\frac{dq}{dz} \tag{38}\] From Eqs. (34) and (38), the expression of the jerk diagnostic is developed as \[j(z) = \left[2\left(2k^{2}-3k+1\right)(z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)-2(z+1)^{3}\Omega_{m0}\right. \tag{39}\] \[- \left.e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\bigg{(}((\alpha-1)+n(z+1))^{2}-(\alpha-1)\bigg{)}\right]\] \[/ 2\bigg{[}\left((z+1)^{2k}\left(\Omega_{m0}+\Omega_{\phi 0}-1\right)-(z+1)^{3}\Omega_{m0}-e^{nz}(z+1)^{\alpha}\Omega_{\phi 0}\right)\bigg{]}\] We have presented the graphical behavior of the jerk parameter \(j\) in Figure 8, using the best plausible values of the free parameters estimated from the combined sample of the Pantheon, BAO, and OHD datasets. The positive nature of the jerk parameter during the entire evolution indicates a smooth transition of the cosmos into an accelerated phase (see Fig. 8). The suggested model reflects the characteristics of the \(\Lambda\)CDM model of the universe because, in our study, the jerk parameter's current value is found to be \(j_{0}=1.11^{+0.23}_{-0.16}\), which is \(\approx 1\). It can also be observed from Figure 8 that our model acts like the standard \(\Lambda\)CDM model at early times and represents another DE model that is distinct from \(\Lambda\)CDM at late times because \(j_{0}\neq 1\). Figure 8: Jerk parameter

## 5 Concluding remarks
In the current study, we investigated a scalar field cosmological model with Lyra's geometry to explain the current cosmic expansion in a flat FRW universe. We assumed a variable displacement vector as a component of Lyra's geometry. In the context of the conventional theory of gravity, we suggest a suitable parameterization of the scalar field's dark energy density in the hybrid function of redshift \(z\), confirming the essential transition behavior of the universe. The main highlights are summarized as follows:
* We present constraints on model parameters using the most recent observational data sets from OHD, BAO/CMB, and Pantheon and taking Markov Chain Monte Carlo (MCMC) analysis into account. For the proposed model, the best estimated values of parameters for the combined dataset (OHD, BAO/CMB, and Pantheon) are \(H_{0}=71.15\pm 0.26\) km/s/Mpc, \(\Omega_{m0}=0.2625\pm 0.0024\), \(\Omega_{\phi 0}=0.676\pm 0.038\), \(\alpha=-0.22\pm 0.13\), \(n=0.096\pm 0.079\), and \(k=0.38\pm 0.32\).
* The model exhibits a flipping nature; the redshift transition occurs at \(z_{t}=0.756^{+0.005}_{-0.015}\), and the current value of the deceleration parameter for the proposed model is calculated as \(q_{0}=-0.625^{+0.067}_{-0.085}\) for the combined dataset.
* In our investigation, we have validated the quintom behavior of the model by establishing the current values of the scalar field EoS parameter. The current value of the scalar field EoS parameter is estimated as \(\omega_{\phi 0}=-1.042^{+0.068}_{-0.069}\) for the combined datasets. From the trajectory of \(\omega_{eff}\), it has been detected that the model stays in the quintessence era during the evolution of the cosmos and approaches a phantom scenario in late time. Hence, the profile of \(\omega_{eff}\) depicts a quintom-like behavior of the cosmos.
* The total density parameter for the proposed model tends to unity in late time, i.e., at the current era. Thus, in the present study, a scalar field is taken as a substitute for DE to describe the accelerated expansion of the cosmos.
* The nature of kinetic energy is positive and decreasing in the evolution of the universe as depicted in the figure. Thus, the derived model shows a quintessence-like behavior in which massless scalar field \(\phi\) can be assumed with a potential \(V(\phi)\) that mimics the gravitational field and efficiently describes the current inflation of the universe [24, 25, 26]. * The given model starts with the matter-dominated era (decelerating scenario) cross \(\Lambda\)CDM (\(r=1,s=0\)) enter quintessence era (\(r<1,s>0\)) and finally approaches toward Chaplygin gas model (\(r>1,s<0\)) in late time. Therefore, the suggested model behaves like the standard \(\Lambda\)CDM in the present scenario. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \hline Parameters & \(q_{0}\) & \(z_{t}\) & \(\omega_{\phi 0}\) & \(j_{0}\) & \(\omega_{eff0}\) \\ \hline SN+BAO/CMB+OHD & \(-0.625^{+0.067}_{-0.085}\) & \(0.756^{+0.005}_{-0.015}\) & \(-1.042^{+0.068}_{-0.069}\) & \(1.11^{+0.043}_{-0.040}\) & \(-0.750^{+0.045}_{-0.057}\) \\ \hline \end{tabular} \end{table} Table 2: The numerical findings of derived cosmological model for joint dataset * The suggested model reflects the characteristics of the \(\Lambda\)CDM model of the universe because, in our study, the jerk parameter's current value is found to be \(j_{0}=1.11^{+0.23}_{-0.16}\) which is \(\approx 1\). It can be also observed from Figure, that our model acts like the standard \(\Lambda\)CDM model in the early time and represents another DE model that is distinct from \(\Lambda\)CDM in late time because \(j_{0}\neq 1\). Hence, the current study describes a model of a transitioning universe using the scalar field as a substitute for dark energy. Recent findings support the outcomes of the proposed model.
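As a compact cross-check of the kinematic diagnostics summarized above (a sketch under the stated best-fit values, not the authors' code), \(q\), \(r\), \(s\), and \(j\) can be evaluated directly from \(H(z)\) by finite differences:

```python
# Cross-check of the diagnostics summarized above: q, r (identical in form to j)
# and s obtained numerically from E(z) = H(z)/H0 at the joint best-fit values.
import numpy as np

Om0, Op0, alpha, n, k = 0.2625, 0.676, -0.22, 0.096, 0.38
Ob0 = 1.0 - Om0 - Op0

def E(z):
    return np.sqrt(Om0*(1+z)**3 + Op0*(1+z)**alpha*np.exp(n*z) + Ob0*(1+z)**(2*k))

def q(z, h=1e-5):
    return -1.0 + (1 + z)*(E(z + h) - E(z - h))/(2*h)/E(z)

def diagnostics(z, h=1e-5):
    qz = q(z)
    dq = (q(z + h) - q(z - h))/(2*h)
    r = qz*(2*qz + 1) + (1 + z)*dq          # Eq. (36); same functional form as Eq. (38)
    s = (r - 1.0)/(3.0*(qz - 0.5))          # Eq. (37)
    return qz, r, s

print("z = 0:", diagnostics(0.0))           # compare with q0, (r0, s0), j0 quoted above
```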
In the current study, we investigate a scalar field cosmological model in Lyra's geometry to explain the present cosmic expansion in a homogeneous and isotropic flat FRW universe. In Einstein's field equations, a variable displacement vector is assumed as an element of Lyra's geometry. Within the framework of the conventional theory of gravity, we propose a suitable parameterization of the scalar field's dark energy density as a hybrid function of redshift $z$, clearly showing the transition of the universe from a decelerating era to the present accelerated scenario. The constraints on the model parameters are obtained using the most recent observational data sets (OHD, BAO/CMB, and Pantheon). For this model, the best estimated values of the parameters for the combined OHD, BAO/CMB, and Pantheon dataset are $ H_0 = 7
2305.19615
The non-Hermitian landscape of autoionization
We report on the existence of exceptional points (EPs) in single-resonance autoionization and provide analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We additionally propose a reliable method for the experimental determination of EPs, based solely on information about their ionization probability as a function of the system parameters. The links between EPs, the maxima of the asymmetric profile and the effective decay rate of the ground state are investigated in detail. Quantitative numerical examples pertaining to the doubly excited $2s2p({}^1P)$ state of Helium confirm the validity of our formulation and results. In addition to unveiling hidden aspects of autoionization, our treatment and results provide a benchmark for the exploration of EPs and their properties in a variety of materials exhibiting Fano profiles with a broad perspective of possible applications.
G. Mouloudakis, P. Lambropoulos
2023-05-31T07:31:05
http://arxiv.org/abs/2305.19615v2
# The non-Hermitian landscape of autoionization ###### Abstract We report on the existence of exceptional points (EPs) in single-resonance autoionization and provide analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We additionally propose a reliable method for the experimental determination of EPs, based solely on information about their ionization probability as a function of the system parameters. The links between EPs, the maxima of the asymmetric profile and the effective decay rate of the ground state are investigated in detail. Quantitative numerical examples pertaining to the doubly excited \(2s2p(^{1}P)\) state of Helium confirm the validity of our formulation and results. _Introduction \(-\)_ Autoionization (AI) belongs to a broad class of quantum phenomena involving discrete states (resonances) embedded in continua into which they decay. Examples, among others, are the Breit-Wigner resonance in nuclear physics [1], in particle physics [2; 3], in photonics [4] and of course in atoms and molecules [5; 6], where the continuum is ionization or even dissociation; hence the term autoionization. The literature on autoionization spans a vast range of topics, including the time-dependent formation of the autoionization profile [7; 8; 9; 10], strong driving of autoionizing resonances (ARs) [11; 12; 13; 14; 15; 16; 17], the dynamics of doubly-resonant autoionization [18; 19], and the effects of phase [20; 21] and statistical fluctuations [22; 23; 24; 25] of the laser field on the process. ARs can be excited by radiation absorption or collisions and are infinite in number, with the spacing between them decreasing with increasing excitation energy. Yet, there are cases in which one or more resonances are separated in energy by significantly more than their width, qualifying as isolated resonances, with the doubly excited \(2s2p(^{1}P)\) state of Helium being the prototype of an isolated AR, which continues revealing novel aspects, as attested by the ongoing streams of papers to this day [13; 14; 15; 16; 17]. It is in addition a perfect example of an open quantum system, with its dynamics governed by a non-Hermitian effective Hamiltonian. Surprisingly, the natural connection of AI to non-Hermitian physics, a field in the process of explosive activity, has escaped attention. Non-Hermitian physics and its connection to parity-time (\(\mathcal{PT}\)) symmetry, was introduced as an axiomatic theory in the seminal papers of C. Bender et al. [26; 27; 28; 29; 30]. Soon thereafter, it was pointed out that effective Hamiltonians describing the dynamics of open quantum systems, inevitably are non-Hermitian [31]. The boundary between the unbroken and broken \(\mathcal{PT}\) symmetry of such Hamiltonians [32; 33] is marked by the presence of exceptional points (EPs) [34; 35; 36; 37], i.e., points in the parameter space where two or more eigenvalues coalesce, while their corresponding eigenvectors become parallel. Tracking the positions of these points in the parameter space of an open quantum system is crucial, as they provide insight into the range of parameters where the system undergoes abrupt phase transitions [38] and enhanced sensitivity [39; 40; 41; 42; 43]. 
Several approaches for understanding phenomena related to quasi-bound states embedded in continua using complex spectral analysis have been presented in the past, applied to various systems such as two-channel quantum wires [44; 45], semi-infinite superlattices with embedded impurities [46], discrete states coupled to continua containing Van Hove singularities at their threshold [47], as well as systems involving laser-induced population trapping via strong coupling of ARs in atoms [48]. In this paper, we employ the powerful analysis of EPs in order to unveil hidden aspects of ARs. Focusing on the conditions for encountering EPs in single-resonance autoionization, we derive analytical expressions revealing their positions in parameter space. Moreover, we show how the amount of ionization of the atom, which can be determined experimentally, contains information about the positions of EPs, documented by numerical examples for the \(2s2p(^{1}P)\) state of Helium. Finally, we demonstrate the connection between the presence of EPs, the maxima of the typical asymmetric profile of autoionization and the effective decay rate of the atomic ground state. _Theory \(-\)_ We begin by considering an atom whose ground state \(\left|g\right\rangle\) is coupled to an isolated autoionizing resonance \(\left|a\right\rangle\) through a linearly polarized field with frequency \(\omega\), as well as a continuum of states denoted by \(\left|E\right\rangle\), both coupled to \(\left|g\right\rangle\) and \(\left|a\right\rangle\). The wavefunction of the system at times \(t\geq 0\) is given by: \[\left|\psi(t)\right\rangle=c_{g}(t)\left|g\right\rangle+c_{a}(t)\left|a\right\rangle +\int dEc_{E}(t)\left|E\right\rangle. \tag{1}\] Introducing the transformations \(\tilde{c}_{g}(t)=c_{g}(t)e^{i\omega_{g}t}\), \(\tilde{c}_{a}(t)=c_{a}(t)e^{i(\omega_{g}+\omega)t}\) and \(\tilde{c}_{E}(t)=c_{E}(t)e^{i(\omega_{g}+\omega)t}\) in the time-dependent Schrodinger equation, eliminating as usual the continuum adiabatically and adopting the rotating-wave approximation, as detailed in the Supplementary material (SM), we show that the dynamics of the system under the above conditions, are described by the effective Hamiltonian (\(\hbar=1\)): \[\hat{\mathcal{H}}_{\text{eff}}\equiv\begin{bmatrix}S_{g}-i\frac{\gamma}{2}&\tilde{ \Omega}\left(1-\frac{i}{q}\right)\\ \tilde{\Omega}\left(1-\frac{i}{q}\right)&-\Delta-i\frac{\Gamma}{2}\end{bmatrix}, \tag{2}\] where \(S_{g}\) and \(\gamma\) are, respectively, the light-induced shift and the direct into the continuum ionization rate of the ground state, \(\Gamma\) is the autoionization rate of quasi-bound state \(|a\rangle\), \(\tilde{\Omega}\) the generalized Rabi frequency of the \(|g\rangle\longleftrightarrow|a\rangle\) transition (see SM), and \(q\) the Fano asymmetry parameter [49], expressing the relative strength of the direct transition from \(|g\rangle\) to the continuum compared to the transition to \(|a\rangle\). \(\Delta\equiv\omega-\left(\omega_{a}-F_{a}-\omega_{g}\right)\) is the detuning between the frequency of the driving field and the frequency of the \(|g\rangle\longleftrightarrow|a\rangle\) transition, including the self-energy shift \(F_{a}\) of \(|a\rangle\). Note that the asymmetry parameter is related to the parameters of \(\hat{\mathcal{H}}_{\text{eff}}\) through the strict equation \(q^{2}=4\Omega^{2}/(\gamma\Gamma)\) (See SM). The light-induced shift of the ground state is hereafter neglected as it is of no relevance to our study. 
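To make the role of the parameters concrete, a minimal numerical sketch (our own illustration, with \(\Gamma\) set to 1, \(S_{g}\) neglected, and \(\gamma\) eliminated through \(\gamma=4\tilde{\Omega}^{2}/(q^{2}\Gamma)\)) builds \(\hat{\mathcal{H}}_{\text{eff}}\) of Eq. (2) and computes its eigenvalues:

```python
# Minimal numerical sketch (not from the paper): build H_eff of Eq. (2) with S_g
# neglected, gamma fixed by gamma = 4*W**2/(q**2*Gamma), and compute its eigenvalues.
import numpy as np

def H_eff(W, Delta, q, Gamma=1.0):
    gamma = 4.0 * W**2 / (q**2 * Gamma)          # direct ionization rate of |g>
    return np.array([[-0.5j * gamma,            W * (1 - 1j / q)],
                     [W * (1 - 1j / q), -Delta - 0.5j * Gamma]], dtype=complex)

def eigvals(W, Delta, q, Gamma=1.0):
    return np.linalg.eigvals(H_eff(W, Delta, q, Gamma))

# Example: scan the detuning at fixed coupling for the Helium 2s2p resonance, q = -2.79
q = -2.79
for Delta in (-0.5, -0.1738, 0.0, 0.5):          # in units of Gamma
    print("Delta =", Delta, " eigenvalues:", np.round(eigvals(0.2424, Delta, q), 4))
```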
A schematic representation of our system is depicted in Fig. 1. The effective Hamiltonian of Eq. (2) is obviously non-Hermitian, not only due to the presence of the diagonal decay terms in the energies of the ground state and \(|a\rangle\), but also due to the presence of non-zero imaginary parts in the off-diagonal terms reflecting the driving of the \(|g\rangle\longleftrightarrow|a\rangle\) transition. Diagonalization of \(\hat{\mathcal{H}}_{\text{eff}}\) leads to the following set of eigenvalues: \[\begin{split}\lambda_{1,2}&=-\frac{1}{2}\left[\Delta+i\frac{(\gamma+\Gamma)}{2}\right]\\ &\pm\frac{1}{4}\sqrt{16\left(1-\frac{i}{q}\right)^{2}\tilde{\Omega}^{2}-\left(\gamma-\Gamma+2i\Delta\right)^{2}}.\end{split} \tag{3}\] At first sight, owing to the presence of imaginary parts in the radicands, the spectra of \(\hat{\mathcal{H}}_{\text{eff}}\) appear not to exhibit EPs. However, if the detuning is set to \(\Delta=\Delta^{s}\equiv 2q\gamma\Gamma/(\Gamma-\gamma)\), \(\gamma\neq\Gamma\), and we eliminate \(\gamma\) via the relation \(\gamma=4\Omega^{2}/(q^{2}\Gamma)\), we obtain: \[\lambda_{1,2}=\frac{4\tilde{\Omega}^{2}q\Gamma}{4\tilde{\Omega}^{2}-q^{2}\Gamma^{2}}-i\left(\frac{\tilde{\Omega}^{2}}{q^{2}\Gamma}+\Gamma/4\right)\pm\frac{1}{4q|q|\Gamma}\sqrt{-\left(\frac{4\tilde{\Omega}^{2}+q^{2}\Gamma^{2}}{4\tilde{\Omega}^{2}-q^{2}\Gamma^{2}}\right)^{2}\left[16\tilde{\Omega}^{4}-8\tilde{\Omega}^{2}\Gamma^{2}q^{2}\left(1+2q^{2}\right)+q^{4}\Gamma^{4}\right]},\quad\tilde{\Omega}\neq\frac{|q|\Gamma}{2} \tag{4}\] Observe now that choosing \(\Delta=\Delta^{s}\) results in a set of eigenvalues with real radicands. Note that Eq. (4) holds for \(\tilde{\Omega}\neq|q|\Gamma/2\), which is equivalent to \(\gamma\neq\Gamma\). For \(\tilde{\Omega}=|q|\Gamma/2\), i.e. \(\gamma=\Gamma\), the radicand is complex for every value of \(\Delta\). The details of the physical significance of \(\Delta^{s}\) for our system will become clear later. We should also note that the value of \(\Delta^{s}\) resulting in real radicands depends on the intensity of the driving field, which in turn determines the value of \(\tilde{\Omega}\). The relation between \(\Delta^{s}\) and \(\tilde{\Omega}\) is \(\frac{\Delta^{s}(\tilde{\Omega})}{\Gamma}=\frac{8q\left(\tilde{\Omega}/\Gamma\right)^{2}}{q^{2}-4\left(\tilde{\Omega}/\Gamma\right)^{2}}\), \(\tilde{\Omega}\neq|q|\Gamma/2\), which results upon substitution of \(\gamma=4\Omega^{2}/(q^{2}\Gamma)\) in the expression \(\Delta^{s}\equiv 2q\gamma\Gamma/(\Gamma-\gamma)\), \(\gamma\neq\Gamma\). We are interested in the values of the coupling \(\tilde{\Omega}\) that nullify the radicands of Eq. (4). The radicands become zero when \[16\tilde{\Omega}^{4}-8\tilde{\Omega}^{2}\Gamma^{2}q^{2}\left(1+2q^{2}\right)+q^{4}\Gamma^{4}=0, \tag{5}\] and the positive roots of the above equation are \[\frac{\tilde{\Omega}_{\pm}}{\Gamma}=\frac{1}{2}\left(|q|\sqrt{1+q^{2}}\pm q^{2}\right). \tag{6}\] It is easy to verify that for both \(\tilde{\Omega}=\tilde{\Omega}_{+}\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}\), given that \(\Delta=\Delta^{s}\), the eigenvectors of \(\hat{\mathcal{H}}_{\text{eff}}\) coalesce, respectively, to the states \(\left|\psi_{+}\right\rangle=\left(-i\left|g\right\rangle+|a\rangle\right)/\sqrt{2}\) and \(\left|\psi_{-}\right\rangle=\left(i\left|g\right\rangle+|a\rangle\right)/\sqrt{2}\).
Therefore the points \(\left(\tilde{\Omega}_{\pm},\Delta_{\pm}^{s}\right)\) in parameter space, where \(\Delta_{\pm}^{s}\equiv\Delta^{s}(\tilde{\Omega}_{\pm})\), are EPs of \(\hat{\mathcal{H}}_{\text{eff}}\). _Results & Discussion -_ Interestingly, the EPs of the system, measured in units of the autoionization width \(\Gamma\), depend solely on the asymmetry parameter \(q\), and there are two for any given value of the latter (Fig. 2). Note that the value of \(q\) for a given AR is fixed, as it depends solely upon the corresponding matrix elements of the transitions involved in the process. In particular, for the process involving the driving of the \(1s^{2}(^{1}S)\longleftrightarrow 2s2p(^{1}P)\) transition in Helium and the associated autoionization of the \(2s2p(^{1}P)\) AR, it is well established that \(q\approx-2.79\)[23; 50]. Focusing hereafter on that isolated AR, we note that for \(q=-2.79\), according to Eq. (6) and the relation between \(\Delta^{s}\) and \(\tilde{\Omega}\), the theory indicates the existence of two EPs at the positions \((\tilde{\Omega}_{-},\Delta^{s}_{-})=(0.2424\Gamma,-0.1738\Gamma)\) and \((\tilde{\Omega}_{+},\Delta^{s}_{+})=(8.0265\Gamma,5.7538\Gamma)\) in parameter space. In Fig. 3 we plot the real and imaginary parts of the eigenvalues as a function of \(\tilde{\Omega}\) and \(\Delta\) for \(q=-2.79\) and indeed confirm the coalescence of the eigenvalues at the above positions in parameter space. As noted above, tuning \(\Delta\) to \(\Delta^{s}\) is essential in order to ensure that the radicands appearing in the expressions of the eigenvalues become real. We can get a glimpse of the physical significance of \(\Delta^{s}\) in the vicinity of an EP by solving the time-dependent Schrodinger equation using the effective Hamiltonian \(\hat{\mathcal{H}}_{\text{eff}}\), and plotting the ionization probability of the atom (\(P(t)=1-\left|c_{g}(t)\right|^{2}-\left|c_{a}(t)\right|^{2}\)) as a function of the detuning for \(\tilde{\Omega}=\tilde{\Omega}_{-}\) (Fig. 4). Note that the ionization probability is calculated at \(t=T\), where \(T\) is the interaction time between the atom and the driving field. As expected, the ionization profile is asymmetric, transforming gradually to a "window" profile for sufficiently large interaction times, a phenomenon labelled "time saturation" in [11], reconfirmed most recently in [15]. Interestingly, the position of the maximum of the asymmetric profile, denoted by \(\Delta_{m}\), which is initially increasing as \(T\) increases, eventually stabilizes at \(\Delta^{s}_{-}\), as shown in the inset of Fig. 4. Therefore, for \(\tilde{\Omega}=\tilde{\Omega}_{-}\), \(\Delta^{s}(\tilde{\Omega}_{-})\equiv\Delta^{s}_{-}\) is the detuning which maximizes the ionization probability (to unity) for sufficiently large interaction times, which for the field intensity considered, translates to \(T\approx 20\Gamma^{-1}\) or larger. Figure 1: Schematic representation of the system under study. The ground state \(|g\rangle\) of an atom that is ionized with a rate \(\gamma\) is coupled to an AR \(|a\rangle\) via a linearly polarized field that drives the \(|g\rangle\longleftrightarrow|a\rangle\) transition with a generalized Rabi frequency \(\tilde{\Omega}\). The frequency of the driving field is detuned by \(\Delta\) from the energy separation of the two states, and the AR decays into the continuum with an autoionization rate \(\Gamma\). 
It is important to note that this occurs only by tuning the parameters of the system to the exceptional point \((\tilde{\Omega}_{-},\Delta^{s}_{-})\). For example, if we choose an intensity such that \(\tilde{\Omega}=0.1\tilde{\Omega}_{-}\), the position of the maximum of the asymmetric profile stabilizes to \(\Delta_{m}\approx-0.195\Gamma\), whereas \(\Delta^{s}(0.1\tilde{\Omega}_{-})=-0.0016\Gamma\). Although in most cases the EPs of a system can be explored theoretically through diagonalization of the relevant effective Hamiltonian, the experimental determination of EPs is most often quite a challenging task, since in general the eigenenergies of a Hamiltonian are not amenable to experiment. Therefore one needs to identify EPs indirectly by studying their footprints on system observables. To that end, we employ a quantity widely used in the context of the Quantum Zeno effect in open quantum systems, namely, the effective decay rate of a state [51], defined as \(\Gamma_{\text{eff}}^{j}(t)\equiv-\frac{1}{t}\ln[P_{j}(t)]\), \(j=g,a\), where \(P_{j}(t)=\left|c_{j}(t)\right|^{2}\) is the population of state \(\left|j\right>\), \(j=g,a\). The effective decay rate provides information about how the couplings between a given state and a set of other states or a continuum modify the time evolution of that state's population. It turns out that the effective decay rate of the ground state, which can be readily determined experimentally, is remarkably sensitive to the EPs of our system, pinpointing their positions in parameter space. In Fig. 5(a) we plot the effective decay rate of the ground state as a function of \(\tilde{\Omega}\) for \(\Delta=\Delta^{s}(\tilde{\Omega})\), which implies setting each time the detuning to a different value, depending on the value of \(\tilde{\Omega}\) considered. Note that the effective decay rate is calculated at an interaction time \(t=T\), which should be sufficiently large for the rate to be no longer modified with further increase of \(T\). For \(q=-2.79\), the effective decay rate is stabilized for \(T\approx 20\Gamma^{-1}\) or larger, which is the same time scale as the one discussed in the results of Fig. 4. At such time scales it is easy to show that the population of \(\left|a\right>\) is practically negligible. Therefore the effective decay rate of the ground state is directly related to the measurable ionization probability \(P(t)\), because \(\Gamma_{\text{eff}}^{g}(t)\equiv-\frac{1}{t}\ln[P_{g}(t)]\cong-\frac{1}{t}\ln[1-P(t)]\). Clearly, the effective decay rate of the ground state provides direct evidence for the positions of the EPs of the system (Fig. 5(a)), in agreement with our theoretical predictions based on diagonalization of \(\hat{\mathcal{H}}_{\text{eff}}\). A short note regarding the experimental detection of the EPs related to the autoionization of the Helium \(2s2p(^{1}P)\) AR is in order at this point. The EP at \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{-},\Delta_{-}^{s})=(0.2424\Gamma,-0.1738\Gamma)\) lies in a parameter region that is well within the current capabilities of synchrotron sources and seeded Free-electron lasers [52; 53] of short wavelength radiation, sufficient intensity and small bandwidth that can excite the AR. However, the EP at \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{+},\Delta_{+}^{s})=(8.0265\Gamma,5.7538\Gamma)\) would require a source of high intensity, as it lies in the strong field regime where \(\tilde{\Omega}>\Gamma\)[11]. 
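For completeness, a minimal propagation sketch (again our own illustration with hypothetical variable names, not the paper's code), assuming the rotating-frame dynamics \(i\,\dot{c}=\hat{\mathcal{H}}_{\text{eff}}c\) with the atom initially in \(|g\rangle\); because \(\hat{\mathcal{H}}_{\text{eff}}\) is time-independent, a single matrix exponential gives the amplitudes exactly, from which \(P(T)\) and \(\Gamma_{\text{eff}}^{g}(T)\) follow at the weak-field EP:

```python
import numpy as np
from scipy.linalg import expm

q, Gamma = -2.79, 1.0
Om = 0.2424 * Gamma                              # ~ Omega_tilde_- (weak-field EP)
gamma = 4 * Om**2 / (q**2 * Gamma)               # direct ionization rate of |g>
Delta = 2 * q * gamma * Gamma / (Gamma - gamma)  # Delta^s for this coupling

Heff = np.array([[-0.5j * gamma, Om * (1 - 1j / q)],
                 [Om * (1 - 1j / q), -Delta - 0.5j * Gamma]])

T = 20.0 / Gamma                                 # interaction time where the rate has saturated
c = expm(-1j * Heff * T) @ np.array([1.0, 0.0])  # amplitudes (c_g, c_a) at t = T
P = 1.0 - abs(c[0])**2 - abs(c[1])**2            # ionization probability P(T)
Gamma_eff_g = -np.log(abs(c[0])**2) / T          # effective decay rate of the ground state
print(P, Gamma_eff_g)
```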
Although the required intensity, which is estimated to be around \(1.3\times 10^{16}\) W/cm\({}^{2}\), is available with current Free-electron laser sources, issues such as intensity fluctuations [54; 55] known to affect the excitation of ARs [22; 23; 24; 25] and large bandwidth need to be addressed. Their interplay with EPs pose interesting followup studies. Finally, in Fig. 5(b) we plot the effective decay rate of the ground state as a function of \(\tilde{\Omega}\) and \(\Delta\) at the vicinity of the EP that lies in the weak field regime. The effective decay rate maxima lie on the \(\Delta=\Delta^{s}(\tilde{\Omega})\) line (curved dashed line) over which the eigenvalues have real radicands. At the tip of this maxima curve we find the weak field EP at the position \((\tilde{\Omega},\Delta)=(\tilde{\Omega}_{-},\Delta_{-}^{s})=(0.2424\Gamma,-0.173 8\Gamma)\) in parameter space. _Concluding Remarks and Outlook \(-\)_ In summary, we have unveiled the existence of EPs in single-resonance autoionization and provided analytical expressions for their positions in parameter space, in terms of the Fano asymmetry parameter. We have further demonstrated the connection between EPs and the maxima of the asymmetric ionization profile, through a numerical study of the \(2s2p(^{1}P)\) resonance in Helium and proposed a reliable method for the observation of EPs, based solely on information about the ionization probability as a function of the parameters of the system, well within the capabilities of current radiation sources. Our results lead to further questions related to the role of pulse shape or field fluctuations in the observation of EPs in autoionization, as well as questions related to the influence of neighboring ARs, beyond the single-resonance autoionization. At the same time, the investigation of potentially impactful effects related to phase changes associated with the encircling of EPs in the parameter space of autoionization, based on the complex topology of the Riemann surfaces in the vicinity of the latter, is a further challenging issue. Overall, our results offer new insights into the interplay between autoionization and non-Hermitian \(\mathcal{PT}\) physics, opening up a novel and potentially fruitful territory for further exploration. _Acknowledgments \(-\)_ GM would like to acknowledge the Hellenic Foundation for Research and Innovation (HFRI) for financially supporting this work under the 3rd Call for HFRI PhD Fellowships (Fellowship Number: 5525). Figure 4: Ionization probability as a function of \(\Delta\) for various interaction times \(T\), \(q=-2.79\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}=0.2424\Gamma\). The vertical dashed line marks the position of the detuning \(\Delta_{-}^{s}=-0.1738\Gamma\). Inset: Position of the peak of the asymmetric profile (\(\Delta_{m}\)) as a function of the interaction time \(T\) (logarithmic scale) for \(q=-2.79\) and \(\tilde{\Omega}=\tilde{\Omega}_{-}\). The horizontal dotted line marks the position of the detuning \(\Delta_{-}^{s}\). We are also grateful to D. Kaltsas for useful discussions concerning this work.
We report on the existence of exceptional points (EPs) in single-resonance autoionization and provide analytical expressions for their positions in parameter space in terms of the Fano asymmetry parameter. We establish a method for detecting EPs experimentally, relying solely on information about the ionization probability. The connection between the EPs, the maxima of the asymmetric profile, and the effective decay rate of the ground state is investigated in detail. Quantitative numerical examples pertaining to the doubly excited $2s2p({}^{1}P)$ state of Helium confirm the validity of our formulation and results. Our treatment reveals hidden aspects of EPs, and our considerations and results can serve as a benchmark for the exploration of a variety of materials exhibiting Fano profiles.
2309.07926
COMPASS: High-Efficiency Deep Image Compression with Arbitrary-scale Spatial Scalability
Recently, neural network (NN)-based image compression studies have been actively conducted and have shown impressive performance in comparison to traditional methods. However, most of the works have focused on non-scalable image compression (single-layer coding), while spatially scalable image compression has drawn less attention although it has many applications. In this paper, we propose a novel NN-based spatially scalable image compression method, called COMPASS, which supports arbitrary-scale spatial scalability. Our proposed COMPASS has a very flexible structure where the number of layers and their respective scale factors can be arbitrarily determined during inference. To reduce the spatial redundancy between adjacent layers for arbitrary scale factors, our COMPASS adopts an inter-layer arbitrary scale prediction method, called LIFF, based on implicit neural representation. We propose a combined RD loss function to effectively train multiple layers. Experimental results show that our COMPASS achieves BD-rate gains of -58.33% and -47.17% at maximum compared to SHVC and the state-of-the-art NN-based spatially scalable image compression method, respectively, for various combinations of scale factors. Our COMPASS also shows comparable or even better coding efficiency than the single-layer coding for various scale factors.
Jongmin Park, Jooyoung Lee, Munchurl Kim
2023-09-11T14:05:18
http://arxiv.org/abs/2309.07926v1
# COMPASS: High-Efficiency Deep Image Compression with Arbitrary-scale Spatial Scalability ###### Abstract Recently, neural network (NN)-based image compression studies have actively been made and has shown impressive performance in comparison to traditional methods. However, most of the works have focused on non-scalable image compression (single-layer coding) while spatially scalable image compression has drawn less attention although it has many applications. In this paper, we propose a novel NN-based spatially scalable image compression method, called COMPASS, which supports arbitrary-scale spatial scalability. Our proposed COMPASS has a very flexible structure where the number of layers and their respective scale factors can be arbitrarily determined during inference. To reduce the spatial redundancy between adjacent layers for arbitrary scale factors, our COMPASS adopts an inter-layer arbitrary scale prediction method, called LIFF, based on implicit neural representation. We propose a combined RD loss function to effectively train multiple layers. Experimental results show that our COMPASS achieves BD-rate gain of -58.33% and -47.17% at maximum compared to SHVC and the state-of-the-art NN-based spatially scalable image compression method, respectively, for various combinations of scale factors. Our COMPASS also shows comparable or even better coding efficiency than the single-layer coding for various scale factors. ## 1 Introduction Recently, image compression has become increasingly important with the growth of multimedia applications. The exceptional performance of neural network (NN)-based methods in computer vision has led to active research on NN-based image compression methods [38, 3, 36, 4, 27, 18, 8, 19, 21, 28, 14, 2, 26], resulting in remarkable improvements in coding efficiency. However, although the same content is often consumed in various versions in multimedia systems, most existing NN-based image compression methods must separately compress an image into multiple bitstreams for their respective versions, thus leading to low coding efficiency. To resolve this issue, there have been a few recent studies [37, 34, 16, 43, 13, 24, 23, 25] on NN-based scalable image compression, where various versions of an image are encoded into a single bitstream in a hierarchical manner with multiple layers. Each layer is in charge of en/decoding one corresponding version of the image, and typically, redundancy between adjacent layers is reduced by a prediction method for higher coding efficiency. The scalable coding methods are divided into two classes: quality scalable codecs for the images of different quality levels and spatially scalable codecs for the images of different sizes. In this paper, we focus on the spatially scalable coding that has not been actively studied compared with the quality scalable coding. Upon our best knowledge, only one previous study [25] deals with the spatially scalable coding in the recent deep NN-based approach. In conventional tool-based scalable coding, SVC [31] and SHVC [6] have been standardized by MPEG [17] for video coding standards, as extensions to H.264/AVC [40] and H.265/HEVC [35], respectively. Despite of significant coding efficiency improvement compared with separate single-layer compression of different versions (simulcast coding), the scalable coding has not yet been widely adopted for real-world applications [6, 32]. 
One reason may be lower coding efficiency of the accumulated bitstream for the larger version compared with the single-layer coding of the same size. The scalable coding often yields lower coding efficiency due to its insufficient redundancy removal capability between the layers. In addition, for the existing NN-based method [25], only one fixed scale factor 2 is used between adjacent layers as shown in Figure 1. This limitation makes it not practical for real-world applications that require a variety of scale combinations. For example, an image of 4,000\(\times\)2,000 size needs to be encoded into SD (720\(\times\)480), HD (1,280\(\times\)720) and FHD (1,920\(\times\)1,080) versions which are not in powers of 2 scales compared to the input size. Therefore, in order to support for the one-source-multiple-use (OSMU) with spatially scalable image compression, it is worthwhile for spatially scalable image compression to support arbitrary scale factors between the different layers. To address the aforementioned issues, we propose a novel NN-based image COMPression network with Arbitrary-scale Spatial Scalability, called COMPASS. Our COMPASS supports spatially scalable image compression that encodes multiple arbitrarily scaled versions of an image into a single bitstream in which each version of the image is encoded with its corresponding layer. Inspired by LIIF [7] and Meta-SR [15], we adopt an inter-layer arbitrary scale prediction method in the COMPASS, called Local Implicit Filter Function (LIFF), based on implicit neural representation that can effectively reduce the redundancy between adjacent layers and also supports arbitrary scale factors. In addition, it should be noted that our COMPASS exploits only one shared prediction/compression module for all the enhancement layers, thus it effectively provides the extensibility in terms of the number of layers and also reduces the number of model parameters. For effective and stable optimization of the hierarchically recursive architecture of COMPASS, we introduce a combined RD loss function. Based on its superior inter-layer prediction capability, our COMPASS significantly improves the coding efficiency compared to the existing scalable coding methods [6, 25], and achieves comparable or even better coding efficiency compared to the single-layer coding for various scale factors. Note that the coding efficiency of the single-layer coding has been regarded as the upper bound of the scalable coding efficiency. Furthermore, to the best of our knowledge, our COMPASS is the first NN-based spatially scalable image compression method that supports arbitrary scale factors with high coding efficiency. Our contributions are summarized as: * The COMPASS is the first NN-based spatially scalable image compression method for _arbitrary_ scale factors. * The COMPASS adopts an inter-layer arbitrary scale prediction, called LIFF, which is based on implicit neural representation to reduce redundancy effectively as well as to support the arbitrary scale factors. Additionally, we propose a combined RD loss function to effectively train multiple layers. * Our COMPASS significantly outperforms the existing spatially scalable coding methods [6, 25]. Furthermore, to the best of our knowledge, the COMPASS is the first work that shows comparable or even better performance in terms of coding efficiency than the single-layer coding for various scale factors, based on a same image compression backbone. 
## 2 Related Work **Neural Network-based Image Compression.** Recently, there have been proposals to optimize neural network (NN)-based image compression methods in an end-to-end manner. Toderici _et al_. [38] first proposed a deep convolutional NN-based image compression method, while Balle _et al_. [3] and Theis _et al_. [36] adopted the entropy model-based approaches that jointly minimize the rate and distortion terms in the optimization phase. Subsequent models, such as hyperprior [4], auto-regressive models [27, 18], Gaussian Mixture Models [8, 19], non-local attention modules [21], channel-wise auto-regressive entropy models [28] and the checkerboard context model [14], have improved coding efficiency. There are also a few generative model-based studies [2, 26] for human perception-oriented compression. Recently, several NN-based variable-rate compression models [9, 10, 30, 33, 22, 20] have been studied to support the multiple compression quality levels with a single trained model. Despite the significant improvements in coding efficiency and functionality brought about by the NN-based image compression networks, there remains an issue with coding efficiency when encoding different versions of an image as described in Sec. 1. **Spatially Scalable Image Compression.** For OSMU applications that supports various-sized display devices, images often need to be compressed and transmitted to target devices with appropriate spatial sizes. To meet this requirement, the scalable extensions of traditional coding standards, H.264/AVC [40] and H.265/HEVC [35] have been developed as SVC [31] and SHVC [6], respectively. Recently, NN-based approaches for scalable image compression [37, 34, 16, 43, 13, 24, 23, 25] have also been proposed. However, most of these works focus on quality scalability and only Mei _et al_. [25] deals with spatial scalability. Mei _et al_. [25] proposed a hierarchical architecture which outperforms the simulcast coding and SVC [31], and shows comparable performance with SHVC [6] in terms of coding efficiency. However, it can only support fixed integer scale factors with powers of 2. Moreover, they didn't provide any experimental evidence on the extended multiple enhancement layers more than 2, although they proposed the layer extension concept. **Arbitrary Scale Super-Resolution.** With the advancement of neural networks, several recent works have proposed super-resolution with arbitrary scale factors, such as [15, 7, 12, 39, 41]. Hu _et al_. [15] introduced Meta-SR, the first neural network-based method for super-resolution with arbitrary scales. In Meta-SR, the Meta-Upscale module takes the relative coordinate and scale factor as input to dynamically predict the upscaling filters. Wang _et al_. [39] proposed an asymmetric super-resolution method using conditional convolution. Cheng _et al_. [7] presented a continuous image representation method with Local Implicit Image Function (LIIF), and achieved outstanding performance for large scale (\(\times 30\)) super-resolution which is out of training distribution. Xu _et al_. [41] used periodic encoding with the implicit function. Inspired by these arbitrary scale super-resolution methods [7, 15], we adopt them for the inter-layer arbitrary scale prediction in our COMPASS. We refer to this method as the Local Implicit Filter Function (LIFF), which can effectively reduce redundancy between adjacent layers with arbitrary scale factors. ## 3 Proposed Method ### Overall architecture Figure 2 depicts a flow diagram of our COMPASS. 
The COMPASS comprises two types of layers: a base layer (BL) that encodes the lowest resolution image, and one or more enhancement layers (ELs) that sequentially encode multiple higher resolution images of arbitrary scales. For spatially scalable coding of (\(K\)+1)-scaled images \(\{I^{0},...,I^{K}\}\) of gradually increasing sizes with arbitrary scale factors, the COMPASS operates with multiple coding in the BL and \(K\) ELs, each of which encodes the correspondingly scaled input image. It should be noted that the COMPASS exploits the shared modules for all the ELs, each of which recursively operates as depicted in Figure 2. In the BL, the smallest-sized input image \(I^{0}\) is fed into a CNN-based image compression module to reconstruct \(\hat{I}^{0}\). In the EL-\(k\), the corresponding input image \(I^{k}\) and the reconstructed image \(\hat{I}^{k-1}\) of the previous layer are fed into the current enhancement layer to reconstruct \(\hat{I}^{k}\). Specifically, in the EL-\(k\), the inter-layer arbitrary scale prediction module can effectively estimate and reduce the spatial redundancy between \(\hat{I}^{k-1}\) and \(I^{k}\) for arbitrary scale factors. Therefore, the residual compression module only encodes the resulting essential residues in reconstructing \(\hat{I}^{k}\) with high coding efficiency. Figure 2: The COMPASS supports spatially scalable coding of \(K\)+1 arbitrarily scaled versions of an image using a base layer (BL) and one or more enhancement layers (ELs). The EL-\(k\) (\(1\leq k\leq K\)) exploits a shared subnetwork that comprises the Inter-layer Arbitrary Scale Prediction and Residual Compression modules. \(I_{0}\) indicates the smallest-sized input image in the BL. \(I_{1},...,I_{K}\) are the input images in the ELs in an increasing order of scale factors, where \(I_{K}\) is the largest-sized input image. Note that the scale factor between two adjacent layers can be any arbitrary positive value. Figure 3 shows the overall architecture of our COMPASS. We describe the operation of COMPASS with \(K\)+1 layers as: \[\hat{I}^{k}=\begin{cases}IC(I^{k}),&\text{if }k=0\ \ (\text{BL})\\ \tilde{I}^{k}+\hat{I}^{k}_{res},&\text{if }k>0\ \ (\text{EL-}k)\end{cases} \tag{1}\] where, for \(k>0\), \[\tilde{I}^{k}=\psi(\hat{I}^{k-1},\mathbf{s}^{k},\mathbf{r}^{k})\ \ \text{and}\ \ \hat{I}^{k}_{res}=RC(I^{k}_{res}), \tag{2}\] where \(IC(\cdot)\) refers to an image compression module of the BL and \(RC(\cdot)\) refers to a residual compression module of the EL-\(k\), as shown in Figure 3. We adopt the Mean-scale [27] architecture for both the compression modules. \(\tilde{I}^{k}\in\mathbb{R}^{H^{k}\times W^{k}\times 3}\) refers to an arbitrarily upscaled prediction for the EL-\(k\), and it is predicted from the smaller reconstruction \(\hat{I}^{k-1}\) by the LIFF module, which is denoted as \(\psi(\cdot)\), and \(I^{k}_{res}\) indicates a residual image between \(I^{k}\) and \(\tilde{I}^{k}\). The LIFF module takes a local grid \(\mathbf{r}^{k}\in\mathbb{R}^{H^{k}W^{k}\times 2}\) and a scale token \(\mathbf{s}^{k}\in\mathbb{R}^{H^{k}W^{k}\times 2}\) as additional inputs, which are described in detail in Sec. 3.2. Since the outputs of the convolutional layers are progressively halved due to the stride-2 convolutions in the encoder part, the input to the encoder part is often padded to a power-of-2 size in a lump at the beginning. This actually deteriorates the coding efficiency in our image compression with arbitrary scale factors. 
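Before turning to how this padding issue is handled, the layer recursion of Eqs. (1)-(2) can be summarized in a short PyTorch-style sketch (our own illustration; `base_codec`, `liff_predict` and `residual_codec` are hypothetical stand-ins for \(IC(\cdot)\), \(\psi(\cdot)\) and \(RC(\cdot)\), not the authors' released implementation):

```python
import torch

def compass_reconstruct(images, base_codec, liff_predict, residual_codec):
    """Recursive BL/EL coding of Eq. (1), from the smallest image I^0 to the largest I^K."""
    recon = base_codec(images[0])                # base layer: I_hat^0 = IC(I^0)
    outputs = [recon]
    for k in range(1, len(images)):              # all ELs share the same two modules
        target_size = images[k].shape[-2:]       # arbitrary target resolution (H^k, W^k)
        pred = liff_predict(recon, target_size)  # I_tilde^k = psi(I_hat^{k-1}, s^k, r^k)
        residual = images[k] - pred              # I_res^k
        recon = pred + residual_codec(residual)  # I_hat^k = I_tilde^k + RC(I_res^k)
        outputs.append(recon)
    return outputs
```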
Therefore, we adopt a convolutional-layer-wise padding scheme where (i) a replicate padding with the padding size of 1 is performed if the width or height size of the input is an odd number in each convolutional layer of the encoder part of the residual compression module; and (ii) we crop out the padded region for the output of the corresponding convolutional layer of the decoder part. ### LIFF: Inter-layer arbitrary scale prediction To achieve high coding efficiency with the COMPASS, it is essential to effectively reduce the redundancy between adjacent layers. For this, we adopt an inter-layer arbitrary scale prediction method using a local implicit filter function (LIFF) which is based on the local implicit image function (LIIF) [7] and Meta-SR [15]. Our LIFF module first transforms the reconstruction \(\hat{I}^{k-1}\) of the previous layer into the feature domain and then increases its resolution to match the arbitrarily upscaled prediction \(\tilde{I}^{k}\) through a simple interpolation. Our LIFF module also generates the color prediction filter for each pixel coordinate and then estimate the RGB color pixel-wise by applying the generated filter to the extracted feature slice corresponding to the target pixel coordinate. The procedure of the LIFF module is divided into 3 stages: 1) Feature Extraction, 2) Filter Generation, 3) Pixel-wise Prediction, as illustrated in the orange box of Figure 3. **Feature Extraction.** We extract feature information from the reconstruction \(\hat{I}^{k-1}\) of the previous layer through an RDN-like feature extractor \(E_{\varphi}\)[42], and apply feature unfolding [7] and nearest-neighbor upsampling to generate the feature map \(\mathbf{F}^{k}\in\mathbb{R}^{H^{k}\times W^{k}\times C}\). **Filter Generation.** We generate the color prediction filter Figure 3: **Overall architecture of our COMPASS. It consists of a base layer (BL) depicted in the sky blue box and one or more enhancement layers (ELs) depicted in the light purple boxes which operate in an iterative manner. Note that we exploit the shared modules (LIFF and residual compression) for multiple ELs.** \(\mathbf{f}^{k}\in\mathbb{R}^{H^{k}W^{k}\times C\times 3}\) using a filter generation MLP as \[\mathbf{f}^{k}=\phi(\lceil\hat{\mathbf{F}}^{k},\mathbf{r}^{k},\mathbf{s}^{k}\rfloor;\theta), \tag{3}\] where \(\hat{\mathbf{F}}^{k}\in\mathbb{R}^{H^{k}W^{k}\times C}\) refers to the flattened feature map, \(\phi(\cdot)\) refers to the filter generation MLP with parameters \(\theta\), and \(\lceil\cdot\rfloor\) refers to channel-wise concatenation. The local grid \(\mathbf{r}^{k}\in\mathbb{R}^{H^{k}W^{k}\times 2}\) and the scale token \(\mathbf{s}^{k}\in\mathbb{R}^{H^{k}W^{k}\times 2}\) follow the same process as in the LIIF [7]. The local grid \(\mathbf{r}^{k}\) is a normalized relative coordinate between the reconstruction \(\hat{I}^{k-1}\) of the previous layer and the upscaled prediction \(\check{I}^{k}\), which is formulated as \(\mathbf{r}^{k}(i,j)=\mathbf{p}^{k}(i,j)-\mathbf{p}^{k-1}(i^{\prime},j^{\prime})\). \(\mathbf{p}^{k}(i,j)\) refers to a normalized coordinate of the upscaled prediction \(\check{I}^{k}\) at pixel coordinate \((i,j)\), and \(\mathbf{p}^{k-1}(i^{\prime},j^{\prime})\) indicates a corresponding normalized coordinate of the reconstruction \(\hat{I}^{k-1}\) of the previous layer at pixel coordinate \((i^{\prime},j^{\prime})\). We adopt the nearest-neighbor to find the pixel correspondence. 
The normalized coordinate is calculated as \(\mathbf{p}^{l}(i,j)=[-1+(2i+1)/H^{l},-1+(2j+1)/W^{l}]\), where \(i\in[0,H^{l}-1]\) and \(j\in[0,W^{l}-1]\). The scale token \(\mathbf{s}^{k}\) indicates the height/width ratio between \(\hat{I}^{k-1}\) and \(\check{I}^{k}\). \(\mathbf{s}^{k}\) then contains all the same ratio values of \((2\cdot H^{k-1}/H^{k},2\cdot W^{k-1}/W^{k})\). **Pixel-wise Prediction.** To determine the RGB color of the arbitrarily upscaled prediction \(\check{I}^{k}\) at pixel coordinate \((i,j)\), we apply the color prediction filter \(\mathbf{f}^{k}_{n}\) for the generated feature map \(\hat{\mathbf{F}}^{k}_{n}\) by a simple matrix multiplication as \[\check{I}^{k}(i,j)=\hat{\mathbf{F}}^{k}_{n}\odot\mathbf{f}^{k}_{n}, \tag{4}\] where \(n\in[0,H^{k}W^{k}-1]\) indicates the batch index number which is corresponding to the pixel coordinate \((i,j)\) of the prediction \(\check{I}^{k}\) via \(n=i+j\cdot H^{k}\). Note that the LIFF module can calculate this pixel-wise prediction for all coordinates in parallel as \(\check{I}^{k}=\hat{\mathbf{F}}^{k}\odot\mathbf{f}^{k}\). Figure 4 shows a predicted image \(\check{I}^{k}\) via the LIFF module and its associated residual image \(I^{k}_{res}\) to be compressed for the given reconstructed image \(\check{I}^{k-1}\) of the previous layer \(k\)-1, and an uncompressed input image \(I^{k}\) (ground truth) in the current layer \(k\). Compared to \(\check{I}^{k-1}\) in Figure 4-(a), \(\check{I}^{k}\) in Figure 4-(b) shows much closer result to \(I^{k}\) in Figure 4-(c), thus leading to a smaller amount of residues \(I^{k}_{res}\) in Figure 4-(d). ### Optimization We train the whole elements of our COMPASS in an end-to-end manner with the frozen pre-trained image compression module of the BL. To boost up the training, we use the separately pre-trained LIFF and residual compression modules. To train the COMPASS architecture, we use a combined RD loss function as: \[L=\sum_{k=1}^{K}R^{k}+\lambda\cdot D^{k}, \tag{5}\] where \(R^{k}\) and \(D^{k}\) represent a rate term and a distortion term for the EL-\(k\), respectively. As in other NN-based image compression methods [3, 4, 27, 18], the rate and distortion are jointly optimized, but we use the summation of those for the \(K\) ELs. It should be noted that we use the same \(\lambda\) value for the \(K\) ELs to maintain the R-D balance over the whole layers. The rate term \(R^{k}\) is the estimated rate amount for the EL-\(k\). Specifically, it is determined as the summation of cross-entropy values for latent representations \(\mathbf{y}^{k}\) and \(\mathbf{z}^{k}\), \(\mathbf{y}^{k}\) is the latent representation transformed from an input residual image \(I^{k}_{res}\) via the encoder network of the residual compression module, and \(\mathbf{z}^{k}\) is the latent representation transformed from the representation \(\mathbf{y}^{k}\) via the hyper-encoder network of the residual compression module, as in the previous hyperprior-based models [4, 27, 18]. The rate term \(R^{k}\) is represented as \(R^{k}=H^{k}(\mathbf{\tilde{y}}^{k}|\mathbf{\tilde{z}}^{k})+H^{k}(\mathbf{\tilde{z}}^{k})\), where \(H^{k}(\mathbf{\tilde{y}}^{k}|\mathbf{\tilde{z}}^{k})\) and \(H^{k}(\mathbf{\tilde{z}}^{k})\) are the cross-entropy terms for noisy latent representations \(\mathbf{\tilde{y}}^{k}\) and \(\mathbf{\tilde{z}}^{k}\) for the EL-\(k\), respectively. The cross-entropy values are calculated based on the Gaussian entropy model used in the Mean-scale model [27]. 
As in other NN-based image compression methods [3, 4, 27, 18], we sample the noisy latent representations \(\mathbf{\tilde{y}}^{k}\) and \(\mathbf{\tilde{z}}^{k}\) for the EL-\(k\) with the additive uniform noise to fit the samples to the approximate probability mass function (PMF) \(P(\cdot)\) of the discretized representations \(\mathbf{y}^{k}\) and \(\mathbf{z}^{k}\) for the EL-\(k\). \(D^{k}\) is a mean squared error (MSE) between the reconstructed image \(\check{I}^{k}\) and input image \(I^{k}\) for the EL-\(k\). \(\hat{I}^{k}\) is represented as \(\hat{I}^{k}=\check{I}^{k}+\hat{I}^{k}_{res}\), where \(\hat{I}^{k}_{res}=D^{RC}(\mathbf{\hat{y}}^{k})\) and \(\mathbf{\hat{y}}^{k}=Q(E^{RC}(I^{k}_{res}))\). Note that \(E^{RC}(\cdot)\) and \(D^{RC}(\cdot)\) refer to the encoder and decoder networks of the residual compression module \(RC(\cdot)\), respectively, and \(Q(\cdot)\) is a rounding function. Here, we use a rounded latent representation \(\mathbf{\hat{y}}^{k}\) rather than the noisy representation \(\mathbf{\tilde{y}}^{k}\) for calculating the distortion term. We first used the noisy representation \(\mathbf{\tilde{y}}^{k}\) as the input into the decoder network \(D^{RC}\), but we obtained very poor optimization results. On the other hand, when we feed the rounded representations \(\mathbf{\hat{y}}^{k}\) instead of \(\mathbf{\tilde{y}}^{k}\), the coding efficiency is much improved. The suboptimal performance of the COMPASS with noisy representations could be attributed to the propagation of small errors in the reconstructions, caused by the additive uniform noise, to the following ELs. This propagation of errors could result in a sig Figure 4: A predicted image via the LIFF module. (a) the reconstruction of the previous layer \(k\)-1, (b) the output (predicted image) of the LIFF module, (c) the input image of the current layer \(k\), (d) the residual image as the input of the residual compression module. nificant discrepancy between the training and inference, ultimately leading to poor results. The hierarchical and recursive operation of the COMPASS may also contribute to this issue. Whereas, using the rounded representation can certainly prevent the mentioned error propagation because it does not cause any discrepancy between the training and inference phases at all. To deal with the discontinuity due to the rounding operations, we just bypass the gradients backward. In contrast to the distortion term, it should be noted that our COMPASS still uses the noisy representation \(\tilde{\boldsymbol{y}}^{k}\) to calculate the rate term \(R^{k}\) to fit the samples to the approximate PMF \(P(\cdot)\). Further details in optimization are described in Appendix A. ## 4 Experiments We first describe the experimental setup in terms of two aspects: coding efficiency with a fixed scale factor of 2 and coding efficiency with arbitrary scale factors. Then we present the corresponding experimental results. \({}^{*}\). Following the typical validation procedures [6, 32] for the scalable coding, we measure the BD-rate performance for the final ELs with the largest scales (spatial sizes) of reconstructions, but we also provide the BD-rate results for the BL and intermediate ELs in Appendix B. For the simulcast and single-layer coding, we use the pre-trained models from CompressAI-Pytorch [5]. The BD-rate performance of SHVC [6] is measured using the All-Intra Mode of the SHVC reference software (SHM-12.4) [1] with the QP values of 30, 32, 34, 36, 38 and 40. 
The RGB inputs are converted into the YUV420 format and the reconstructions are converted back to the RGB format to achieve the best possible performance of SHVC [6]. More specifically, we use the Kodak Lossless True Color Image dataset [11] that consists of 24 768\(\times\)512-sized (or 512\(\times\)768-sized) images. For the downscaling of the input images, we use the bicubic interpolation function implemented in Pytorch [29]. For the feature extractor of the LIFF module in our COMPASS, we set the number of residual dense blocks (RDBs) and convolutional layers of each RDB to 4. The number of output channels and the growth rate of each RDB are set to 64 and 32, respectively. For the filter generation MLP of the LIFF module, we set the number of hidden layers to 5, each of which has 256 output channels. The COMPASS is built using the open-source CompressAI Pytorch library [5]. **Experimental results.** Table 1 shows the coding efficiency performance in terms of BD-rate for our COMPASS against the compared methods. It should be noted in Table 1 that each compared method becomes an anchor for comparison against our COMPASS. Therefore, the negative percentage values indicate that our COMPASS outperforms the corresponding methods in BD-rate coding efficiency by those amounts while the positive values imply under-performance of the COMPASS. Figure 5 shows the rate-PSNR performance curves for Table 1. As shown in Table 1 and Figure 5, our COMPASS significantly outperforms all the spatially scalable coding methods except the Single-layer (Mean-scale [27]). It is worthy to note that, although that Mei [25]'s enhanced version uses the same image compression module (Mean-scale [27]) as the COMPASS and focuses only on the fixed scale factor of 2, -14.23% of BD-rate gain is achieved by our COMPASS that supports various different scale factors, which will be discussed in Sec. 4.2. It is noted in Table 1 that our COMPASS has a less number of parameters, compared to Mei [25]'s method. Furthermore, impressively, our COMPASS achieves comparable results to the single-layer coding with the Mean-scale model [27], while the existing scalable coding methods [6] show considerably lower coding efficiency compared with their single-layer coding as described in [6, 32] due to low inter-layer prediction accuracy. In addition, our COMPASS shows the BD-rate gains of -24.97% and -35.87% compared to SHVC [6] at the BL and the EL-1, respectively. Further comparisons for the other layers are provided in Appendix B. ### Coding efficiency with arbitrary scale factors **Experimental setup.** We show the effective coding efficiency of our COMPASS at arbitrary scales by comparing it with the six coding methods. For this, five scale factors are used for each of two experiments. The first experiment is conducted with five two-layer scalabilities (one BL and one EL (EL-1: 1.2\(\times\), 1.6\(\times\), 2.0\(\times\), 2.4\(\times\), and 2.8\(\times\))) while the second one is with five three-layer scalabilities (one BL and two ELs (EL-1: 2.0\(\times\) and EL-2: 2.4\(\times\), 2.8\(\times\), 3.2\(\times\), 3.6\(\times\), and 4.0\(\times\))). More experiment results are provided for more combinations of scale factors in Appendix B. For Mei [25] which only supports a fixed scale factor of 2 between adjacent layers, we upscale or downscale the output images using bicubic interpolation to match with other scale factors. The other experimental conditions such as datasets and channel numbers are the same as those in Sec. 4.1. 
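As a usage note (an illustrative snippet of ours, not from the paper's codebase), the arbitrary-scale downscaling of the input images mentioned above can be reproduced with PyTorch's built-in bicubic interpolation; the image size and scale factor below are only examples:

```python
import torch
import torch.nn.functional as F

img = torch.rand(1, 3, 512, 768)                   # a Kodak-sized RGB image in NCHW layout
small = F.interpolate(img, scale_factor=1 / 1.6,   # e.g. the input for a 1.6x scalability layer
                      mode="bicubic", align_corners=False)
print(small.shape)                                 # torch.Size([1, 3, 320, 480])
```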
**Experimental results.** Table 2 shows the comparison results of the two-layer scalable coding. As shown, our COMPASS significantly outperforms all the spatially scalable coding methods over the entire range of the different scale factors from 1.2\(\times\) to 2.8\(\times\). Furthermore, it achieves comparable results to the single-layer coding with the Mean-scale model [27] over the entire range. Surprisingly, our COMPASS even outperforms it for the scale factors of 1.2\(\times\) and 1.6\(\times\). This is because the input images for the single-layer coding need to be padded to a multiple of 64 in order to be processed into the CNN architecture of the image compression network, which can lead to lower the coding efficiency. In contrast, our LIFF module is effective at handling the arbitrary scale factors. For the three-layer scalable coding, as shown in Table 3, our COMPASS also outperforms all the spatially scalable coding methods as the two-layer scalable coding, and even exhibits a superiority to the single-layer coding with the Mean-scale model [27] over the whole scale factors only except scale factor 4.0\(\times\). The superiority of our COMPASS stems from the fact that the LIFF module can well perform the inter-layer prediction for arbitrary scale factors. better than those from the other methods, and especially, the high-frequency components such as edges and textures are more clearly reconstructed. More visual comparison results are provided in Appendix C, where we also provide the reconstructions with the multi-layer configuration greater than three layers to verify the extensibility of our COMPASS in terms of the number of layers. ### Ablation study To verify the effectiveness of the proposed elements (or optimization strategy) in our COMPASS, we measure the coding efficiency of the ablated models and compare the results with those of the full model of our COMPASS in terms of BD-rate. In the comparison, the ablated elements are the LIFF module described in Sec. 3.2, the convolutional-layer-wise padding described in Sec. 3.1, and the adoption of rounded representations replacing the noisy representations for calculation of the distortion term in training described in Sec. 3.3. It should be noted for the ablated model without the LIFF module that a bicubic interpolation is used instead. For the ablated model without the convolutional-layer-wise padding, we pad the input images to match their sizes to the multiple of \(64\) in both vertical and horizontal axes. For the ablated model without using the rounded representations in training, we use the noisy representations instead of the rounded representations. As shown in Table 4, our full COMPASS model significantly outperforms all the ablated models, which represents that each proposed element of the COMPASS model effectively contributes to the enhancement of coding efficiency. ## 5 Conclusion In this paper, we propose a new NN-based spatially scalable image compression method, called COMPASS, which can achieve high coding efficiency while supporting arbitrary scale factors between adjacent layers, not limited to doubling the scales. To be an effective architecture, all the enhancement layers share the LIFF module and the residual compression module, which are recursively performed into higher scale factors. The LIFF module is adopted for the inter-layer arbitrary scale prediction, which can effectively reduce the spatial redundancy between layers for arbitrary scale factors. 
We also propose the combined RD loss function to effectively train multiple layers. Experimental results show that our COMPASS significantly outperforms SHVC [6], the simulcast coding, and the existing NN-based spatially scalable coding method [25] in terms of BD-rate for all combinations of scale factors. Our COMPASS also uses a smaller number of parameters than the existing NN-based spatially scalable coding method [25]. To the best of our knowledge, the COMPASS is the first work that shows comparable or even better performance in terms of coding efficiency than the single-layer coding for various scale factors, based on a same image compression backbone. \begin{table} \begin{tabular}{c c c|c} \hline \hline Inter-layer & Conv. layer & Rounded & BD-rate\(\downarrow\) \\ prediction & padding & latent rep. & (vs. full model) \\ \hline Bicubic & ✓ & ✓ & 9.27\% \\ LIFF & ✗ & ✓ & 18.95\% \\ LIFF & ✓ & ✗ & 13.00\% \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study results about the proposed elements or optimization strategies in our COMPASS. Figure 6: Visual comparison results for _kodim23,png_, _kodim03.png_, _kodim17.png_ images in Kodak Lossless True Color Image dataset [11] (best viewed in digital format). The ‘acc. bits’ indicates the accumulated bits up to the final EL. We match the accumulated bits among the compared methods as much as possible. **Zoom for better visual comparison**. ## Acknowledgement This work was supported by internal fund/grant of Electronics and Telecommunications Research Institute (ETRI). [23YC1100, Technology Development for Strengthening Competitiveness in Standard IPR for communication and media]
Recently, neural network (NN)-based image compression research has been actively pursued, showing excellent performance compared to conventional methods. However, most studies have focused on non-scalable image compression (single-layer coding), while spatially scalable image compression has received less attention despite its many expected applications. In this paper, we propose COMPASS, a novel NN-based spatially scalable image compression method that supports arbitrary-scale spatial scalability. The proposed COMPASS has a flexible structure in which the number of layers and their scale factors can be determined arbitrarily during inference. To reduce the spatial redundancy between adjacent layers for arbitrary scale factors, COMPASS adopts LIFF, an inter-layer arbitrary-scale prediction method based on implicit neural representation.
2310.20222
Cosmological singularities in $f(T,φ)$ gravity
The pursuit of understanding the mysteries surrounding dark energy has sparked significant interest within the field of cosmology. While conventional approaches, such as the cosmological constant, have been extensively explored, alternative theories incorporating scalar field-based models and modified gravity have emerged as intriguing avenues. Among these, teleparallel theories of gravity, specifically the $f(T,\phi)$ formulation, have gained prominence as a means to comprehend dark energy within the framework of teleparallelism. In this study, we investigate two well-studied models of teleparallel dark energy and examine the presence of cosmological singularities within these scenarios. Using the Goriely-Hyde procedure, we examine the dynamical systems governing the cosmological equations of these models. Our analysis reveals that both models exhibit Type IV singularities, but only for a limited range of initial conditions. These results could indicate a potential edge for teleparallel cosmological models over other modified gravity counterparts, as the models we examine appear to allow only weak singularities, and only under non-generic conditions.
Oem Trivedi, Maxim Khlopov, Jackson Levi Said, Rafael C. Nunes
2023-10-31T06:48:58
http://arxiv.org/abs/2310.20222v2
# Cosmological singularities in \(f(T,\phi)\) gravity ###### Abstract The pursuit of understanding the mysteries surrounding dark energy has sparked significant interest within the field of cosmology. While conventional approaches, such as the cosmological constant, have been extensively explored, alternative theories incorporating scalar field-based models and modified gravity have emerged as intriguing avenues. Among these, teleparallel theories of gravity, specifically the \(f(T,\phi)\) formulation, have gained prominence as a means to comprehend dark energy within the framework of teleparallelism. In this study, we investigate two well-studied models of teleparallel dark energy and examine the presence of cosmological singularities within these scenarios. Using the Goriely-Hyde procedure, we examine the dynamical systems governing the cosmological equations of these models. Our analysis reveals that both models exhibit Type IV singularities, but only for a limited range of initial conditions. These results could indicate a potential edge for teleparallel cosmological models over their other modified gravity counterparts, as the models we examine seem to be only allowing for weak singularities that too under non general conditions. Introduction Observations of the late-time acceleration of the Universe came as a surprise to the cosmological community [1]. Since then, extensive efforts have been dedicated to explaining this expansion. Standard approaches, such as the Cosmological constant [1, 2, 3, 4, 5], as well as more exotic scenarios like Modified gravity theories [6, 7, 8], and recent proposals for the direct detection of dark energy [9], have been pursued. One fascinating avenue for understanding dark energy is through Quintessence, where a scalar field drives the late-time cosmic acceleration of the universe [10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. Quintessence is particularly interesting as it represents the simplest scalar field dark energy scenario that avoids issues like ghosts or Laplacian instabilities. In quintessence models, the acceleration of the universe is driven by a slowly varying scalar field with a potential \(V(\phi)\), similar to the mechanism of slow-roll inflation. However, in this case, contributions from non-relativistic matter, such as baryons and dark matter, cannot be neglected. It is worth noting that simple models of Quintessence have been shown to be in conflict with the current H0 tension [21, 22, 23], suggesting that simple Quintessence models may perform worse than \(\Lambda\)-CDM models in light of the current H0 data [24].This leads one to consider other more exotic possibilities for scalar field dark energy models and one such possibility is to consider models in Teleparallel gravity.Teleparallel gravity, a theory based on torsion, provides an alternative description of gravity [25, 26, 27, 28, 29, 30, 31, 32, 33], where gravitation is mediated by torsion. In this approach, the Lagrangian density of Teleparallel Equivalent of General Relativity (TEGR) is proportional to the torsion scalar \(T\). In TEGR, the tetrad field and spin connection pair replace the metric tensor and Levi-Civita connection, respectively, while the teleparallel connection replaces the usual connection [30, 31]. Consequently, at the level of the dynamical equations, curvature-based gravitational theories are equivalent to tensor-based theories [30, 34]. 
By introducing an arbitrary function \(f(T)\) in place of the torsion scalar \(T\), a generalization of TEGR known as \(f(T)\) gravity is obtained [35, 36, 37, 38, 39, 40, 41], leading to new cosmological models. In this framework, the tetrad fields, which form the orthogonal basis for the tangent space, serve as the dynamical variables of teleparallel gravity. The torsion tensor is constructed using the first derivative of the tetrad product. The field equations are derived by varying the action with respect to the tetrad fields, while the spin connection preserves the local Lorentz invariance and contributes to the equations of motion. For further exploration of \(f(T)\) gravity, refer to [42, 43, 44, 45, 46, 47, 48]. The investigation of scalar-torsion theories with non-minimal coupling between the torsion scalar and a scalar field was carried out in the context of dark energy [49, 50], including studies with arbitrary non-minimal coupling functions and tachyon terms for the scalar field [51, 52]. Another extension of \(f(T)\) gravity is the generalized scalar-torsion \(f(T,\phi)\) gravity, where \(\phi\) represents the canonical scalar field, and the gravitational action incorporates a non-minimal coupling between the scalar field and torsion scalar [53]. Additionally, within the covariant teleparallel framework, a new class of theories has been proposed, where the action depends on the scalar field and an arbitrary function of the torsion scalar [54]. Recently, a significant amount of research has been dedicated to exploring the various types of cosmological singularities that may occur in the present and distant future of the Universe [55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68]. However, it is often challenging to classify and study cosmological singularities in highly unconventional cosmologies influenced by considerations of quantum gravity or phenomenology. Traditional methods may not be applicable in these cases. Therefore, alternative approaches are necessary to identify cosmological singularities in exotic cosmologies. In this context, the Goriely-Hyde procedure, a particular method in dynamical systems, can be extremely useful [69]. Understanding the singularity structure of dynamical systems is an intriguing aspect, especially when these systems describe significant physical phenomena. Although various approaches have been proposed to investigate the singularity structure of autonomous dynamical systems, the Goriely-Hyde procedure has proven particularly valuable for cosmological studies due to the abundance of interesting dynamical systems in cosmology [70]. Previous applications of the Goriely-Hyde method have explored finite and non-finite time singularities in specific quintessence models [71, 72, 73]. However, a comprehensive analysis of cosmological singularities teleparallel models of dark energy using this approach is still lacking, and our work aims to address this gap.The study of the cosmological dynamics and stability of \(f(t,\phi)\) dark energy was conducted in [74], while an analysis of scalar perturbations was performed in [75]. A recent full dynamical systems analysis of \(f(t,\phi)\) dark energy for two particular models was done in [76] and we intend to use the dynamical systems approach developed there to pursue our singularity analysis. 
In Section II, we provide a concise overview of the Goriely-Hyde method while in section III we demonstrate the diverse characteristics of singularities in \(f(T,\phi)\) models, including both finite and infinite-time occurrences which can occur for both \(f(t,\phi)\) models considered in [76]. Subsequently, in Section IV, we consider two well-motivated ansatz for the Hubble parameter and classify the types of cosmological singularities (Types I-IV) that can arise within these regimes. Finally, we conclude our work in Section V. ## 2 Teleparallel gravity General relativity (GR) can account for most observed phenomena with appropriate modifications considered in the matter sector. In this context, the widely accepted concordance model combines \(\Lambda\)-CDM cosmology with inflation. However, the enigmatic nature of certain particle species remains a puzzle despite significant progress in physics beyond the standard model of particle physics. It is also plausible that the standard model of particle physics might not require substantial restructuring to address these observational challenges. Instead, it could be the gravitational sector that requires further examination. This could involve extensions of GR or modifications beyond GR as alternatives to its original formulation. The scientific literature has witnessed numerous proposals for new theories of gravity, motivated by various phenomena, theoretical approaches, or even quantum physics. One intriguing possibility that has garnered increasing attention in recent decades is teleparallel gravity, where torsion replaces curvature as the mechanism responsible for generating gravitational fields. This theory replaces the traditional Levi-Civita connection, which is curvature-based, with a teleparallel connection based on torsion. Numerous publications on this topic have emerged in the literature. Among the theories arising from torsion-based approaches to gravity is the teleparallel equivalent of general relativity (TEGR), which is dynamically equivalent to GR and thus indistinguishable from it through classical experiments. In teleparallel gravity (TG), one typically assumes an action of the form: \[\mathcal{S}_{\mathrm{TG}}:=\mathcal{S}_{\mathrm{g}}[e,\omega]+\mathcal{S}_{ \mathrm{m}}[e,\chi]\,, \tag{1}\] Here, the gravitational part \(\mathcal{S}_{\mathrm{g}}\) of the action depends on the tetrad \(e^{A}{}_{\mu}\) and the spin connection \(\omega^{A}{}_{B\mu}\), while the matter part depends on the tetrad \(e^{A}{}_{\mu}\) and arbitrary matter fields \(\chi^{I}\), but not on the spin connection [77, 78]. This is because we assume that the hypermomentum vanishes, thereby preventing this coupling. Introducing a dependence on spin would effectively introduce a second matter tensor, resulting from the variation of the matter Lagrangian with respect to the spin connection. The variation of the matter part of the action, after integration by parts to eliminate derivatives acting on field variations, can be expressed as follows: \[\delta\mathcal{S}_{\mathrm{m}}=\int d^{4}xe(\Theta_{A}{}^{\mu}\delta e^{A}{}_ {\mu}+\Omega_{I}\delta\chi^{I}) \tag{2}\] Here, \(\Omega_{I}=0\) represents the matter field equations, and \(\Theta_{A}{}^{\mu}\) denotes the energy-momentum tensor. 
The corresponding variation of the gravitational action takes the form: \[\delta\mathcal{S}_{\mathrm{g}}=-\int d^{4}xe(W_{A}{}^{\mu}\delta e^{A}{}_{\mu} +Y_{A}{}^{B\mu}\delta\omega^{A}{}_{B\mu}) \tag{3}\] The tensors \(W_{A}{}^{\mu}\) and \(Y_{A}{}^{B\mu}\) arise from the variation and integration by parts, with their specific form depending on the particular theory under consideration. The explicit expression for \(W_{A}{}^{\mu}\) can be found for several theories in [79]. For brevity, we omit \(Y_{A}{}^{B\mu}\) here, as it turns out to be redundant in deriving the field equations, which can be entirely determined from \(W_{A}{}^{\mu}\) alone. Furthermore, by varying with respect to the tetrad, one can derive the field equations: \[W_{A}{}^{\mu}=\Theta_{A}{}^{\mu}\,. \tag{4}\] An alternative representation of the field equations, more commonly used, is obtained by transforming the first index into a spacetime index with the tetrad while lowering the second index: \[W_{\mu\nu}=e^{A}{}_{\mu}g_{\rho\nu}W_{A}{}^{\rho}\,,\quad\Theta_{\mu\nu}=e^{A} {}_{\mu}g_{\rho\nu}\Theta_{A}{}^{\rho}\,, \tag{5}\] This yields the field equations in the form: \[W_{\mu\nu}=\Theta_{\mu\nu}\,. \tag{6}\] However, deriving the field equations for the spin connection is more complex, as it must satisfy the conditions of being flat, \(R^{\alpha}{}_{\beta\mu\nu}=0\), and metric-compatible, \(\nabla_{\alpha}g_{\mu\nu}=0\), by definition. Various approaches exist to maintain these properties during the variation procedure [80, 81]. For considering cosmological scenarios in teleparallel theories, it is helpful to consider an FLRW metric of the form [79] \[ds^{2}=N(t)^{2}dt^{2}-a(t)^{2}\Big{[}\frac{dr^{2}}{1-kr^{2}}+r^{2}(d\vartheta^{ 2}+\sin^{2}\vartheta d\varphi^{2})\Big{]}\,, \tag{7}\] where \(N(t)\) and \(a(t)\) represent the lapse function and scale factor respectively. In the case of flat universes (\(k=0\)) one can write \[\mathrm{d}s^{2}=N(t)^{2}\mathrm{d}t^{2}-a^{2}(t)\left(\mathrm{d}x^{2}+\mathrm{ d}y^{2}+\mathrm{d}z^{2}\right)\,, \tag{8}\] and this results in the diagonal tetrad \[e^{A}_{\mu}=\mathrm{diag}\left(N(t),\,a(t),\,a(t),\,a(t)\right)\,, \tag{9}\] which turns out to be in the Weitzenbock gauge for the extensions to TEGR. An important remark here is that the above tetrad (with vanishing spin connection) is the only one that has the property that both the tetrad and the teleparallel connection obey cosmological symmetries for flat FLRW. One can also relax the condition that the teleparallel connection enjoys the symmetries of cosmology, but then, the corresponding cosmological equations would not respect the symmetries of cosmology. If we use the diagonal tetrad (9) in Cartesian coordinates one can, for example, find the modified FLRW equations for \(f(T)\) gravity as \[-6H^{2}f_{T}-\frac{1}{2}f =\kappa^{2}\rho\,, \tag{10a}\] \[-2f_{T}(3H^{2}+\dot{H})-2H\dot{f}_{T}-\frac{1}{2}f =-\kappa^{2}p\,, \tag{10b}\] where dots are derivatives with respect to time, so that \(\dot{f}_{T}=f_{TT}\dot{T}\). One can further obtain modified FLRW equations for other teleparallel appraoches, like \(f(T,B)\) gravity being \[3H\dot{f}_{B}-3H^{2}(3f_{B}+2f_{T})-3f_{B}\dot{H}-\frac{1}{2}f(T, B) =\kappa^{2}\rho\,, \tag{11a}\] \[-(3H^{2}+\dot{H})(2f_{T}+3f_{B})-2H\dot{f}_{T}+\ddot{f}_{B}-\frac {1}{2}f(T,B) =-\kappa^{2}p\,. 
\tag{11b}\] Or for the Teleparallel Gauss-Bonet models being (fixing the gauge such that \(N=1\)) \[-6f_{T}H^{2}-12H^{3}\dot{f}_{T_{G}}+12f_{T_{G}}H^{2}\left(\dot{H}+H^{2} \right)-\frac{1}{2}f(T,T_{G})=\kappa^{2}\rho\,, \tag{12a}\] \[-2H\dot{f}_{T}-2f_{T}\left(\dot{H}+3H^{2}\right)+12f_{T_{G}}H^{2} \left(\dot{H}+H^{2}\right)-8H\dot{f}_{T_{G}}\left(\dot{H}+H^{2}\right)\] \[-4H^{2}\ddot{f}_{T_{G}}-\frac{1}{2}f(T,T_{G})=-\kappa^{2}p\,, \tag{12b}\] where \(f_{T_{G}}=\frac{\partial f}{\partial T_{G}}T_{G}\). It is worth mentioning that \(B_{G}=0\) in flat FLRW and then the standard Gauss-Bonnet term is just \(\mathcal{G}=T_{G}\). While there are a multitude of approaches of dealing with such exotic cosmological systems like reconstruction methods or Noether-symmetries approaches, dynamical systems methods are also an efficient way to understand the dynamics of the models. Dynamical systems allow for the extraction of key features of the cosmology without solving the evolution equations directly (in an exact form). Thus, it then becomes possible to describe the overall nature of the gravitational theory and henceforth determine whether the model can generate a viable cosmological evolution. This therefore serves as a very useful tool especially in models where it is difficult to extract any cosmological solutions from directly solving the field equations such as in f(R) gravity. In the cases we have considered so far, one can for example write the equations (10) as an autonomous dynamical system using the variables \[\tilde{x}=-\frac{\ddot{f}}{H\dot{f}}\,,\quad\tilde{y}=\frac{f}{4H^{2}\dot{f}} \,,\quad\tilde{z}=\frac{3H^{2}+\dot{H}}{H^{2}}\,, \tag{13}\] These variables were considered in [82] and the authors considered the following scenarios: (i) absence of matter fluids and (ii) presence of dust and radiation components. Furthermore, the case when the parameter \(m=-\frac{\ddot{H}}{H^{3}}\) takes on constant values \(m=0\) (quasi-de Sitter evolution) and \(m=-\frac{9}{2}\) (matter dominated evolution) was explored. One can also write (11) in the dynamical systems method using the phase space variables \[\tilde{x}:=\frac{\dot{\phi}}{\sqrt{1+H^{2}}}\,,\quad\tilde{y}:=\frac{V(\phi)} {6H^{2}}\,,\quad\tilde{z}:=\frac{(-T)^{n}}{1+H^{2}}\,,\quad\eta:=\frac{H}{ \sqrt{1+H^{2}}}\,. \tag{14}\] The cosmological dynamics of \(f(T,T_{G})\) gravity was investigated in [83]. In particular, the model \(f(T,T_{G})=-T+\alpha_{1}\sqrt{T^{2}+\alpha_{2}T_{G}}\), where \(\alpha_{1,2}\neq 0\) are constants, was studied. In this case, the presence of a perfect dust fluid was assumed and the following dimensionless phase-space parameters were defined \[\tilde{x}=\sqrt{1+\frac{2\alpha_{2}}{3}\left(1+\frac{\dot{H}}{H^{2}}\right)} \,,\quad\Omega_{\rm m}=\frac{\kappa^{2}\rho_{\rm m}}{3H^{2}}\,. \tag{15}\] While we have briefly discussed several approaches to teleparallel gravity and their status quo in cosmology, what we are most interested in this work are the \(f(T,\phi)\) models and we shall now discuss them in more detail.The action of TEGR (Teleparallel Equivalent of General Relativity) can be generalized to \(f(T)\) gravity by introducing a scalar field \(\phi\). The action, including matter and radiation, can be expressed as [75, 76]: \[S=\int d^{4}x\,e[f(T,\phi)+P(\phi)X]+S_{m}+S_{r}\,, \tag{16}\] Here, \(e=\det[e_{\mu}^{A}]=\sqrt{-g}\) represents the determinant of the tetrad field. 
The tetrad field, \(e_{\mu}^{A}\), \(A=0,1,2,3\), is related to the metric tensor \(g_{\mu\nu}\) and the Minkowski tangent space metric \(\eta_{AB}\) as \(g_{\mu\nu}=\eta_{AB}e_{\mu}^{A}e_{\nu}^{B}\), where \(\eta_{AB}=(-1,1,1,1)\). The tetrad satisfies the orthogonality condition \(e_{A}^{\mu}e_{\mu}^{B}=\delta_{A}^{B}\), and the spin connection is denoted by \(\omega_{B\mu}^{A}\). The function \(f(T,\phi)\) represents an arbitrary function of the scalar field \(\phi\) and the torsion scalar \(T\), while \(X=-\partial_{\mu}\phi\partial^{\mu}\phi/2\) represents the kinetic term of the field. This general action includes non-minimally coupled scalar-torsion gravity models with the coupling function \(f(T,\phi)\), \(f(T)\) gravity, and minimally coupled scalar field. For a flat FLRW space-time background, the field equations derived from the action are [76]: \[f(T,\phi)-P(\phi)X-2Tf_{,T} = \rho_{m}+\rho_{r} \tag{17}\] \[f(T,\phi)+P(\phi)X-2Tf_{,T}-4\dot{H}f_{,T}-4H\dot{f}_{,T} = -p_{r}\] (18) \[-P_{,\phi}X-3P(\phi)H\dot{\phi}-P(\phi)\ddot{\phi}+f_{,\phi} = 0 \tag{19}\] In these equations, the Hubble parameter is denoted as \(H\equiv\frac{\dot{a}}{a}\), where an overdot represents a derivative with respect to cosmic time \(t\). The energy density for matter and radiation are denoted as \(\rho_{m}\) and \(\rho_{r}\) respectively, and the pressure at the radiation era is \(p_{r}\). The torsion scalar \(T\) is given by \(T=6H^{2}\). The non-minimal coupling function \(f(T,\phi)\) is defined as [54]: \[f(T,\phi)=-\frac{T}{2\kappa^{2}}-G(T)-V(\phi)\,, \tag{20}\] where \(V(\phi)\) is the scalar potential and \(G(T)\) is an arbitrary function of the torsion scalar. In the matter-dominated era, \(\omega_{m}=\frac{p_{m}}{\rho_{m}}=0\), and in the radiation era, \(\omega_{r}=\frac{p_{r}}{\rho_{r}}=1/3\). In this case, Eqs. (17)-(19) reduce to: \[\frac{3}{\kappa^{2}}H^{2}=P(\phi)X+V(\phi)-2TG_{,T}+G(T)+\rho_{m}+\rho_{r}\,, \tag{21}\] \[-\frac{2}{\kappa^{2}}\dot{H}=2P(\phi)X+4\dot{H}(G_{T}+2TG_{,TT})+\rho_{m}+\frac{4} {3}\rho_{r}\,, \tag{22}\] \[P(\phi)\ddot{\phi}+P_{,\phi}(\phi)X+3P(\phi)H\dot{\phi}+V_{,\phi}(\phi)=0\,. \tag{23}\] The modified Friedmann equations, taking into account dark energy, become: \[\frac{3}{\kappa^{2}}H^{2} = \rho_{m}+\rho_{r}+\rho_{de}\,, \tag{24}\] \[-\frac{2}{\kappa^{2}}\dot{H} = \rho_{m}+\frac{4}{3}\rho_{r}+\rho_{de}+p_{de}\,. \tag{25}\] Comparing Eq. (21) with Eq. (24), and Eq. (22) with Eq. (25), we can extract the energy density (\(\rho_{de}\)) and pressure (\(p_{de}\)) for the dark energy sector: \[\rho_{de}=P(\phi)X+V(\phi)-2TG_{,T}+G(T)\,, \tag{26}\] \[p_{de}=P(\phi)X-V(\phi)+2TG_{,T}-G(T)+4\dot{H}(G_{,T}+2TG_{,TT})\,. \tag{27}\] For simplicity we set \(P(\phi)=1\) and consider the well studied exponential potential, \(V(\phi)=V_{0}e^{-\lambda\phi}\), where \(\lambda\) is a constant. In order to proceed further and really carry out the analysis we want to, we need a form for \(G(T)\) and here we will be considering two forms which were studied in [76] and will be carrying out the Goriely-Hyde analysis on both of these models 1. Footnote 1: It is worth discussing any effects of the separation of T from \(\phi\) here in the action (20). If one, for example, considers actions like those in [50] where terms like \(T\phi^{2}\) come into play then one could potentially expect that results on singularities could be affected in some ways, as one would not be wrong to think that such a coupling between the torsion scalar and field terms could have some significant outcomes. 
Although a definitive answer on this aspect would need one to do a proper analysis similar to the one we have done for the models considered in our paper, we do feel very interesting results may await an endeavour like this.

## 3 The Goriely-Hyde Procedure

The Goriely-Hyde technique [69] offers an elegant method for identifying finite-time singularities in dynamical systems. The procedure can be summarized as follows:

* We begin by considering a dynamical system governed by \(n\) differential equations given by: \[\dot{x}_{i}=f_{i}(x)\,, \tag{28}\] where \(i=1,2,...,n\). Here, \(t\) represents time, but in quintessence models, it can be better represented as the number of e-foldings, denoted by \(N\). We identify the parts of the equation \(f_{i}\) that become significant as the system approaches the singularity. These significant parts are referred to as "dominant parts" [69]. Each dominant part represents a mathematically consistent truncation of the system, denoted as \(\hat{f}_{i}\). Consequently, the system can be expressed as: \[\dot{x}_{i}=\hat{f}_{i}(x)\,. \tag{29}\]
* Without loss of generality, the variables \(x_{i}\) near the singularity can be represented as: \[x_{i}=a_{i}\tau^{p_{i}}\,, \tag{30}\] where \(\tau=t-t_{c}\), and \(t_{c}\) is an integration constant. By substituting equation (30) into equation (29) and equating the exponents, we can determine the values of \(p_{i}\) for different \(i\), which collectively form the vector \(\mathbf{p}=(p_{1},p_{2},...,p_{n})\). Similarly, we calculate the values of \(a_{i}\) to form the vector \(\vec{a}=(a_{1},a_{2},...,a_{n})\). It is worth noting that if \(\vec{a}\) comprises solely real entries, it corresponds to finite-time singularities. On the other hand, if \(\vec{a}\) contains at least one complex entry, it may lead to non-finite-time singularities. Each \((a_{i},p_{i})\) set is referred to as a dominant balance of the system.
* Next, we compute the Kovalevskaya matrix defined as: \[R=\begin{pmatrix}\frac{\partial f_{1}}{\partial x_{1}}&\frac{\partial f_{1}}{\partial x_{2}}&.&.&\frac{\partial f_{1}}{\partial x_{n}}\\ \frac{\partial f_{2}}{\partial x_{1}}&\frac{\partial f_{2}}{\partial x_{2}}&.&.&\frac{\partial f_{2}}{\partial x_{n}}\\.&.&.&.&.\\.&.&.&.&.\\ \frac{\partial f_{n}}{\partial x_{1}}&\frac{\partial f_{n}}{\partial x_{2}}&.&.&\frac{\partial f_{n}}{\partial x_{n}}\end{pmatrix}-\begin{pmatrix}p_{1}&0&.&.&0\\ 0&p_{2}&.&.&0\\.&.&.&.&.\\.&.&.&.&.\\ 0&0&.&.&p_{n}\end{pmatrix}\,. \tag{31}\] After obtaining the Kovalevskaya matrix, we evaluate it for different dominant balances and determine the eigenvalues. If the eigenvalues take the form \((-1,r_{2},r_{3},...,r_{n})\), where \(r_{2},r_{3},...>0\), then the singularity is regarded as general and will occur regardless of the initial conditions of the system. Conversely, if any of the eigenvalues \(r_{2},r_{3},...\) are negative, the singularity is considered local and will only occur for certain sets of initial conditions.

## 4 Goriely-Hyde analysis of \(f(T,\phi)\) models

### Model I

In the first model, we consider a specific form for \(G(T)\) as given in [76, 84]: \[G(T)=\beta T\ln\left(\frac{T}{T_{0}}\right)\,, \tag{32}\] where \(\beta\) is a constant and \(T_{0}\) represents the value of \(T\) at the initial epoch. This model, which has been investigated in [84], exhibits physically favorable critical points and offers an interesting approach for modeling the evolution of the Universe. By substituting this expression into Eqs.
(26)-(27), the effective dark energy density and pressure terms are reduced to: \[\rho_{de}=\frac{\dot{\phi}^{2}}{2}+V(\phi)-6\beta H^{2}\ln\left(\frac{6H^{2}} {T_{0}}\right)-12H^{2}\beta\,, \tag{33}\] \[p_{de}=\frac{\dot{\phi}^{2}}{2}-V(\phi)+6\beta H^{2}\ln\left(\frac{6H^{2}}{T_{0}} \right)+12H^{2}\beta+4\dot{H}\left(\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+3 \beta\right)\,. \tag{34}\] In order to analyze the dynamics of the scalar-torsion \(f(T,\phi)\) gravity model, [76] introduced a set of dimensionless phase space variables to represent the system in an autonomous form. These variables are defined as 2 follows: Footnote 2: Note that it is not necessary that the variables will be defined in this same way for all cosmological paradigms, as we shall see later in the paper too. In fact, one can use different variables for the same paradigm too if required or wished for. See, for example, [85, 86, 87] for extended discussions on the same \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,\qquad\quad y=\frac{ \kappa\sqrt{V}}{\sqrt{3}H}\,,\qquad\quad z=-4\beta\kappa^{2}\,,\qquad\quad u= -2\beta\ln\left(\frac{T}{T_{0}}\right)\kappa^{2}\,, \tag{35}\] \[\rho=\frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}\,,\qquad\quad\lambda =-\frac{V_{,\phi}(\phi)}{\kappa V(\phi)}\,,\qquad\quad\Theta=\frac{V(\phi) \,,V_{,\phi\phi}}{V_{,\phi}(\phi)^{2}}\,. \tag{36}\] These dimensionless variables allow for a simplified representation of the system's dynamics and facilitate the analysis of the scalar-torsion \(f(T,\phi)\) gravity model. Using these variables, one can finally write the cosmological equations of this model as a dynamical system as follows : \[\frac{dx}{dN} =-\frac{x\rho^{2}-3x\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}-3x+ \sqrt{\frac{3}{2}}\lambda y^{2}\,,\] \[\frac{dy}{dN} =\frac{-y\rho^{2}+3y\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}- \sqrt{\frac{3}{2}}\lambda yx\,,\] \[\frac{du}{dN} =\frac{z\rho^{2}-3z\left(u-x^{2}+y^{2}+z-1\right)}{2u+3z-2}\,,\] \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+u+3x^{2}-3y^{2}+3z-1\right)}{2u+3z-2}\,,\] \[\frac{dz}{dN} =0\,,\] \[\frac{d\lambda}{dN} =-\sqrt{6}(\Theta-1)x\lambda^{2}\,.\] For our analysis, we would be considering \(\lambda\) to be a constant which is not equal to zero (which would again mean that we are considering an exponential potential form as we remarked earlier ). Furthermore, we consider that \(2u>>3z-2\) ( which can be justified considering the forms of z and u we have described earlier ). This would allow us to write the dynamical equations as \[\frac{dx}{dN} =-\frac{x\rho^{2}-3x\left(u-x^{2}+y^{2}+z-1\right)}{2u}-3x+\sqrt{ \frac{3}{2}}\lambda y^{2}\,, \tag{37}\] \[\frac{dy}{dN} =\frac{-y\rho^{2}+3y\left(u-x^{2}+y^{2}+z-1\right)}{2u}-\sqrt{ \frac{3}{2}}\lambda yx\,,\] (38) \[\frac{du}{dN} =\frac{z\rho^{2}-3z\left(u-x^{2}+y^{2}+z-1\right)}{2u}\,,\] (39) \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+u+3x^{2}-3y^{2}+3z-1\right)}{2u}\,,\] (40) \[\frac{dz}{dN} =0\,,\] (41) \[\frac{d\lambda}{dN} =0,. \tag{42}\] Now we are in the right position to start off our singularity analysis. 
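Before going through the truncations one by one, it may help to spell out the mechanics of the procedure in code. The following is a minimal SymPy sketch of the three Goriely-Hyde steps applied to the first truncation considered below (Eq. (43)); the variable ordering \((x,y,u,\rho)\), the treatment of \(z\) and \(\lambda\) as constants, and the helper names are our own illustrative choices, not code from [69] or [76].

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
lam, zc = sp.symbols('lambda z', positive=True)     # the constants lambda and z of the text

X = sp.symbols('x y u rho')            # state variables, ordered as (x, y, u, rho)
a = list(sp.symbols('a1:5'))           # dominant-balance coefficients a_1..a_4
p = list(sp.symbols('p1:5'))           # leading exponents p_1..p_4
x, y, u, rho = X

# Truncation of the Model I system (37)-(40) considered first below (Eq. (43))
f_hat = sp.Matrix([sp.sqrt(sp.Rational(3, 2)) * lam * y**2,
                   3 * y**3 / (2 * u),
                   3 * zc / (2 * u),
                   rho**3 / (2 * u)])

# Step 1: insert the ansatz x_i = a_i * tau**p_i (Eq. (30)) and balance powers of tau
ansatz = {X[i]: a[i] * tau**p[i] for i in range(4)}
rhs = f_hat.subs(ansatz)
lhs = sp.Matrix([a[i] * p[i] * tau**(p[i] - 1) for i in range(4)])

def tau_power(expr):
    # exponent k of a monomial C * tau**k (each truncated component is a monomial)
    return sp.simplify(tau * sp.diff(expr, tau) / expr)

p_sol = sp.solve([sp.Eq(tau_power(lhs[i]), tau_power(rhs[i])) for i in range(4)],
                 p, dict=True)[0]
print('exponents p_i:', p_sol)

# Step 2: with the exponents fixed the tau factors cancel; solve for the a_i
coeff_eqs = [sp.Eq(sp.simplify((lhs[i] - rhs[i]).subs(p_sol).subs(tau, 1)), 0)
             for i in range(4)]
balances = [b for b in sp.solve(coeff_eqs, a, dict=True)
            if all(v != 0 for v in b.values())]      # drop degenerate (zero-entry) balances
print('number of non-degenerate dominant balances:', len(balances))

# Step 3: Kovalevskaya matrix (Eq. (31)) and its eigenvalues at each balance
K = f_hat.jacobian(sp.Matrix(X)) - sp.diag(*[p_sol[pi] for pi in p])
for bal in balances:
    K_at = sp.simplify(K.subs({X[i]: bal[a[i]] for i in range(4)}))
    print([sp.simplify(ev) for ev in K_at.eigenvals()])
```

The printed exponents, balances, and Kovalevskaya eigenvalues are of the kind quoted for the truncations analysed in this section.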
The first truncation that we consider is given by \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ 3y^{3}/2u\\ 3z/2u\\ \rho^{3}/2u\end{pmatrix} \tag{43}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((1/2,-1/4,1/2,-1/4)\) from which we can get the dominant balances to be \[a_{1}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3}}, \sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{2}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3} },\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{3}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3} },-\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{4}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3} },\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{5}=\left(-\frac{\lambda z}{\sqrt{2}},\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3} },-\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{6}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3 }},-\sqrt{3z},\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{7}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4]{3 }},\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] \[a_{8}=\left(-\frac{\lambda z}{\sqrt{2}},-\frac{i\sqrt[4]{z}}{\sqrt{2}\sqrt[4] {3}},-\sqrt{3z},-\frac{\sqrt[4]{3}\sqrt[4]{z}}{\sqrt{2}}\right)\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}-\frac{1}{2}&\sqrt{6}\lambda y&0&0\\ 0&\frac{9y^{2}}{2u}+\frac{1}{4}&-\frac{3y^{3}}{2u^{2}}&0\\ 0&0&-\frac{3z^{2}}{2u^{2}}-\frac{1}{2}&0\\ 0&0&\frac{g^{3}}{2u^{2}}&\frac{1}{4}-\frac{3g^{2}}{2u}\end{array}\right) \tag{45}\] Using the dominant balances we introduced in (44), we can now plug them into the Kovalevskaya matrix (45) to get the eigenvalues to be \[r=(-1,-1/2,-1/2,-1/2) \tag{46}\] We note that all the other eigenvalues besides the initial -1 are also negative, which means that according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. Coupled with the fact that the dominant balances (44) have complex entries, this would mean that this truncation tells us that the singularities for this system may occur in non-finite time. 
The second truncation that we consider is given by \[\hat{f}=\begin{pmatrix}3x^{3}/2u\\ -\sqrt{\frac{3}{2}}\lambda xy\\ \rho^{2}z/2u\\ 3\rho y^{2}/2u\end{pmatrix} \tag{47}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \(p=(-1,-1,-1,-3/2)\) from which we can get the dominant balances to be \[a_{1}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{2}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{3}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{4}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{5}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{6}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{7}=\left(\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] \[a_{8}=\left(-\frac{\sqrt{\frac{2}{3}}}{\lambda},-\frac{1}{\lambda},-\frac{1}{ \lambda^{2}},-\frac{i\sqrt{\frac{2}{z}}}{\lambda^{2}}\right)\] We again see that the balances have complex entries 3 while the the Kovalevskaya matrix can be written as Footnote 3: At this point we would like to highlight that complex entries in \(\mathbf{\hat{a}}\) observed for the previous truncation and this one ( and which will be observed for Model II as well for a few truncations) are completely consistent with the fact that the system consists of expansion normalized variables which are real. As mentioned in section 2, complex entries for various \(\mathbf{a}\) suggest that the singularities will be non-finite time in nature and hence these quantities taking up complex values is consistent with the analysis as shown in [69]. Similar case has been for various cosmological systems (for example, see [71, 72]) \[.R=\left(\begin{array}{ccc}\frac{9x^{2}}{2u}+1&0&-\frac{3x^{3}}{2u^{2}}&0\\ -\sqrt{\frac{3}{2}}\lambda y&1-\sqrt{\frac{3}{2}}\lambda x&0&0\\ 0&0&1-\frac{\rho^{2}z}{2u^{2}}&\frac{\rho z}{u}\\ 0&\frac{3\rho y}{u}&-\frac{3\rho y^{2}}{2u^{2}}&\frac{3\rho^{2}}{2u}+\frac{3}{ 2}\end{array}\right) \tag{49}\] Using the dominant balances we introduced in (48), we can now plug them into the Kovalevskaya matrix (49) to get the eigenvalues to be \[r\sim(-1,1.27,-1.5,1.27) \tag{50}\] We note that as one of the other eigenvalues (-1.5) besides the initial -1 is also negative, according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. Coupled with the fact that the dominant balances (48) have complex entries, this would mean that this truncation tells us that the singularities for this system may occur in non-finite time. 
The third truncation that we consider is given by \[\hat{f}=\begin{pmatrix}-\rho^{2}x/2u\\ -\rho^{2}y/2u\\ 3x^{2}z/2u\\ -3\rho z/2u\end{pmatrix} \tag{51}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((1/2,1/2,1,1/4)\) from which we can get the dominant balances to be \[\begin{array}{c}a_{1}=\left(2\sqrt{6z},2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{2}=\left(-2\sqrt{6z},2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{3}=\left(2\sqrt{6z},-2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{4}=\left(2\sqrt{6z},2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{5}=\left(-2\sqrt{6z},-2\sqrt{6z},-6z,2\sqrt{3z}\right)\\ \\ a_{6}=\left(-2\sqrt{6z},2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{7}=\left(2\sqrt{6z},-2\sqrt{6z},-6z,-2\sqrt{3z}\right)\\ \\ a_{8}=\left(-2\sqrt{6z},-2\sqrt{6z},-6z,-2\sqrt{3z}\right)\end{array} \tag{52}\] We can now write the Kovalevskaya matrix to be \[R=\left(\begin{array}{cccc}-\frac{\rho^{2}}{2u}-\frac{1}{2}&0&\frac{\rho^{2}x}{2u^{2}}&-\frac{\rho x}{u}\\ 0&-\frac{\rho^{2}}{2u}-\frac{1}{2}&\frac{\rho^{2}y}{2u^{2}}&-\frac{\rho y}{u}\\ \frac{3xz}{u}&0&-\frac{3x^{2}z}{2u^{2}}-1&0\\ 0&0&\frac{3\rho z}{2u^{2}}&-\frac{3z}{2u}-\frac{1}{4}\end{array}\right) \tag{53}\] Using the dominant balances we introduced in (52), we can now plug them into the Kovalevskaya matrix (53) to get the eigenvalues to be \[r\sim(-1,1/2,-0.1,-0.1) \tag{54}\] We note that as two of the other eigenvalues besides the initial -1 are also negative, according to the Goriely-Hyde method the singularities of this system with regards to this truncation can occur only for a limited set of initial conditions. But in this case we see something which we did not in the previous two truncations; the dominant balances (52) do not have complex entries. This means that this truncation tells us that it is definitely possible to have singularities occurring in finite time for this particular model. While we can go on and evaluate more truncations, we find that in no other truncation would we see something which we have not observed already in these three truncations. Namely, there is no truncation for this system for which the eigenvalues besides -1 are all positive, and so it does seem that for this model the singularities will not be general and can only happen for a limited set of initial conditions.

### Model II

In this scenario, we consider the function \(G(T)\) to be of the form \(G(T)=T+\alpha T^{2}\), where \(\alpha\) is a constant [88]. This represents a slight extension beyond the Teleparallel Equivalent of General Relativity (TEGR), as \(\alpha=0\) corresponds to the TEGR model. For this particular \(G(T)\), Eqs. (26)-(27) can be expressed as follows: \[\rho_{de}=\frac{\dot{\phi}^{2}}{2}+V(\phi)-T(1+3T\alpha)\,, \tag{55}\] \[p_{de}=\frac{\dot{\phi}^{2}}{2}-V(\phi)+T(1+3T\alpha)+4\dot{H}(1+6T\alpha)\,. \tag{56}\] To establish an independent dynamical system, we introduce dimensionless variables defined as: \[x=\frac{\kappa\dot{\phi}}{\sqrt{6}H}\,,\qquad\quad y=\frac{\kappa\sqrt{V}}{\sqrt{3}H}\,,\qquad\quad z=-2\kappa^{2}\,,\qquad\quad u=-36H^{2}\alpha\kappa^{2}\,, \tag{57}\] \[\rho=\frac{\kappa\sqrt{\rho_{r}}}{\sqrt{3}H}\,,\qquad\quad\lambda=-\frac{V_{,\phi}(\phi)}{\kappa V(\phi)}\,,\qquad\quad\Theta=\frac{V(\phi)V_{,\phi\phi}}{V_{,\phi}(\phi)^{2}}\,.
\tag{58}\] Consequently, the corresponding dynamical system can be obtained as, \[\frac{dx}{dN}=-\frac{x\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1 \right)\right)}{2(2u+z-1)}-3x+\sqrt{\frac{3}{2}}\lambda y^{2}\,,\] \[\frac{dy}{dN}=-\frac{1}{2}y\left(\frac{\rho^{2}-3\left(u-x^{2}+y ^{2}+z-1\right)}{2u+z-1}+\sqrt{6}\lambda x\right)\,,\] \[\frac{du}{dN}=\frac{u\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1 \right)\right)}{2(2u+z-1)}\,,\] \[\frac{d\rho}{dN}=-\frac{\rho\left(\rho^{2}+5u+3x^{2}-3y^{2}+z-1 \right)}{2(2u+z-1)}\,,\] \[\frac{dz}{dN}=0\,,\] \[\frac{d\lambda}{dN}=-\sqrt{6}(\Theta-1)x\lambda^{2}\,.\] We again consider \(\lambda\) to be a constant here, which would mean that we are interested in exponential potentials. Furthermore, we assume that \(2u>>z-1\) which is again not hard to justify considering the definitions of these quantities in (57). By taking these considerations into account, the dynamical system takes the form \[\frac{dx}{dN} =-\frac{x\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right)\right)}{4u}-3 x+\sqrt{\frac{3}{2}}\lambda y^{2}\,, \tag{59}\] \[\frac{dy}{dN} =-\frac{1}{2}y\left(\frac{\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right) }{2u}+\sqrt{6}\lambda x\right)\,,\] (60) \[\frac{du}{dN} =\frac{u\left(\rho^{2}-3\left(u-x^{2}+y^{2}+z-1\right)\right)}{4u }\,,\] (61) \[\frac{d\rho}{dN} =-\frac{\rho\left(\rho^{2}+5u+3x^{2}-3y^{2}+z-1\right)}{4u}\,,\] (62) \[\frac{dz}{dN} =0\,,\] (63) \[\frac{d\lambda}{dN} =0\,. \tag{64}\] We can now start with the Goriely-Hyde analysis of this system, with the first truncation that we consider being \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ -y\rho^{2}/4u\\ 3y^{2}\\ -3\rho x^{2}/4u\end{pmatrix} \tag{65}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \((-1,-1,-1,-1)\) from which we can get the dominant balances to be \[a_{1}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac{2}{3} },\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{2}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac{2 }{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{3}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{4}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[. 
\tag{66}\] \[a_{5}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},\frac{\sqrt{2}}{\lambda}\right)\] \[a_{6}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[a_{7}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] \[a_{8}=\left(-\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{1}{\lambda}\sqrt{\frac {2}{3}},\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}}{\lambda}\right)\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}1&\sqrt{6}\lambda y&0&0\\ 0&1-\frac{\rho^{2}}{4u}&\frac{\rho^{2}y}{4u^{2}}&-\frac{\rho y}{2u}\\ 0&6y&1&0\\ -\frac{3\rho x}{2u}&0&\frac{3\rho x^{2}}{4u^{2}}&1-\frac{3x^{2}}{4u}\end{array}\right) \tag{67}\] Using the dominant balances we introduced in (66), we can now plug them into the Kovalevskaya matrix (67) to get the eigenvalues to be \[r\sim(-1,1,-2\sqrt{2},2\sqrt{2}) \tag{68}\] As we have one of the eigenvalues besides -1 also being negative, this truncation tells us that the singularities that could appear for this model would also only be occuring for a limited set of initial conditions for the variables. Furthermore given that the dominant balances (66) all have real entries then this would mean that the singularities only appear in finite time. The second truncation that we would be considering is given by \[\hat{f}=\begin{pmatrix}\sqrt{\frac{3}{2}}\lambda y^{2}\\ -y\rho^{2}/4u\\ 3y^{2}\\ -3\rho x^{2}/4u\end{pmatrix} \tag{69}\] Using the ansatz of the Goriely-Hyde method, we get the exponents to be \(p=(-1,-1,-1,-1)\) from which we can get the dominant balances to be \[\begin{split} a_{1}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}}, \frac{\sqrt{2}i}{\lambda},-\frac{1}{2\lambda^{2}},\frac{\sqrt{2}i}{\lambda} \right)\\ \\ a_{2}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},\frac{\sqrt{2}i}{\lambda}\right)\\ \\.\\ a_{3}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}i}{\lambda}\right)\\ \\ a_{4}=\left(\frac{1}{\lambda}\sqrt{\frac{2}{3}},-\frac{\sqrt{2}i}{ \lambda},-\frac{1}{2\lambda^{2}},-\frac{\sqrt{2}i}{\lambda}\right)\\ \end{split} \tag{70}\] We can now write the Kovalevskaya matrix to be \[.R=\left(\begin{array}{cccc}1-\frac{\rho^{2}}{4u}&0&\frac{\rho^{2}x}{4u^{2 }}&-\frac{\rho x}{2u}\\ -\sqrt{\frac{3}{2}}\lambda y&1-\sqrt{\frac{3}{2}}\lambda x&0&0\\ \frac{3x}{2}&0&1&0\\ 0&\frac{3\rho y}{2u}&-\frac{3\rho y^{2}}{4u^{2}}&\frac{3y^{2}}{4u}+1\end{array}\right) \tag{71}\] Using the dominant balances we introduced in (70), we can now plug them into the Kovalevskaya matrix (71) to get the eigenvalues to be \[r\sim(-1,1.6,3.7,-0.2) \tag{72}\] We again see that there are eigenvalues besides -1 which are negative, which again suggests that the model may not have general singularities. Furthermore this truncation also suggests that singularities could take place in non-finite time as shown by the complex entries in the dominant balance (70). While we can again go on for more truncations, what we have found out is that the other truncations do not offer anything new other than what we have seen so far. Namely, no truncation suggests that the model can allow for general singularities and so we are not going to be evaluating for more truncations here. 
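For completeness, the same sketch given after Section 4's opening can be pointed at Model II by swapping only the truncated right-hand side; the fragment below is a hypothetical reuse of the symbols defined in that Model I sketch.

```python
# Reusing the symbols (x, y, u, rho, lam, zc) and steps 1-3 from the Model I
# sketch above; only the truncated right-hand side changes (Eq. (65)).
f_hat_model2 = sp.Matrix([sp.sqrt(sp.Rational(3, 2)) * lam * y**2,
                          -y * rho**2 / (4 * u),
                          3 * y**2,
                          -3 * rho * x**2 / (4 * u)])
# The power balance of step 1 then fixes p = (-1, -1, -1, -1), as stated in the
# text, after which the dominant balances and Kovalevskaya eigenvalues follow.
```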
## 5 Physical classification of the singularities

Until now, we have discussed the singularity structure within the dark energy scenario from a dynamical perspective. However, it is insufficient to merely acknowledge the existence of singularities in this system from a physical standpoint. Thus, it becomes necessary to appropriately classify the potential types of singularities that could occur in this model. Various types of physical singularities for cosmology at a specific time \(t=t_{s}\), where \(t_{s}\) represents the occurrence of the singularities, can be classified as follows [57, 89]:

* Type I ("Big Rip"): In this case, the scale factor \(a\), effective energy density \(\rho_{\rm eff}\), and effective pressure density \(p_{\rm eff}\) diverge.
* Type II ("Sudden/Quiescent singularity"): In this case, \(p_{\rm eff}\) diverges, as well as the derivatives of the scale factor beyond the second derivative.
* Type III ("Big Freeze"): In this case, the derivative of the scale factor from the first derivative onwards diverges.
* Type IV ("Generalized sudden singularities"): In this case, the derivative of the scale factor diverges from a derivative higher than the second.

Among these classifications, Type I singularities are considered strong singularities since they have the ability to distort finite objects, while singularities of Type II, Type III, and Type IV are regarded as weak singularities as they cannot be perceived as either the beginning or the end of the universe. Although there are other minor types of singularities, such as Type V singularities or "w" singularities, we will focus solely on Type I to Type IV singularities here. The most general form of the Hubble parameter for investigating singularities within the aforementioned classified types is expressed as [72]: \[H(t)=f_{1}(t)+f_{2}(t)(t-t_{s})^{\epsilon} \tag{73}\] Here, \(f_{1}(t)\) and \(f_{2}(t)\) are assumed to be nonzero regular functions at the time of the singularity, and similar conditions apply to their derivatives up to the second order. Additionally, \(\epsilon\) is a real number. It is not mandatory for the Hubble parameter (73) to be a solution to the field equations; however, we will consider this case and explore the implications of this assumption on the singularity structure based on our dynamical analysis. First, we observe that none of the variables \(x\), \(y\), or \(z\) as defined in (35) and (57) can ever become singular for any cosmic time value. The singularities that can occur considering the Hubble parameter as defined in (73) are as follows:

* For \(\epsilon<-1\), a big rip singularity occurs.
* For \(-1<\epsilon<0\), a Type III singularity occurs.
* For \(0<\epsilon<1\), a Type II singularity occurs.
* For \(\epsilon>1\), a Type IV singularity occurs.

Another ansatz useful for classifying singularities was introduced in [61] whereby the scale factor was written as: \[a(t)=g(t)(t-t_{s})^{\alpha}+f(t) \tag{74}\] where \(g(t)\) and \(f(t)\) and all their higher-order derivatives with respect to cosmic time are smooth functions of the cosmic time. For this ansatz, according to the values of the exponent \(\alpha\), one can have the following singularities:

* For \(\alpha<0\), a Type I singularity occurs.
* For \(0<\alpha<1\), a Type III singularity develops.
* For \(1<\alpha<2\), a Type II singularity occurs.
* For \(\alpha>2\), a Type IV singularity occurs.
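The classification rules attached to the two ansatze (73) and (74) can be collected into a small helper; this is a plain restatement of the two bullet lists above (boundary values of the exponents, to which the text assigns no type, are left unclassified).

```python
def singularity_type_from_hubble_exponent(eps):
    """Type implied by H(t) = f1 + f2 (t - t_s)**eps, Eq. (73)."""
    if eps < -1:
        return "Type I (Big Rip)"
    if -1 < eps < 0:
        return "Type III (Big Freeze)"
    if 0 < eps < 1:
        return "Type II (Sudden)"
    if eps > 1:
        return "Type IV (Generalized sudden)"
    return "unclassified boundary value"

def singularity_type_from_scale_factor_exponent(alpha):
    """Type implied by a(t) = g(t) (t - t_s)**alpha + f(t), Eq. (74)."""
    if alpha < 0:
        return "Type I (Big Rip)"
    if 0 < alpha < 1:
        return "Type III (Big Freeze)"
    if 1 < alpha < 2:
        return "Type II (Sudden)"
    if alpha > 2:
        return "Type IV (Generalized sudden)"
    return "unclassified boundary value"

print(singularity_type_from_hubble_exponent(2.5))   # Type IV (Generalized sudden)
```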
Again, it is not mandatory for the scale factor in equation (74) to be a solution to the field equations, but we would like to consider this and equation (73) in order to gain a well-motivated understanding of the types of cosmological singularities we can encounter in the various models we have discussed so far. To proceed further, we need to express the expansion-normalized variables that we defined for both models in terms of the Hubble parameter alone. To do this, we realize that we need to express the potential and the derivative of the field in each case in terms of the Hubble parameter, as these are the quantities on which the expansion-normalized variables really depend in both scenarios (that is, we need to represent the \(x\) and \(y\) variables of both models in terms of the Hubble parameter). For the model \(G(T)=\beta T\ln\left(\frac{T}{T_{0}}\right)\) (32), we have \[\dot{\phi}_{\beta}^{2}=\frac{2\dot{H}}{\kappa^{2}}-\rho_{m}-\frac{4}{3}\rho_{r}-4\dot{H}\Biggl{[}\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+3\beta\Biggr{]} \tag{75}\] while the potential for this case can be written as \[V_{\beta}=\frac{\dot{H}\left(6\beta+2\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+\frac{1}{\kappa^{2}}\right)-\frac{\rho_{m}}{2}+\rho_{r}}{H^{2}}+3\left(4\beta+2\beta\ln\left(\frac{6H^{2}}{T_{0}}\right)+\frac{1}{\kappa^{2}}\right) \tag{76}\] For the model \(G(T)=T+\alpha T^{2}\), the same quantities are \[\dot{\phi}_{\alpha}^{2}=\dot{H}\left(-24\alpha H^{2}-\frac{2}{\kappa^{2}}-4\right)-\rho_{m}-4\rho_{r} \tag{77}\] \[V_{\alpha}=H^{2}\left(12\alpha\dot{H}+\frac{3}{\kappa^{2}}+1\right)+\left(\frac{1}{\kappa^{2}}+2\right)\dot{H}+3\alpha H^{4}-\frac{\rho_{m}}{2}+\rho_{r} \tag{78}\] Using these, one can express the dynamical variables used in the Goriely-Hyde analysis of both models ((35), (36), (57), (58)) completely in terms of the Hubble parameter (we will not write the variables out explicitly here, as they have quite long expressions), and we can then use both ansatze (73)-(74) to see under what conditions the variables blow up. Recall that we do not want the dynamical variables to blow up; the values of the exponents for which they remain finite tell us which singularities are possible for these models. The interesting conclusion that emerges when one puts both ansatze into the dynamical variables is that only Type IV singularities are possible for both models. None of Type I, Type II or Type III singularities can occur for either model for either of the ansatze (73)-(74), while Type IV singularities do take place for both models, for either ansatz. This is quite an interesting behaviour which, to the best of our knowledge, has so far only been shown for \(f(T,\phi)\) theories: only Type IV singularities are observed for both of the models considered. This leads one to speculate that \(f(T,\phi)\) gravity may be better suited for cosmology than some of its other modified-gravity counterparts, as the theory admits only weak singularities. Furthermore, given the analysis from the Goriely-Hyde procedure, one is led to conclude that such singularities can only occur for a limited set of initial conditions and may occur in finite or even non-finite time.

## 6 Concluding remarks

In this paper, we have considered a well-studied formulation of teleparallel dark energy in the form of \(f(T,\phi)\) gravity, where the scalar field drives the expansion of the universe.
We considered two particular well-studied models of this theory and probed cosmological singularities for both scenarios. For this endeavor, we used a method pioneered by the works of Odintsov in recent years, in which we applied the Goriely-Hyde procedure to the various dynamical systems by which the cosmological equations of these models could be described. This allowed us to make predictions about whether singularities in these scenarios would be strongly dependent on initial physical conditions and whether they could happen in finite or non-finite time. After this, we employed two very well-motivated ansatze for the Hubble parameter and the scale factor to reach the conclusion that one can only have Type IV singularities for both of the models considered in our work, and that only for a limited set of initial conditions. This work points towards the conclusion that \(f(T,\phi)\) theories may only allow for weak cosmological singularities, which may make them better placed than some of the other modified-gravity-based dark energy regimes, which allow for more singularities, including those of the stronger types.

## Acknowledgements

The authors would like to thank Sergei Odintsov for very helpful discussions. The research by M.K. was carried out in Southern Federal University with financial support of the Ministry of Science and Higher Education of the Russian Federation (State contract GZ0110/23-10-IF). RCN thanks the CNPq for partial financial support under the project No. 304306/2022-3. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
The quest to unravel the mystery of dark energy has attracted considerable attention in cosmology. While conventional approaches centred on the cosmological constant have been widely explored, scalar-field-based models and theories that modify gravity open up appealing avenues of research. Among these, \(f(T,\phi)\) models in particular have drawn attention as a means of understanding dark energy within the teleparallel framework. In this work, we investigate two well-studied models of teleparallel dark energy and examine the existence of cosmological singularities in these scenarios. Using the Goriely-Hyde procedure, we study the dynamical systems governing the cosmological equations of these models. Our analysis shows that both models exhibit Type IV singularities, but only for a limited range of initial conditions. This result suggests that teleparallel cosmological models may be better placed than other modified-gravity-based approaches to dark energy.
2309.13953
Jupiter's equatorial quasi-quadrennial oscillation forced by internal thermal forcing
Observations have shown that there exists downward propagation of alternating westward/eastward jets in Jupiter's equatorial stratosphere, with a quasi-period between four and six years. This phenomenon is generally called the quasi-quadrennial oscillation (QQO). Here, we simulate the QQO by injecting isotropic small-scale thermal disturbances into a three-dimensional general circulation model of Jupiter. It is found that the internal thermal disturbance is able to excite a wealth of waves that generate the equatorial QQO and multiple jet streams at middle and high latitudes of both hemispheres. The dominant wave mode in generating the QQO-like oscillation is that with a zonal wavenumber of 10. Inhomogeneous evolution of potential vorticity favors the emergence of the off-equatorial zonal jets. The off-equatorial jets migrate to the equator, strengthen the deep equatorial jets, and result in the prolonging of the QQO-like oscillations.
Yuchen Lian, Xianyu Tan, Yongyun Hu
2023-09-25T08:38:39
http://arxiv.org/abs/2309.13953v1
# Jupiter's equatorial quasi-quadrennial oscillation forced by internal thermal forcing ###### Abstract Observations have shown that there exists downward propagation of alternating westward/eastward jets in Jupiter's equatorial stratosphere, with a quasi-period between four and six years. This phenomenon is generally called the quasi-quadrennial oscillation (QQO). Here, we simulate the QQO by injecting isotropic small-scale thermal disturbances into a three-dimensional general circulation model of Jupiter. It is found that the internal thermal disturbance is able to excite a wealth of waves that generate the equatorial QQO and multiple jet streams at middle and high latitudes of both hemispheres. The dominant wave mode in generating the QQO-like oscillation is that with a zonal wavenumber of 10. Inhomogeneous evolution of potential vorticity favors the emergence of the off-equatorial zonal jets. The off-equatorial jets migrate to the equator, strengthen the deep equatorial jets, and result in the prolonging of the QQO-like oscillations. Jupiter ---- Atmosphere Circulation 0000-0002-4000-0000]Yuchen Lian 0000-0002-3870-3870]Xianyu Tan 0000-0002-4880-7880]Yongyun Hu ## 1 Introduction Infrared imaging observations showed that there existed periodic changes of temperature anomalies in Jupiter's stratosphere in the past decades (e.g. Orton et al., 1991; Friedson, 1999; Flasar et al., 2004; Simon-Miller et al., 2006; Giles et al., 2020; Antunano et al., 2021). The equatorial temperature anomalies alternate between cold and warm states of \(\pm 5\) K, with a quasi-period between 4 and 6 years. Associated with the temperature anomalies are vertically stacked eastward and westward zonal jets, with velocities variations of about \(\pm 100\) m s\({}^{-1}\)(Orton et al., 1991, 1994). This phenomenon is termed the quasi-quadrennial oscillation (QQO) (Leovy et al., 1991). Additionally, the stratospheric ethane's asymmetry with longitude and the equator may indicate the presence of QQO's circulation cells (Fletcher et al., 2016, 2017). Leovy et al. (1991) pointed out that the QQO closely resembles the Earth's quasi-biennial oscillation (QBO), in which equatorial zonal winds in the lower stratosphere oscillates between westward and eastward with a quasi-period of about 28 months (Baldwin et al., 2001). Theoretical and numerical studies demonstrated that the QBO is caused by the selective absorption of tropical waves (Lindzen and Holton, 1968; Lindzen, 1970, 1971, 1972). It was shown that eastward (westward) equatorial gravity waves (both Kelvin and inertia-gravity waves) tend to be dissipated and absorbed when they encounter their critical layers, and that the deposited eddy momentum fluxes would accelerate the jets below the critical layers, while the westward (eastward) waves can propagate transparently in the mean flow. Dunkerton (1985, 1997) used a two-dimensional model in latitude and pressure and showed that the QBO can be driven both by equatorial trapped waves and small-scale gravity waves. It implies that temperature fluctuations in Jupiter's equatorial stratosphere could be caused by a spectrum of breaking gravity waves (Young et al., 2005). Evidence shows vigorous wave activities in Jupiter's atmosphere, especially in the equatorial region (e.g. Rogers and Mettig, 2008; Asay-Davis et al., 2011; Simon-Miller et al., 2012). 
Most of these wave features appear within \(\pm 5^{\circ}\) of latitude (Simon-Miller et al., 2015; Orton et al., 2020), and they exhibit diversity and complexity with various zonal wavenumbers, including waves with wavenumber between 17 and 70 (Harringtona et al., 1996; Cosentino et al., 2017). Simon-Miller et al. (2012, 2015) showed that these waves consist of wavenumbers \(\geq 75\) and and have a phase speed of \(101\pm 3\) m s\({}^{-1}\), which are likely inertia-gravity waves. The thermal wave patterns with wavenumbers 2 to 15, observed by _Voyager_, have been identified as Rossby waves (Deming et al., 1997). Rossby wave activities may generate dry downdrafts, participating in the formation of the regular arrays of cloud plumes (Friedson, 2005; Garcia-Melendo et al., 2011). The wave patterns also show an organization of zonal wavenumber 11 to 13 (Allison, 1990). The properties of waves in Jupiter's atmosphere have been observed from _Galileo_ and _New Horizons_, with a vertical wavelength of \(\sim 50\) km (Allison & Atkinson, 2001; Arregi et al., 2009), horizontal wavelengths of \(\sim 10^{3}\) km (Reuter et al., 2007) and phase speed of \(\sim 200\) m s\({}^{-1}\)(Allison & Atkinson, 2001). Sub-grid wave parameterization is widely utilized to simulate the QQO in large-scale models. Friedson (1999) assumed that the equatorial waves are a combination of vertically propagating waves with eastward and westward velocities relative to the mean flow. They invoked a wave parameterization by choosing a flat gravity wave spectrum limited by maximum and minimum phase speeds to generate the momentum fluxes over waveguides. Li & Read (2000) followed Friedson (1999) and their models established a QQO pattern with a period of \(\sim 45\) months. Cosentino et al. (2017, 2020) argued that the observed QQO signal extends up to 0.1 mbars which is lower than the top pressure in Friedson (1999). They utilized a stochastic gravity drag parameterization and reproduced the QQO thermal structures, which are similar to the structures observed by _Cassini_. However, some aspects of Jupiter's QQO have not been explored. Many previous studies used two-dimensional models. These models lack 3D wave-flow interactions, in which wave generation and propagation and their interactions with the mean flow are parameterized and usually tuned to match certain aspects of observations. In contrast, 3D models have self-consistent wave-mean-flow interactions and are an alternative tool to elucidate the nature of the QQO. Secondly, several interruptions of Jupiter's QQO have been observed. For example, the period of QQO is 5.7 years from 1980 to 1990, and the period of QQO is 3.9 years from 1996 to 2006 (Antunano et al., 2021). In 2017, a lag occurred with the downward signal of QQO, delaying the phase of the QQO about a year (Giles et al., 2020). These changes may be related to meteorological phenomena in the deep troposphere between 0.5 bar and 4 bar (Anderson et al., 2018). However, the previous studies failed to simulate the variation of the QQO's period. We hope to provide some explanations for the variation of the QQO's period. Here, we study Jupiter's QQO using a global general circulation model. Atmospheric waves are generated by isotropic internal thermal perturbations near the radiative-convective boundary layers, instead of being parameterized. The wave-mean-flow interactions are self-consistently solved in our 3D model. The paper is organized as follows. The 3D model is described in Section 2. 
In Section 3, we present the simulation results and address the associated mechanisms. Conclusions and discussion are in Section 4. ## 2 Model The model used here is the MITgcm (Adcroft et al., 2004). A Newtonian cooling scheme is applied to the thermodynamics equation, which relaxes the temperature to a reference temperature \(T_{\rm ref}\) over a radiative timescale \(\tau_{\rm rad}\). Newtonian cooling scheme has been widely used in previous studies for Jupiter (e.g. Lian & Showman, 2010) and extrasolar giant planets (e.g. Liu & Showman, 2013; Mayne et al., 2014). Both reference temperature \(T_{\rm ref}\) and radiative timescale \(\tau_{\rm rad}\) profiles are from Li et al. (2018). The vertical dimension of the model extends from 10 bars at the bottom to \(10^{-4}\) bars at the top. The profiles in Li et al. (2018) are extended to 10 bars, by assuming a dry adiabatic temperature profile and a constant radiative timescale of \(2\times 10^{8}\) s at the bottom (Figure 1). The heating terms in the thermodynamic equation are therefore written as: \[\frac{q}{c_{p}}=-\frac{T(\lambda,\phi,p,t)-T_{\rm eq}(\phi,p)}{\tau_{\rm rad}} +S(\lambda,\phi,p,t+\delta t) \tag{1}\] where \(q\) is the specific heating rate (W kg\({}^{-1}\)), \(c_{p}\) is the specific heat, \(\lambda\) is longitude, \(\phi\) is latitude, \(p\) is pressure, and \(t\) is time. The first term on the right-hand side is the Newtonian cooling term that represents radiative damping. We define the equilibrium temperature \(T_{\rm eq}=\theta_{eq}(p/p_{0})^{\kappa}\) and the equilibrium potential temperature as \[\theta_{eq}(\phi,p)=\theta_{\rm ref}(p)+\delta\theta(\phi,p) \tag{2}\] where \(p_{0}=10\) bars is the standard reference pressure, \(\kappa=R/c_{p}\), \(\theta_{\rm ref}\) is the reference potential temperature, and \(\delta\theta\) represents the latitudinal difference in equilibrium potential temperature corresponding to the solar irradiation. In our simulations, the hot equator and cold poles forced by solar insolation are parameterized as \(\delta\theta=\Delta\theta(p)\cos\phi\). \(\Delta\theta\) increases from 0 K at 3 bars to 10 K at \(10^{-4}\) bar, roughly consistent with observations (Simon-Miller et al., 2006). The heating/cooling rate \(S(\lambda,\phi,p,t+\delta t)\) in Equation (1) represents the internal forcing due to interior convective perturbations, mimicking overshooting and mixing effects across the radiative-convective boundary at the bottom of stratified layers. We assume that convective perturbations are spatially isotropic and random in time. This scheme is originally developed in Showman et al. (2019) for brown dwarfs and giant planets. Tan (2022) applied this scheme to updated brown dwarf models, and Lian et al. (2022) applied it to close-in gas giant planets. Here we adopt the scheme to the Jovian atmosphere. A spatial perturbation pattern \(F\) is parameterized by globally isotropic heat sources at isobaric surfaces, which is numerically made by integrating all the spatial patterns of spherical harmonic functions with the wavenumber between 1 and \(n_{f}\): \[F=f_{\rm amp}(p)\sum_{m=1}^{n_{f}}N_{n_{f}}^{m}(\sin\phi)\cos[m(\lambda+\psi_{ m})] \tag{3}\] where \(N_{n_{f}}^{m}(\sin\phi)\) is the normalized associated Legendre polynomials, \(m\) and \(\psi_{m}\) are the zonal wavenumber and a randomly chosen longitude phase, respectively. \(f_{\rm amp}(p)\) is an internal forcing amplitude with unit of K s\({}^{-1}\) with a vertical variation. 
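As an illustration of Eq. (3), the following NumPy/SciPy sketch builds the horizontally isotropic pattern by summing normalized associated Legendre functions of fixed degree \(n_{f}\) over zonal wavenumbers \(m=1,\dots,n_{f}\), each with a random longitude phase. The grid sizes, random seed, helper name, and the particular orthonormalization convention are illustrative assumptions rather than the model's actual implementation.

```python
import numpy as np
from scipy.special import lpmv, gammaln

def forcing_pattern(n_f, famp, nlon=256, nlat=128, seed=0):
    """Isotropic pattern of Eq. (3) on a lat-lon grid at one pressure level (a sketch)."""
    rng = np.random.default_rng(seed)
    lon = np.linspace(0.0, 2.0 * np.pi, nlon, endpoint=False)
    lat = np.linspace(-np.pi / 2, np.pi / 2, nlat)
    LON, LAT = np.meshgrid(lon, lat)            # shapes (nlat, nlon)
    F = np.zeros_like(LON)
    for m in range(1, n_f + 1):
        # One common orthonormalization: N_n^m = sqrt((2n+1)/(4 pi) * (n-m)!/(n+m)!),
        # evaluated in log space to avoid overflow of the factorials.
        lognorm = 0.5 * (np.log((2 * n_f + 1) / (4 * np.pi))
                         + gammaln(n_f - m + 1) - gammaln(n_f + m + 1))
        psi_m = rng.uniform(0.0, 2.0 * np.pi)   # random longitude phase for this m
        F += np.exp(lognorm) * lpmv(m, n_f, np.sin(LAT)) * np.cos(m * (LON + psi_m))
    return famp * F                             # K s^-1; the pressure taper is applied elsewhere

S_pattern = forcing_pattern(n_f=10, famp=4e-6)
print(S_pattern.shape, S_pattern.std())
```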
\(f_{\rm amp}(p)\) exponentially decreases with decreasing pressure and equals zero at the pressure \(p_{\rm forctop}=1\) bar. Figure 2 shows an example of spatial patterns of \(f_{\rm amp}(p)\) and \(F(\lambda,\phi,p)\). The heating/cooling rate \(S\) evolves with time as a Markov process: \[S(\lambda,\phi,p,t+\delta t)=\sqrt{1-\alpha^{2}}S(\lambda,\phi,p,t)+\alpha F(\lambda,\phi,p) \tag{4}\] where \(\alpha\) is a de-correlation factor and equals \(\delta t/\tau_{\rm s}\), \(\delta t\) is the dynamical time step and \(\tau_{\rm s}\) is the storm timescale of the internal forcing, representing the large-scale convective temporal organization.

Figure 1: Reference temperature (a) and radiative timescale profiles (b) used in our simulations.

Figure 2: (a) The vertical structure of the thermal perturbation's amplitude \(f_{\rm amp}\). (b) An example of the spatial pattern of \(F(\lambda,\phi,p)\) in one layer, showing the horizontally isotropic pattern, here for \(n_{f}=10\).

We are not able to accurately estimate the value of the storm timescale. However, observations provide us with some information. The thermal wave amplitude lasts \(10^{5}\) seconds on Jupiter (Deming et al., 1997). Therefore, we set \(\tau_{\rm s}\) to \(10^{5}\) s in our canonical models and change \(\tau_{\rm s}\) from \(10^{6}\) s to \(10^{3}\) s in sensitivity tests. We have to choose a forcing amplitude appropriate for Jupiter. The convective overshooting and mixing near the radiative-convective boundary is expected to cause a temperature perturbation with an amplitude of \(\Delta T_{\rm internal}\). We aim to generate the same amount of \(\Delta T_{\rm internal}\sim 30\) K at 10 bars with theoretical \(\tau_{\rm rad}\sim pc_{p}/4g\sigma T^{3}\) in our global-scale GCM by our forcing scheme, where \(g\) is gravity and \(\sigma\) is the Stefan-Boltzmann constant. This can be estimated using the analysis in Showman et al. (2019), in which \(\Delta T_{\rm internal}\) is written as: \[\frac{\Delta T_{\rm internal}}{\tau_{\rm rad}}\sim f_{\rm amp}\sqrt{n_{f}} \tag{5}\] \(f_{\rm amp}\) and \(\sqrt{n_{f}}\) should be restricted to obey the relationship in Equation 5 in order to keep the temperature perturbation \(\Delta T_{\rm internal}\) the same in different cases. \(n_{f}\) is changed from 5 to 40 in different simulations to test the dominant wavenumber. In principle, we expect that the amplitude of internal forcing \(f_{\rm amp}\) in our simulations is related to the real internal heat fluxes. Jupiter has \(\sim 6\) W m\({}^{-2}\) internal heat flux (Ingersoll, 1990; Guillot, 2005; Li et al., 2018), so we estimate that the thermal amplitude \(n_{f}=10\) yields \(f_{\rm amp}\sim 4\times 10^{-6}\) K s\({}^{-1}\), and \(n_{f}=40\) yields \(f_{\rm amp}\sim 2\times 10^{-6}\) K s\({}^{-1}\). The internal forcing may be sufficiently strong to trigger super-adiabatic regions, and we include a dry convective adjustment scheme to instantaneously remove the super-adiabatic layers while conserving enthalpy. The stability criterion for a pair of adjacent stable layers is \(T_{n+1}-T_{n}<C1_{n+1}(T_{n+1}+T_{n})\), where \(n\) is the vertical index, \(T\) is the temperature, and \(C1_{n+1}=\kappa(p_{n+1}-p_{n})/2p_{(n+1)/2}\). If any pair of adjacent layers is unstable, the temperature is adjusted to be convectively neutral while conserving total enthalpy. This process is repeated until the whole column satisfies the stability criterion (Manabe and Strickler, 1964).
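A minimal sketch of the Markov evolution in Eq. (4) above follows, reusing the forcing-pattern sketch given earlier (or any other pattern generator). The text specifies only the de-correlation factor \(\alpha=\delta t/\tau_{\rm s}\); re-drawing the target pattern \(F\) with fresh random phases at every step is an assumption of this sketch, and the grid size and helper names are illustrative.

```python
import numpy as np

def step_forcing(S, make_pattern, dt, tau_s):
    """One Markov step of Eq. (4): S(t+dt) = sqrt(1 - alpha^2) S(t) + alpha F."""
    alpha = dt / tau_s
    F = make_pattern()                          # assumption: new random phases each step
    return np.sqrt(1.0 - alpha**2) * S + alpha * F

dt, tau_s = 100.0, 1.0e5                        # s, matching the canonical runs
S = np.zeros((32, 64))                          # small (nlat, nlon) grid for the example
for _ in range(1000):                           # 1000 steps of 100 s = one storm timescale
    S = step_forcing(S, lambda: forcing_pattern(10, 4e-6, nlon=64, nlat=32, seed=None),
                     dt, tau_s)
print(S.std())                                  # forcing amplitude after one storm timescale
```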
We apply a Ryleigh drag term \(-\mathbf{u}/\tau_{\rm drag}\) into the dynamic equations to mimic the effects of angular momentum mixing with the planetary interior where the magnetohydrodynamic drag is important (Liu and Schneider, 2010), where \(\mathbf{u}\) is the horizontal velocity. The drag is linearly decreased with decreasing pressure. That is, \(\tau_{\rm drag}\) increases from a certain timescale (e.g. \(10^{7}\), \(10^{6}\) s) at the bottom linearly with decreasing pressure to infinite at a certain pressure \(p_{\rm drag,top}=4\) bars, after which the atmosphere is drag-free at pressures \(\leq p_{\rm drag,top}\). In order to diminish wave reflection at the upper boundary, we add a "sponge" layer for dissipation. This dissipation term is introduced as \(-\nu\mathbf{u}\), and \(\nu\) is the same as the damping coefficient form in Dowling et al. (1998): \[\nu(k)=\frac{1}{5\delta t}\frac{1}{2}(1-\cos(\pi\frac{k_{sp}+1-k}{k_{sp}})), \tag{6}\] where \(k\) is the vertical index (\(k=1\) at bottom) and \(k_{\rm sp}=10\) is the number of sponge layers. The model has 100 vertical layers. The rotation period is set to be 9.84 hours, and planetary radius \(R_{J}=7.14\times 10^{7}\) m, \(c_{p}=13000\) J kg\({}^{-1}\) K\({}^{-1}\) and \(\kappa=R/c_{p}=2/7\) appropriate to a hydrogen-dominated atmosphere. The gravity is set to 24.79 m s\({}^{-2}\). We adopt a horizontal resolution of C128 in our cubed-sphere grid in MITgcm, corresponding to horizontal resolution 128 \(\times\) 128 for each cube face (\(0.7^{\circ}\) per grid in latitude and longitude). The timestep is 100 seconds for most cases. All simulations integrated until balance, approximately 7000-10000 Earth days. ## 3 Results ### Basic Flow Regime Figure 3 shows temperature and zonal wind patterns with different bottom drag timescales. The drag timescales decrease from \(10^{7}\) s in Figure 3a, d to \(10^{5}\) s in Figure 3c, f (the drag effects increase in strength from top to bottom). Figure 4 shows temperature and zonal wind patterns with decreasing storm timescales from \(10^{6}\) s in Figure 4a, e to \(10^{3}\) s in Figure 4d, f. The key result is that the isotropic internal forcing results in zonal jets, and that the zonal jets are sensitive to drag and storm timescales. This is qualitatively in agreement with previous results in the context of isolated brown dwarfs and giant planets that lack equator-to-pole stellar irradiation difference (Showman et al., 2019; Tan, 2022). The left panels of Figure 3 and 4 show a temperature anomaly in the subtropics of opposite sign to that at the equator around 60 mbars, with a colder equator about 100 K and warmer subtropics and poles \(\sim 160\) K, which is caused by overturning circulation that is generated by the equatorial wave accelerations. The same meridional temperature variation has been found in the QBO simulations on Earth (e.g. Dickinson, 1968; Plumb and Bell, 1982; Baldwin et al., 2001). If the heating is homogeneous around the latitude circle, the zonal-mean momentum equation is: \[\frac{\partial\bar{u}}{\partial t}-f\bar{v}=a_{x} \tag{7}\] \(a_{x}\) can be the acceleration momentum source, which comes from the eddy-momentum transports or the friction. Coriolis force is weak near the equator, so the second term is negligible, and all the accelerations work on the background flow, leading to the time evolution of the zonal jets. However, Coriolis force increases poleward, becoming the main part to balance the momentum acceleration. 
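As a rough, illustrative use of Eq. (7) near the equator (where the \(f\bar{v}\) term is negligible), one can estimate the magnitude of the acceleration \(a_{x}\) required to reverse the observed QQO jets; the \(\pm 100\) m s\({}^{-1}\) swing and 4-6 yr period are the observed values quoted in the introduction, and the numbers are order-of-magnitude only.

```python
year = 365.25 * 24 * 3600.0
delta_u = 200.0                                  # m/s, peak-to-peak swing (-100 to +100 m/s)
for period_yr in (4.0, 6.0):
    a_x = delta_u / (0.5 * period_yr * year)     # mean acceleration over half a QQO cycle
    print(f"period {period_yr:.0f} yr -> a_x ~ {a_x:.1e} m s^-2")
# roughly 2e-6 to 3e-6 m s^-2 must be supplied by the eddy-momentum convergence
```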
An overturning circulation is driven by the eddy accelerations, forcing the air to ascend under the eastward jets and descend above the westward jets. A colder equator appears in the cyclonic shear zone due to the adiabatic cooling by ascending motions, and a warmer equator appears in the anti-cyclonic shear zone. In brief, upwelling with an equatorial cold anomaly accompanies the eastward equatorial wind, while downwelling with an equatorial warm anomaly accompanies the westward equatorial wind (Dickinson, 1968). Figure 5a shows a schematic for the mechanism causing this meridional temperature variation. Figure 5b shows the matching of the eastward wind (the blue dashed-line circle), cold center (the black solid-line circle), and the overturning circulation, which are shown by the stream function colormap. We have ruled out Kelvin-Helmholtz instabilities of the vertical wind shears as the wave source. We confirm that the Richardson number \(R_{i}=N^{2}/(\partial u/\partial z)^{2}\) is far greater than 1 in our simulations, where \(N\) is the Brunt-Väisälä frequency. Meridional propagation of Rossby waves helps generate spatially inhomogeneous zonal jets by breaking the absolute vorticities into PV staircases (Dritschel & McIntyre, 2008; Dunkerton & Scott, 2008). The absolute vorticity is the sum of the relative vorticity and the Coriolis parameter, providing an approximation of potential vorticity (PV). A particularly useful framework for analyzing angular momentum transport to the mean flow by atmospheric waves was developed by Eliassen & Palm (1961), which involves analyzing the so-called Eliassen-Palm (EP) flux.

Figure 3: Temperature (K, left panels) and zonal winds (m s\({}^{-1}\), right panels) at a pressure of 60 mbars in simulations with different bottom drag. These are snapshots at 10000 simulation days. The drag timescales are 10\({}^{7}\) s (a), 10\({}^{6}\) s (b), and 10\({}^{5}\) s (c). The forcing amplitude is 4 \(\times\) 10\({}^{-6}\) K s\({}^{-1}\), storm timescale \(\tau_{\rm s}\) is 10\({}^{5}\) s, forcing wavenumber \(n_{f}\) = 10, and other parameters are described in the text.

The EP flux equation in the quasi-geostrophic balance reads (e.g. Andrews et al., 1987): \[\mathbf{\bar{F}}=\bar{F}_{y}+\bar{F}_{z}=-\rho\overline{u^{\prime}v^{\prime}}\mathbf{j}+\rho f_{0}\frac{\overline{v^{\prime}\theta^{\prime}}}{\bar{\theta}_{z}}\mathbf{k} \tag{8}\] Primes denote the residuals from the zonal average (overbars). The acceleration of the mean flow corresponds to the convergence of the EP flux, \(\nabla\cdot\mathbf{\bar{F}}=\frac{\partial\bar{F}_{y}}{\partial y}+\frac{\partial\bar{F}_{z}}{\partial z}\). We also have \(\overline{v^{\prime}q^{\prime}}=\nabla\cdot\mathbf{\bar{F}}\), where \(q\) is PV, \(q^{\prime}\) is the perturbation relative to the zonal-mean part \(\bar{q}\) of the total PV, and \(\overline{v^{\prime}q^{\prime}}\) is the mean meridional flux of PV. The divergence of the EP flux therefore corresponds to the meridional flux of PV.

Figure 4: Temperature (K, left panels) and zonal winds (m s\({}^{-1}\), right panels) at 60 mbars in four simulations with different \(\tau_{\mathrm{s}}\). These are snapshots at 10000 simulation days. The four simulations are identical, except for different storm timescales: \(10^{6}\) (a), \(10^{5}\) (b), \(10^{4}\) (c), and \(10^{3}\) s (d). The forcing wavenumber \(n_{f}\) is 10 and the drag timescale \(\tau_{\mathrm{drag}}=10^{7}\) s. Other parameters are the same as in Figure 3.
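For reference, a schematic NumPy evaluation of Eq. (8) and of the wave-induced acceleration \(\nabla\cdot\mathbf{\bar{F}}\) (the quantity shaded in Figure 6) is sketched below; the array layout (lev, lat, lon), the variable names, and the use of simple centred differences via np.gradient are our assumptions, not the diagnostics actually used for the figures.

```python
import numpy as np

def ep_flux(u, v, theta, rho, f0, dtheta_dz):
    """Quasi-geostrophic EP flux of Eq. (8); u, v, theta are (lev, lat, lon),
    rho and dtheta_dz are (lev,), f0 is a constant Coriolis parameter."""
    up = u - u.mean(axis=-1, keepdims=True)      # eddy parts (deviations from the zonal mean)
    vp = v - v.mean(axis=-1, keepdims=True)
    thp = theta - theta.mean(axis=-1, keepdims=True)
    Fy = -rho[:, None] * (up * vp).mean(axis=-1)                          # (lev, lat)
    Fz = rho[:, None] * f0 * (vp * thp).mean(axis=-1) / dtheta_dz[:, None]
    return Fy, Fz

def ep_divergence(Fy, Fz, y, z):
    """div(F) = dFy/dy + dFz/dz, with y (lat) and z (lev) coordinates in metres."""
    return np.gradient(Fy, y, axis=1) + np.gradient(Fz, z, axis=0)
```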
For linearized planetary Rossby waves, the group velocities \(\mathbf{c}_{g}\) are related to the EP fluxes via (Vallis, 2006): \[\bar{\mathbf{F}}=\mathbf{c}_{g}\bar{A} \tag{9}\] The quantity \(\bar{A}=\rho\overline{q^{\prime 2}}/2\bar{q}_{y}\) is the mean wave activity density. That is, the group velocities of planetary Rossby waves are parallel to the EP fluxes, and the meridional PV fluxes correspond to the meridional Rossby wave group velocities. Rossby waves break more easily in regions with weak PV gradients. When the Rossby waves break, the EP fluxes transfer momentum to the mean flow (Hoskins and James, 2014). Rossby wave breaking causes PV mixing by the eddies and weakens the PV gradients further (Dritschel and McIntyre, 2008; Marcus and Shetty, 2010). As a result, this positive feedback continues and the PV field evolves toward a state with a high-low contrast between latitudes, leading to the formation of zones and belts. The sharp edges between the regions of inhomogeneous PV correspond to the maximum speeds of the zonal jets. The white arrows of the EP fluxes in Figure 6 give a general idea of the amplitude and propagation directions of the Rossby waves: the arrows point along the group velocities and their lengths are proportional to the group speeds (Equation 9). The shorter arrows coincide with the centers of the zonal jets, where the PV gradients are weaker, indicating more wave breaking. When waves reach the critical latitudes where their phase speeds equal the mean jet speeds, their meridional group velocities tend to zero, which prevents further propagation (O'Rourke and Vallis, 2016). In contrast, the regions close to the zero-wind contours have larger EP fluxes, indicating strong meridional Rossby wave propagation between the strips of PV. The convergences of the EP fluxes are shown by the colormap in Figure 6, indicating where the waves are absorbed. Wave absorption causes wave-induced acceleration of the same sign as the background flow; red denotes eastward and blue denotes westward acceleration. The PV is thereby forced to organize into staircase patterns, promoting the formation of the zonal jets. We have demonstrated that this mechanism also operates in a full 3D model by analyzing the EP flux convergence and divergence. After explaining the formation of the jets, we analyze the factors that affect the jet distributions. First, the rotation rate affects the widths of the jets. Generally speaking, the widths of the jets are determined by the Rhines scale \(L_{R}=\sqrt{U/\beta}\), where \(U\) is the eddy wind speed and \(\beta\) is the meridional gradient of the Coriolis parameter. When the motion scale reaches the Rhines scale, the eddy fields are affected by the gradient of the Coriolis force, and the \(\beta\)-effect induces the self-organization of the jets (Rhines, 1975). The jet widths decrease with increasing rotation rate. Second, Figures 3d-f show that the maximum off-equatorial jet speeds decrease from about \(200\) m s\({}^{-1}\) for \(\tau_{\text{drag}}=10^{7}\) s to about \(20\) m s\({}^{-1}\) for \(\tau_{\text{drag}}=10^{5}\) s, associated with the increasing bottom drag. The off-equatorial jets are more sensitive to the bottom drag, and the zonal jets only develop at the equator with strong bottom drag.

Figure 5: (a) A sketch of the kinematic mechanism of the equator-to-subtropics temperature reversal caused by the QBO, from Dickinson (1968). E represents the eastward wind and W represents the westward wind.
(b) The colormap shows the mass stream function in the equatorial region for the case with \(\tau_{\rm s}=10^{5}\) s and \(\tau_{\rm drag}=10^{7}\) s (same as Figure 4b). The black solid line shows the zonal-mean temperature and the blue dashed line shows the zonal-mean zonal wind.

In geostrophic balance, the horizontal divergence is larger at low latitudes (e.g. Showman and Kaspi, 2013): \[\nabla\cdot\mathbf{u}=\frac{\beta}{f}v=\frac{v}{R_{J}\tan\phi} \tag{10}\] At low latitudes where \(\tan\phi\) is small, the larger divergence allows larger vertical motions, enhancing the horizontal temperature differences and generating waves more easily, which promotes jet formation at low latitudes. The equatorial jet speeds first increase and then decrease as \(\tau_{\mathrm{drag}}\) decreases, even though they all remain in the eastward phase. The maximum equatorial jet speeds increase from about 50 m s\({}^{-1}\) in the weak-drag case with \(\tau_{\mathrm{drag}}=10^{7}\) s (Figure 3d) to about 180 m s\({}^{-1}\) (Figure 3e) and then decrease to about 100 m s\({}^{-1}\) (Figure 3f). The off-equatorial westward jets near \(\pm 15^{\circ}\) are strongly suppressed, and the equatorward momentum exchange is also weakened, so the equatorial eastward jets strengthen from Figure 3d to e; the drag then increases to such an extent that the eastward equatorial jets are damped and thus weaken again from e to f. In Figure 4, we keep the bottom drag and the perturbation amplitude the same, i.e. \(\tau_{\mathrm{drag}}=10^{7}\) s and \(f_{\mathrm{amp}}=4\times 10^{-6}\) K s\({}^{-1}\), but decrease the storm timescale \(\tau_{\mathrm{s}}\) from \(10^{6}\) s at the top (Figure 4a) to \(10^{3}\) s at the bottom (Figure 4d). The simulation with a long \(\tau_{\mathrm{s}}=10^{6}\) s shows much more turbulence, with eddies at mid-to-high latitudes, while the short-\(\tau_{\mathrm{s}}\) simulations only show robust jets. The peak negative wind speeds (westward jets) are \(\sim-200\) m s\({}^{-1}\) at the equator in all the simulations, but the eastward off-equatorial jets drastically decrease in speed from about 200 m s\({}^{-1}\) to 100 m s\({}^{-1}\) with decreasing \(\tau_{\mathrm{s}}\). We suggest an explanation for the relationship between \(\tau_{\mathrm{s}}\) and the off-equatorial jet structure: the larger \(\tau_{\mathrm{s}}\), the stronger the parameterized convective heating, which results in larger EP fluxes and a larger zonal acceleration maintaining the zonal jets. Detailed investigations are presented in Subsection 3.3.2.

Figure 6: Wave-induced acceleration (colormap) (\(\nabla\cdot\mathbf{F}\)) at the northern hemisphere's mid- and high-latitudes, with the zonal-mean zonal wind (solid lines: positive; dashed lines: negative) and the Eliassen-Palm fluxes (\(\mathbf{F}\)) (white arrows) of the four \(\tau_{\mathrm{s}}\) simulations. All quantities are time-averaged from 10000 to 10020 days. Meridional propagation of Rossby waves generates eastward and westward accelerations by the eddies, leading to the various zonal jets.

To summarize, with decreasing \(\tau_{\rm drag}\) and increasing drag in Figure 3, the jets weaken and become confined to low latitudes, showing that the drag removes momentum from the zonal jets. With decreasing \(\tau_{\rm s}\) in Figure 4 and Figure 6, the amplitude of the wave-induced acceleration shows a declining tendency, consistent with the lower speeds of the zonal jets.
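As a rough order-of-magnitude check of the Rhines-scale argument above, \(L_{R}\) can be estimated from the rotation period and radius quoted in Section 2; the eddy wind speeds used below are assumed representative values, not measured model output.

```python
import numpy as np

omega = 2.0 * np.pi / (9.84 * 3600.0)      # rotation rate [s^-1]
R_J = 7.14e7                               # planetary radius [m]
lat = np.deg2rad(30.0)                     # reference latitude
beta = 2.0 * omega * np.cos(lat) / R_J     # meridional Coriolis gradient [m^-1 s^-1]

for U in (20.0, 50.0, 100.0):              # assumed eddy wind speeds [m s^-1]
    L_R = np.sqrt(U / beta)                # Rhines scale [m]
    print(f"U = {U:5.1f} m/s -> L_R ~ {L_R/1e6:.1f} x 10^3 km "
          f"(~{np.rad2deg(L_R / R_J):.1f} deg of latitude)")
```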
### QQO-like oscillations at the Equatorial region

In our simulations, we find downward migrations of stacked equatorial jets with alternating wind directions, exhibiting an oscillation similar to the QQO, shown in Figure 7 as the equatorial zonal winds over pressure-time planes for 10 Earth years. To keep the bottom perturbations comparable across cases, we fix \(f_{\rm amp}\sqrt{n_{f}}\) at a constant value of \(1.3\times 10^{-5}\) K s\({}^{-1}\) (e.g. \(f_{\rm amp}=4\times 10^{-6}\) K s\({}^{-1}\) and \(n_{f}=10\)). These cases use \(\tau_{\rm s}=10^{5}\) s and \(\tau_{\rm drag}=10^{7}\) s. The different cases exhibit complex structures, with stacked jets of speeds from \(-200\) m s\({}^{-1}\) to \(150\) m s\({}^{-1}\) between \(10^{-3}\) bar and \(0.1\) bar. The eastward phase tends to last longer within one oscillation period at larger pressure, while the westward phase is dominant at low pressure. Cases with different wavenumbers have different oscillation periods. Figure 7b shows the oscillation periods for different forcing wavenumbers \(n_{f}\). The period is defined as the time it takes for the eastward wind at the \(10^{-2}\) bar level to change to westward and then back to eastward. Note that the periods in Figure 7b do not include the irregular periods, which are shown in Figure 10. Most QBO and QQO theories invoke equatorial waves that accelerate the lower flanks of the jets, causing the stacked jets to emerge over time and disappear at high pressure (Lindzen and Holton, 1968; Holton and Lindzen, 1972; Plumb, 1977). Here we apply diagnostics similar to Showman et al. (2019), demonstrating that a similar mechanism drives our simulated QQO-like oscillations. We find that the QQO-like oscillation is driven by the vertical momentum fluxes carried by the waves. The quasi-geostrophic approximation is no longer applicable at low latitudes, so the EP flux equation should be written in its full form (Andrews et al., 1987): \[\mathbf{F} =F_{\phi}+F_{z}\] \[=\rho_{0}R_{J}\cos\phi(\frac{\bar{u_{z}}\overline{v^{\prime} \theta^{\prime}}}{\bar{\theta}_{z}}-\overline{u^{\prime}v^{\prime}})\mathbf{j}\] \[+\rho_{0}R_{J}\cos\phi[(f-\frac{1}{R_{J}\cos\phi}\frac{\partial\bar {u}\cos\phi}{\partial\phi})\frac{\overline{v^{\prime}\theta^{\prime}}}{\bar {\theta}_{z}}-\overline{w^{\prime}u^{\prime}}]\mathbf{k} \tag{11}\] and the divergence of \(\mathbf{F}\) is: \[\nabla\cdot\mathbf{F}=\frac{1}{R_{J}\cos\phi}\frac{\partial(F_{\phi}\cos\phi )}{\partial\phi}+\frac{\partial F_{z}}{\partial z} \tag{12}\] If the EP flux is divergent (\(\nabla\cdot\mathbf{F}>0\)), the mean flow obtains eastward momentum and experiences an eastward acceleration. Figure 8 shows the vertical components of the EP fluxes, normalized to the maximum absolute values at each pressure level, together with contours of the zonal-mean winds. The EP fluxes are separated into eastward modes (positive values) with speeds less than \(200\) m s\({}^{-1}\) and westward modes (negative values) with speeds larger than \(-200\) m s\({}^{-1}\). Wave absorption occurs at the critical levels near the bases of the zonal jets, as shown by the shading of the EP fluxes ceasing to extend when it encounters the dashed lines, indicating the selective absorption effect of the jets. The jets preferentially absorb waves whose phase speeds are close to the jet speeds.
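For reference, the oscillation period defined earlier in this subsection (the time for the equatorial wind at \(10^{-2}\) bar to switch from eastward to westward and back to eastward) can be extracted from a wind time series with a simple sign-change search, sketched below on a synthetic series; this illustrates the definition only and is not the diagnostic code used for Figure 7b.

```python
import numpy as np

def qqo_periods(u_eq, dt_days=1.0):
    """Return oscillation periods [days] from a 1-D equatorial wind series:
    the spacing between successive westward-to-eastward transitions."""
    sign = np.sign(u_eq)
    up_cross = np.where((sign[:-1] < 0) & (sign[1:] > 0))[0] + 1
    return np.diff(up_cross) * dt_days

# toy example: an idealized ~3.5-Earth-year oscillation sampled daily
t = np.arange(0, 3650)
u_demo = 100.0 * np.sin(2.0 * np.pi * t / (3.5 * 365.0))
print(qqo_periods(u_demo) / 365.0)   # ~3.5 (years)
```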
The absorption of eastward-propagating waves causes eastward accelerations, and the absorption of westward-propagating waves causes westward accelerations. In the absence of damping, waves with phase speeds far larger than the background flow could propagate upward to the top of the atmosphere without causing any alteration. We also notice an unexpected feature of the wave absorption in our analysis: some waves are absorbed even though their speeds are faster than the background zonal-mean zonal winds. This phenomenon may be associated with nonlinear effects, because linear wave-flow interaction theories generally assume small wave amplitudes and may not completely capture all the details of our simulations. Now we turn to characterizing the wave properties. We apply a spectral analysis in the wavenumber-frequency domain to the equatorial waves (e.g. Wheeler and Kiladis, 1999) in Figure 9 for the different forcing-wavenumber cases at 10 bar and \(10^{-2}\) bar. We perform two-dimensional Fourier transforms on the temperature anomalies as a function of longitude and time to obtain the wavenumber-frequency characteristics at certain pressure levels. The final spectra are then obtained by dividing the raw coefficients by the background spectra, which are obtained by applying a 1-2-1 filter 40 times to the raw coefficients. The frequency is in cycles per day (CPD), i.e., the angular frequency divided by \(2\pi\). A positive zonal wavenumber means eastward propagation of the waves and a negative zonal wavenumber means westward propagation. We would like to lay out the theory of equatorial free waves for comparison with the simulated wave properties. The analytic solutions help us classify the waves in Figure 9. Considering a linearized shallow-water equation for perturbations on an equatorial \(\beta\)-plane with \(f\approx\beta y\), the analytic dispersion relation of equatorial waves in one layer is written as Equation 13 (Matsuno, 1966): \[\frac{c_{K}}{\beta}\left(\frac{\omega^{2}}{c_{K}^{2}}-k^{2}-\frac{k\beta}{\omega}\right)=2n+1 \tag{13}\] The curves show the dispersion relationships of the different equatorial waves, where \(\omega\) is the wave frequency, \(c_{K}\) is the Kelvin wave phase speed, \(k\) is the zonal wavenumber, and \(n=0,1,2,3...\). When \(n=0\), we obtain the solutions for the mixed Rossby-gravity (MRG) waves (dash-dot lines); when \(n=1,2,3...\), we obtain the solutions for the inertia-gravity waves (solid lines) and the Rossby waves (dash lines); dotted lines show the equatorial Kelvin waves with the wavenumber-frequency relationship \(\omega=kc_{K}\), using equivalent depths \(h_{e}=1000\) m. The equivalent depths are theoretical layer depths used in shallow-water models to specify an intrinsic wave speed in the shallow-water system, such that \(c_{K}=\sqrt{gh_{e}}\). We tune \(h_{e}\) to bring the theoretical curves closer to our calculated spectral signals in Figure 9. Comparing the panels of Figure 9: the Rossby wave modes (dash lines) lie in the low-frequency regions at 10 bars, with phase speeds ranging from \(-5\) m s\({}^{-1}\) to \(-40\) m s\({}^{-1}\), indicating that the Rossby wave modes can be directly generated by the thermal perturbations in our simulations. The Rossby wave modes increase their phase speeds in the \(10^{-2}\) bar panels, showing the loss of energy at the critical levels when the low-frequency Rossby waves propagate upward through the equatorial jets.

Figure 7: (a) Zonal winds at the equator with time, showing the eastward (red) and westward (blue) wind modes between \(\pm 200\) m s\({}^{-1}\). The titles give the different zonal forcing wavenumbers \(n_{f}\). In all simulations, \(f_{\rm amp}\sqrt{n_{f}}=1.3\times 10^{-5}\) K s\({}^{-1}\), the storm timescale is \(\tau_{\rm s}=10^{5}\) s, the drag timescale is \(\tau_{\rm drag}=10^{7}\) s, and other parameters are as described before. The color bar unit is m s\({}^{-1}\). The period unit is Earth years. (b) A scatter plot of the relationship between the zonal forcing wavenumber and the period of the QQO-like oscillation.
The Kelvin wave modes are shown by the dotted lines. In the lower panels of Figure 9, there are only slow Kelvin waves with phase velocities of more than 10 m s\({}^{-1}\); in the upper panels of Figure 9, the slow Kelvin waves disappear. Most Kelvin modes there have phase velocities of \(\sim\) 150 m s\({}^{-1}\), which also illustrates that the slow Kelvin waves are filtered by the background flow. Inertia-gravity waves (solid lines) are present in all cases, indicating nonlinear effects in our experiments: in the lower panels of Figure 9, at 10 bar, the inertia-gravity wave power concentrates around the forcing wavenumber \(n_{f}\), while in the upper panels of Figure 9 the inertia-gravity wave power disperses to wider wavenumber and frequency ranges, indicating that energy is transferred from the forcing wavenumber \(n_{f}\) to other wavenumbers. The inertia-gravity wave speeds are around 200 to 400 m s\({}^{-1}\), greater than the wind speeds of around 150 m s\({}^{-1}\) in Figure 3, indicating that most of the inertia-gravity waves with phase velocities smaller than the background flow have been absorbed below the critical levels, while the faster inertia-gravity waves are transparent to the background flow and can therefore propagate to regions with lower pressure. The MRG waves exhibit the strongest amplitudes in the top panels in all cases, especially the westward-propagating waves with zonal wavenumbers between \(-5\) and \(-11\). Their phase speeds range from \(-40\) to \(-400\) m s\({}^{-1}\). These MRG waves easily penetrate the background flow and reach the top of the atmosphere, and they may resemble the equatorially trapped planetary-scale waves that have been observed (Allison, 1990; Deming et al., 1997). In particular, the power of the MRG waves with \(n_{f}\) = 10 and 20 is larger than the wave power with \(n_{f}\) = 30 and 40. Based on the above wave analysis, we suggest a hypothesis to explain the shortening of the QQO-like oscillation period as a function of the forcing wavenumber \(n_{f}\) in Figure 7. The jet center of the QQO-like oscillation is located at \(10^{-2}\) bar, so the more waves are absorbed at this level, the greater the wave-induced accelerations, and the shorter the periods of the QQO-like oscillation. Essentially, most upward-propagating waves are generated by the cascade of the waves excited by the internal forcing, and their wavenumbers are related to the forcing wavenumber \(n_{f}\): a larger \(n_{f}\) corresponds to smaller wavelengths, exciting more small-scale gravity waves. These inertia-gravity waves may contribute significantly to the QQO momentum budgets, as shown in Figures 9a, b, c, and d. Comparing the panels, the distributions of the inertia-gravity waves in Figure 9d are the widest in range and correspond to the darkest colors, while the inertia-gravity waves in Figure 9a are clearly less widely distributed than those in Figure 9d.
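A minimal sketch of the wavenumber-frequency analysis described above is given below. It assumes the equatorial-mean temperature anomalies are available as a uniformly sampled (time, longitude) array; the sign convention relating positive wavenumbers to eastward propagation depends on the transform convention adopted and is not enforced here.

```python
import numpy as np
from scipy.ndimage import convolve1d

def wk_spectrum(T_anom, dt_days=1.0, n_pass=40):
    """T_anom: 2-D array (n_time, n_lon) of equatorial temperature anomalies.
    Returns zonal wavenumbers, frequencies [CPD] and the signal/background ratio."""
    nt, nlon = T_anom.shape
    raw = np.abs(np.fft.fftshift(np.fft.fft2(T_anom)))**2      # raw power spectrum

    # background: raw power smoothed n_pass times with a 1-2-1 filter
    # along both the frequency and wavenumber axes
    background = raw.copy()
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(n_pass):
        background = convolve1d(background, kernel, axis=0, mode='nearest')
        background = convolve1d(background, kernel, axis=1, mode='nearest')

    freq_cpd = np.fft.fftshift(np.fft.fftfreq(nt, d=dt_days))          # cycles per day
    wavenumber = np.fft.fftshift(np.fft.fftfreq(nlon, d=1.0 / nlon))   # integer zonal wavenumber
    return wavenumber, freq_cpd, raw / background
```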
Figure 8: Phase-speed spectra of the Eliassen-Palm flux in the eastward wind phase (a) and the westward wind phase (b) with \(n_{f}\) = 10 and \(\tau_{\mathrm{s}}\) = \(10^{5}\) s, displaying the absorption of waves at the critical levels. The equatorial zonal-mean zonal wind profiles at the corresponding times are overplotted as thick dashed curves. The spectra are normalized at every level by the maximum absolute value at that level.

Therefore, a larger \(n_{f}\) excites more small-scale inertia-gravity waves, and the absorption of these small-scale waves increases the momentum acquired by the background flow, so the QQO-like oscillation period becomes shorter. In particular, we notice variable periods of the QQO-like oscillations when the simulations continue running. Figures 10a, b show the evolution of the QQO-like oscillations over 90 years with \(n_{f}=10\) and 30. It can be seen that the deep equatorial westward jets at 0.1 bar are gradually intensified in Figure 10a. After the simulations reach about 80 years, the maximum deep equatorial westward jet speeds are \(-200\) m s\({}^{-1}\). At this time, the average periods are extended from \(\sim\) 3.4 years to \(\sim\) 5 years. The same behavior occurs in Figure 10b: when the deep equatorial westward jet speeds are maintained at about \(-40\) m s\({}^{-1}\), the periods of the QQO-like oscillations are shorter than 2 years, whereas when the simulations run between 20 and 40 years and the deep equatorial westward jet speeds increase to \(\sim-100\) m s\({}^{-1}\), the periods of the QQO-like oscillations correspondingly extend to about 3 years. As can be seen in Figures 10c, d, the evolution of the deep equatorial jets and of the off-equatorial jets are directly related. After the simulations run for 20 to 30 years, and again after 80 years, the off-equatorial jets migrate to the equatorial region, and the momentum exchanges strengthen the deep equatorial westward jets, which filter the upward-propagating equatorial waves and lead to longer periods of the QQO-like oscillations. These results indicate that the change in the periods of the QQO-like oscillations in our simulations is related to the interactions between the migrating off-equatorial jets and the equatorial jets. The migrations of the off-equatorial jets have also been mentioned in some idealized GCM studies (e.g. Feldstein, 1998; Robinson, 2000; Chan et al., 2007; Chemke & Kaspi, 2015; Ashkenazy & Tziperman, 2016).

Figure 9: Wavenumber-frequency spectra of temperature along the equatorial region (\(10^{\circ}\mathrm{S}-10^{\circ}\mathrm{N}\)) at 10000 days exhibiting the QQO-like oscillations. The x-axis gives the zonal wavenumber, positive (negative) representing eastward (westward) propagating waves, and the y-axis is the frequency in cycles per day (CPD). The colorscale indicates the wave power density. Curves indicate the analytic solutions of the dispersion relations: equatorial Rossby waves (dash lines), mixed Rossby-gravity (MRG) waves (dash-dot lines), inertia-gravity waves (solid lines) and Kelvin waves (dot lines). The equivalent depths (\(h_{e}\)) of the solutions are 1000 m. Panels a-d show the Wheeler-Kiladis diagrams at \(10^{-2}\) bar for the different cases with forcing wavenumbers from 10 to 40. Panels e-h show the Wheeler-Kiladis diagrams at 10 bars for the different forcing wavenumber cases. Panel i shows the phase speed on a \(\log_{10}\) scale.

Figure 10: a, b: The equatorial zonal wind fields (time-pressure plane) for 90 Earth years with \(n_{f}\) = 10, 30. c, d: The zonal wind fields (time-latitude plane) at 0.1 bar for 90 Earth years with \(n_{f}\) = 10, 30. Red indicates eastward jets, and blue indicates westward jets. Color map units are m s\({}^{-1}\). e, f: The zonal-mean wind fields (blue line) and wave-induced accelerations (orange line) for the simulations run to 70 years with \(n_{f}\) = 10, 30, smoothed by 20-day averaging. In all cases, the storm timescale is \(\tau_{\rm s}=10^{5}\) s, and the drag timescale is \(\tau_{\rm drag}=10^{7}\) s.
These migrations are believed to be caused by the fact that the centers of the eddy-induced wave accelerations and the jet centers do not coincide, similar to Figure 10e, f in our simulations. The physical scenario for the variation of the QQO period in our simulations is now clear: the equatorward bias of the accelerations on the off-equatorial jets causes equatorward migrations, and these jets interact with the deep equatorial westward jets and enhance them; the deep jets then filter the upward-propagating equatorial waves, reducing the EP fluxes in the oscillation layers and prolonging the oscillation periods. However, we are cautious that the latitudinal jet migrations that occur in idealized GCMs have not been observed in real atmospheres. The reasons may include the simplified parameterizations of the forcing and the lack of some crucial physical processes in idealized models, such as the feedback on the forcing patterns by the emergence of strong jets, water vapor, and moist convection.

### Sensitivity tests of the QQO-like oscillations

In view of our limited computing resources, we tested the case \(n_{f}=10\), which has oscillation periods closest to Jupiter's QQO period, with different bottom drag (\(\tau_{\rm drag}\)), time correlation (\(\tau_{\rm s}\)) and initial zonal winds.

#### 3.3.1 bottom drag

The development of prograde (eastward) and retrograde (westward) deep equatorial jets likely has a significant influence on the wave properties and the QQO-like oscillations, causing the lengthening of the periods (as in the \(n_{f}=20,25\) cases of Figure 7a). This could be caused by the filtering effect of the equatorial jets on the wave properties. We therefore increase the bottom drag to suppress the deep equatorial jets. Figure 11 shows the equatorial zonal winds as a cross-section of time and pressure. The stronger the bottom drag (\(\tau_{\rm drag}\) decreasing from \(10^{7}\) s to \(10^{5}\) s), the longer the period of the QQO-like oscillation (from about 3 Earth years to about 20 Earth years); this illustrates that the bottom drag may restrict the equatorial waves and weaken the momentum fluxes. The equatorial jet at 1 bar disappears with strong bottom drag, but the wind speed of the QQO-like oscillation does not change substantially. We then show the temperature anomalies in Figure 12. Comparisons between the panels of Figure 12 show that the equatorial waves are weaker with increasing bottom drag. The wave crests separate in the large-drag simulations, illustrating that the waves lose their high-frequency and large-wavenumber features when the bottom drag is strong. The disappearance of the small-scale waves could be an explanation for the longer-period oscillation in Figure 11. Both cases with \(\tau_{\rm drag}=10^{6}\) and \(10^{5}\) s have weak deep jets, yet their QQO-like oscillation periods are quite different. Although strong bottom drag suppresses the deep equatorial jets, allowing the equatorial waves to propagate much more transparently out of the bottom layers, it also damps the velocity anomalies.
The strong drag in the case with \(\tau_{\rm drag}=10^{5}\) s also damps out some of the waves generated in the deep layers, and the EP fluxes are then too weak to accelerate the stratospheric oscillation. As a result, stronger drag leads to a longer QQO-like oscillation period.

#### 3.3.2 time correlation

We then conduct simulations with different storm timescales \(\tau_{\rm s}\) from \(10^{6}\) s to \(10^{3}\) s. \(\tau_{\rm s}\) sets the injection frequency of the bottom perturbations, characterizing the wave properties along the time axis. Figure 13 shows the equatorial zonal winds as a function of time; a bottom drag timescale \(\tau_{\rm drag}=10^{6}\) s is applied to prevent the formation of strong deep equatorial jets. Figure 13 implies that the deep zonal jets develop when the storm timescale decreases. As eastward jets develop, the QQO-like oscillation signature disappears. We speculate that the varying \(\tau_{\rm s}\) could influence the wave fluxes. Figure 14a shows the accumulated heating rate, providing a view of the time-dependent perturbations. We focus on one air parcel overlying the deep layer, which is heated by the interior convection, and the "accumulated heating rate" is \(S_{t}\), parameterized by Equation 14: \[S_{t}(t+\delta t)=\sqrt{1-\alpha^{2}}S_{t}(t)+\alpha x_{m} \tag{14}\] where \(\alpha\) is the de-correlation factor in Equation 4, which depends on the time step \(\delta t\) and \(\tau_{\rm s}\), and \(x_{m}\) is a random number in the range \([-1,1]\), with negative values representing cooling and positive values representing heating. The result in Figure 14a implies that, without considering radiative cooling, the larger \(\tau_{\rm s}\), the stronger the parameterized convective heating. This result is further illustrated by Figure 14b: the root mean square (RMS) of the temperature anomaly in the equatorial region decreases with declining \(\tau_{\rm s}\), suggesting less equatorial wave activity. The RMS of the temperature anomalies helps to explain the disappearance of the QQO-like oscillations in the short-\(\tau_{\rm s}\) cases in Figure 13. The wave power and the temperature anomalies are positively correlated, so we can expect the wave power to be weaker when \(\tau_{\rm s}\) is shorter. These weaker waves are further attenuated by absorption in the jets and by radiative damping; by the time they reach the top, there is not enough wave power left to generate a new stacked jet. As a result, the jet structures remain steady over time when \(\tau_{\rm s}\) is small.

#### 3.3.3 initial zonal winds

We initialized our \(n_{f}=10\) case with the zonal eastward wind from the observations in Figure 15a. The initial wind profile is a Gaussian equatorial jet, with a maximum speed of 81 m s\({}^{-1}\) and a full width at half maximum of \(\pm 8^{\circ}\). Figures 15b and c show the QQO-like oscillation behavior with an initial eastward zonal wind and with zero initial wind, respectively. The results show that the wind shears descend more slowly with initial eastward winds.

Figure 11: Time-varying equatorial zonal winds with different drag timescales. Red denotes eastward zonal winds, and blue denotes westward zonal winds. In all simulations, the forcing amplitude is \(f_{\rm amp}=4\times 10^{-6}\) K s\({}^{-1}\), the zonal forcing wavenumber is \(n_{f}=10\), and the storm timescale is \(\tau_{\rm s}=10^{5}\) s. The color bar unit is m s\({}^{-1}\).
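To illustrate why a longer storm timescale yields a larger accumulated heating, the Markov update of Equation (14) can be iterated numerically as sketched below. The specific form of the de-correlation factor \(\alpha\) is an assumption of this sketch (we take \(\alpha=\sqrt{1-e^{-2\delta t/\tau_{\rm s}}}\), which gives an autocorrelation time of \(\tau_{\rm s}\)), since Equation 4 is not reproduced in this section.

```python
import numpy as np

def accumulated_heating(tau_s, dt=100.0, n_steps=100_000, seed=0):
    """Iterate Eq. (14) for n_steps of length dt [s]; alpha is an assumed form."""
    rng = np.random.default_rng(seed)
    alpha = np.sqrt(1.0 - np.exp(-2.0 * dt / tau_s))
    s, series = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        x_m = rng.uniform(-1.0, 1.0)                     # random heating/cooling in [-1, 1]
        s = np.sqrt(1.0 - alpha**2) * s + alpha * x_m    # Eq. (14)
        series[i] = s
    return series

for tau_s in (1e6, 1e5, 1e4, 1e3):
    s = accumulated_heating(tau_s)
    # the instantaneous variance is roughly independent of tau_s, but the
    # time-integrated heating grows with the correlation time tau_s
    print(f"tau_s = {tau_s:8.0f} s: std(S) = {s.std():.3f}, "
          f"std of running integral = {np.std(np.cumsum(s) * 100.0):.3g}")
```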
Figure 12: Time-longitude cross-sections of the equatorial temperature anomalies in the equilibrated states. Top panels: 0.001 bar; bottom panels: 0.01 bar. The parameters are the same as those in Figure 11.

Figure 13: Time-varying equatorial zonal winds with different storm timescales. Red denotes eastward winds, and blue denotes westward winds. In all simulations, the forcing amplitude is \(f_{\rm amp}=4\times 10^{-6}\) K s\({}^{-1}\), the zonal forcing wavenumber is \(n_{f}=10\), and the drag timescale is \(\tau_{\rm drag}=10^{6}\) s. The color bar unit is m s\({}^{-1}\).

Apart from a \(\sim 2\) year lag in Figure 15b compared to Figure 15c, the QQO-like oscillation periods are the same in Figures 15b and c, about 3.5 Earth years. These results suggest that the final states are independent of the initial conditions. The initial wind structures do not participate in the wave-flow interaction, apart from introducing some period lags. However, the formation mechanism of Jupiter's deep eastward equatorial jets is still an open question and is beyond the scope of our current study.

Figure 14: Influences of \(\tau_{\rm s}\) on the perturbations and waves. (a) Time-varying local perturbations with different storm timescales.

Figure 15: (a) Vertical structures of the initial zonal wind profiles, simplified from the Hubble Space Telescope observations (Simon-Miller et al., 2015). Note that the tropical zonal wind speeds decrease with height. (b) The zonal winds at the equator, starting from the initial zonal wind of (a). (c) The zonal winds at the equator, starting from zero wind.

## 4 Conclusions and Discussion

We have presented idealized 3D general circulation models of Jupiter to investigate the formation of zonal jets and the evolution of QQO-like oscillations. We have also performed sensitivity tests to help understand the dynamical origins of the jets and the simulated QQO-like oscillations. Our primary results are as follows:

\(\bullet\) Our simulations show that zonal jets emerge naturally under Jupiter's conditions with isotropic internal forcing. The jets are self-organized, with speeds in a range of \(\sim\pm 100\) m s\({}^{-1}\). The mechanism of jet formation is related to the meridional propagation of Rossby waves and the inhomogeneous evolution of the potential vorticity. The wind speeds of the jets, as well as the number of jets, decrease with increasing bottom drag strength. The off-equatorial jets are more sensitive to drag and radiative damping than the equatorial jets.

\(\bullet\) In the present work, the 3D simulations generate QQO-like oscillations in the equatorial stratosphere, which are driven by waves propagating upward from the troposphere into the stably stratified stratosphere and interacting with the mean flow there. The stacked eastward and westward jets migrate downward, and the periods of the oscillation shorten with increasing forcing wavenumber. We show that the dominant forcing wave modes with a large-scale forcing wavenumber of \(n_{f}=10\) produce the QQO-like oscillation behavior closest to the observed QQO periodicities. In addition, the mixed Rossby-gravity waves with a wavenumber of 10 are likely to influence the upper layers, exhibiting features similar to observations of cloud plumes.

\(\bullet\) We show that the deep equatorial jets have a strong influence on the QQO-like oscillations. The migrations of the off-equatorial jets increase the strength of the deep equatorial jets, resulting in the prolonging of the QQO-like oscillations shown in our models. Our simulations are independent of the initial conditions.
Our simulations are also sensitive to the bottom drag. Detailed diagnostics show that the bottom drag inhibits the formation of the deep jets, and it also damps the equatorial waves in the deep layers where they are generated, resulting in longer QQO-like oscillations. The storm timescale \(\tau_{\rm s}\) is associated with the wave power: the shorter \(\tau_{\rm s}\), the smaller the wave power, resulting in the weakening and disappearance of the off-equatorial jets. Our results support the idea that upward-propagating equatorial waves trigger the QQO-like oscillation, as suggested by classical theories (Lindzen, 1970, 1971, 1972). Our model shows that the simulated periods close to Jupiter's QQO are most likely to be generated by large-scale wave forcing with \(n_{f}=10\), qualitatively in agreement with some previous simulations. Allison (1990) shows that the Rossby modes exhibit the largest amplification rates around zonal wavenumber 10. Li (1993); Li & Read (2000); Li et al. (2006) also exhibit scale selection of the dominant Rossby wave modes with wavenumber \(1\pm 15\). The large-scale waves are also consistent with the long-term infrared hot spot observations (e.g. Hunt & Conrath, 1981; Ortiz et al., 1998; Choi et al., 2013). We can compare our results qualitatively to previous work and to the observations. Friedson (1999) estimated a descent rate of the QQO wind shear of 0.16 cm s\({}^{-1}\). Cosentino et al. (2017) determined a descent rate of \(\sim 0.04\) cm s\({}^{-1}\) in their model and \(\sim 0.05\) cm s\({}^{-1}\) from TEXES data. Our best-match model shows descent rates of 0.039 cm s\({}^{-1}\) to 0.063 cm s\({}^{-1}\). The descent rate is related to the EP flux carried by the waves. Friedson (1999) overestimated the EP fluxes at \(\sim 7\times\) 10\({}^{-4}\) m\({}^{2}\) s\({}^{-2}\), while they span a range from \(\sim 1\times\) 10\({}^{-4}\) to \(\sim 4\times\) 10\({}^{-4}\) m\({}^{2}\) s\({}^{-2}\) in our simulations. Our models show Rossby wave modes with phase speeds of \(-10\sim-50\) m s\({}^{-1}\) and MRG wave modes with phase speeds of \(-40\sim-400\) m s\({}^{-1}\). Although the large-scale waves contribute most to generating the QQO, small-scale gravity waves that cannot be resolved in global models could contribute to shaping the QQO's structure and period (Cosentino et al., 2017), and they are in general essential for forcing QBO-like oscillations (Baldwin et al., 2001). Planetary waves occur over a broader range of latitudes off the equator (Orton et al., 1994; Deming et al., 1997), but the inertia-gravity waves may contribute a significant part of the QQO momentum budgets (Cosentino et al., 2016, 2017), because they are not equatorially confined. Large-scale equatorial waves can be trapped in narrow equatorial waveguides, while small-scale gravity waves do not suffer this restraint (Friedson, 1999). Kelvin and MRG waves are equatorially confined within \(\pm 5^{\circ}\) in our simulations; however, the equatorial jets extend to about \(\pm 15^{\circ}\). If much higher horizontal resolutions were applied, the simulations might resolve more small-scale gravity waves, strengthening the QQO signals as they did in QBO models (e.g. Hayashi & Golder, 1994; Takahashi, 1996). Increasing the vertical resolution also leads to larger amplitudes of the MRG waves, the Kelvin waves and the EP fluxes (Richter et al., 2014).
The behaviors of sub-grid gravity waves that cannot be explicitly captured in GCMs are complex, and the wave-breaking process is affected by many aspects, including the background wind shears (Hines, 1991), the diffusive timescales (Lindzen, 1981), the onsets of convective instability (Smith et al., 1987) and the mixing process of air parcels that do not return the original position (Medvedev & Klassen, 1995). It is difficult to characterize these sub-grid behaviors quantitatively in our simulations, which could influence the gravity waves and the QQO-like oscillations, a task we leave to the future. The common outcome of westward equatorial jets in the troposphere seems to be a general property of the stochastic, isotropic forcing, which occurs in many previous shallow-water models (Scott & Polvani, 2007; Showman, 2007; Zhang & Showman, 2014) and primitive equation models (Showman et al., 2019; Tan, 2022), including models in this study. This is in stark contrast to the strong and stable equatorial superrotation observed in Jupiter's and Saturn's tropospheres. This suggests that some fundamental feedback mechanisms may be missing in the stochastic forcing framework. On the other hand, major mechanisms controlling the equatorial superrotation and general circulation in the tropospheres of Jupiter and Saturn are unsettled and still under active investigation (see a recent review of Showman et al., 2018). Generating a self-consistent equatorial superrotation like the observed ones on Jupiter and Saturn is an important issue in the field of planetary atmosphere and is out of our current scope. We expect that if the tropospheric equatorial superrotation is self-consistently maintained in our models, it will have a quantitative influence on the simulated QQO because of the wave-filtering effects of the jet. Nevertheless, given that our current study has focused on a mechanism study of QQO, the wave-mean-flow interaction mechanism discussed here should still hold. Jupiter's internal heat flux and the interactions with the overlaying stratified layers may not be strictly isotropic and may exhibit certain latitudinal variations. In addition, the QQO is likely to be disrupted by planetary-scale disturbances in the equatorial and low-latitude troposphere (Antunano et al., 2021). These warrant future numerical explorations of how latitudinal-dependent internal forcing would influence the general circulation and the QQO of Jupiter. This work has focused on the isotropic internal forcing for a few reasons. First, the internal heat flux of Jupiter is expected to be horizontally homogeneous on the zeroth order (Ingersoll & Porco, 1978; Ingersoll et al., 2004; Fortney & Nettelmann, 2009). Second, the stochastic and isotropic nature of the forcing allows us to understand the formation and time evolution of zonal jets in a clean setup by excluding latitudinal forcing variations. This ties our work to the rich literature investigating turbulence and jet formation in the context of giant planets' atmospheres using isotropic turbulent forcing (see reviews of Vasavada & Showman, 2005; Galperin et al., 2008; Showman et al., 2018 and many references therein). Third, there are several candidate mechanisms responsible for latitudinal-dependent internal forcing, each could imply different latitudinal forms of the forcing. 
For instance, rapidly-rotating-shell convection models (e.g., Showman & Kaspi, 2013) suggest that convection tends to be organized into columns parallel to the rotation axis and convective velocities are expected to be greater at high latitudes; even if convective heat flux is isotropic, the perturbations caused in the stratified layers may be stronger at low latitudes where the horizontal divergence of the large-scale motions is on a leading order (Schneider & Liu, 2009); if moist convection is the leading mechanism of internal heat transport in the troposphere of Jupiter (Gierasch et al., 2000; Ingersoll et al., 2000), storms associated with moist convection and latent heating could be enhanced at low latitudes and be self-organized and coupled with the large-scale flows (Lian & Showman, 2010). That said, given the multiple origins of the latitudinal internal forcing variations, a variety of latitudinal forcing forms should be investigated in future studies to understand the sensitivity of circulation and QQO. Therefore, our study with an isotropic forcing serves as a baseline study for comparisons with future Jovian atmospheric models that incorporate (the uncertain) latitudinal forcing variation. ## Acknowledgements This work is supported by the National Natural Science Foundation of China, under Grants No. 41888101. Numerical simulations were conducted at the High-performance Computing Platform of Peking University.
In Jupiter's equatorial stratosphere, downward-migrating jets with alternating westward and eastward directions are observed, with a period of about 4 to 6 years. This phenomenon is commonly called the quasi-quadrennial oscillation (QQO). Here, we simulate the QQO by injecting isotropic, small-scale thermal perturbations into a three-dimensional general circulation model. We find that the internal thermal perturbations excite waves that generate the equatorial QQO together with multiple jet streams at mid and high latitudes in both hemispheres. The dominant wave modes generating the QQO-like oscillation have a wavenumber of about 10. The inhomogeneous evolution of potential vorticity favors the emergence of off-equatorial zonal jets. The off-equatorial jets migrate toward the equator, strengthen the deep equatorial jets, and prolong the QQO-like oscillations.
2309.03598
Enhancing Sample Utilization through Sample Adaptive Augmentation in Semi-Supervised Learning
In semi-supervised learning, unlabeled samples can be utilized through augmentation and consistency regularization. However, we observed certain samples, even undergoing strong augmentation, are still correctly classified with high confidence, resulting in a loss close to zero. It indicates that these samples have been already learned well and do not provide any additional optimization benefits to the model. We refer to these samples as ``naive samples". Unfortunately, existing SSL models overlook the characteristics of naive samples, and they just apply the same learning strategy to all samples. To further optimize the SSL model, we emphasize the importance of giving attention to naive samples and augmenting them in a more diverse manner. Sample adaptive augmentation (SAA) is proposed for this stated purpose and consists of two modules: 1) sample selection module; 2) sample augmentation module. Specifically, the sample selection module picks out {naive samples} based on historical training information at each epoch, then the naive samples will be augmented in a more diverse manner in the sample augmentation module. Thanks to the extreme ease of implementation of the above modules, SAA is advantageous for being simple and lightweight. We add SAA on top of FixMatch and FlexMatch respectively, and experiments demonstrate SAA can significantly improve the models. For example, SAA helped improve the accuracy of FixMatch from 92.50% to 94.76% and that of FlexMatch from 95.01% to 95.31% on CIFAR-10 with 40 labels.
Guan Gui, Zhen Zhao, Lei Qi, Luping Zhou, Lei Wang, Yinghuan Shi
2023-09-07T09:50:45
http://arxiv.org/abs/2309.03598v1
# Enhancing Sample Utilization through Sample Adaptive Augmentation in Semi-Supervised Learning

###### Abstract

In semi-supervised learning, unlabeled samples can be utilized through augmentation and consistency regularization. However, we observed that certain samples, even undergoing strong augmentation, are still correctly classified with high confidence, resulting in a loss close to zero. It indicates that these samples have already been learned well and do not provide any additional optimization benefits to the model. We refer to these samples as "naive samples". Unfortunately, existing SSL models overlook the characteristics of naive samples, and they just apply the same learning strategy to all samples. To further optimize the SSL model, we emphasize the importance of giving attention to naive samples and augmenting them in a more diverse manner. Sample adaptive augmentation (SAA) is proposed for this stated purpose and consists of two modules: 1) a sample selection module; 2) a sample augmentation module. Specifically, the sample selection module picks out naive samples based on historical training information at each epoch, and the naive samples are then augmented in a more diverse manner by the sample augmentation module. Thanks to the extreme ease of implementation of the above modules, SAA is advantageous for being simple and lightweight. We add SAA on top of FixMatch and FlexMatch respectively, and experiments demonstrate that SAA can significantly improve the models. For example, SAA helped improve the accuracy of FixMatch from 92.50% to 94.76% and that of FlexMatch from 95.01% to 95.31% on CIFAR-10 with 40 labels. The code is available at [https://github.com/GuanGui-nju/SAA](https://github.com/GuanGui-nju/SAA).

## 1 Introduction

For the sake of reducing the cost of manual labeling, semi-supervised learning (SSL), which focuses on how to learn from unlabeled data, is a longstanding yet significant research topic in vision applications. Recently, data augmentation techniques and consistency regularization have been proven to be effective ways of utilizing unlabeled data. For example, FixMatch [39] encourages consistency in predictions between the weakly and strongly augmented versions, and it achieves an accuracy of 92.50% on the CIFAR-10 task with only 40 labels. However, not all unlabeled samples are effectively utilized even with strong augmentation. In Figure 1(a), if the strongly augmented versions are correctly classified with high confidence, leading to a loss close to zero, it indicates that the sample has already been learned well and cannot further improve the model's performance. In other words, the sample is not effectively utilized to benefit model training, and we call this sample a "_naive sample_". When the training process contains a large number of _naive samples_, it can cause slow or even stagnant model performance improvements, as shown in Figure 1(b). Unfortunately, existing SSL models [34] overlook the critical point of whether all samples are effectively utilized. Typically, these models apply the same fixed strong augmentation strategy to all samples, resulting in some strongly augmented versions that do not benefit model training.

Figure 1: (a) shows an example of a _naive sample_. Its augmented versions are correctly classified with high confidence, resulting in a loss close to 0. (b) shows the model performance during FixMatch training. Performance improvements are slow or even stagnant for a period of time.
We emphasize that the key to alleviating this problem lies in how to further explore the value of the _naive samples_ through new learning strategies. A natural idea that reminds us is to develop sample adaptive augmentation (SAA) to identify _naive samples_ and increase their diversity after augmentation. Our proposed SAA is simple yet effective, which consists of two modules: 1) sample selection module and 2) sample augmentation module. The former is responsible for picking out _naive samples_ in each epoch, and the latter applies a more diverse augmentation strategy for _naive samples_. Specifically, in the sample selection module, we first update the historical loss of the samples with exponential moving average (EMA) in each epoch, then these samples will be divided into two parts. The part of the samples with a smaller historical loss is considered to be the _naive sample_. Since historical loss captures the impact of the sample on model training, this approach allows us to identify samples that are not effectively utilized and would benefit from more diverse augmentation. While in the sample augmentation module, the more diverse augmented version of _naive sample_ will be obtained by regrouping multiple strong augmented images, and the remaining samples are applied with the original strong augmentation. Our proposed SAA is simple to implement, requiring only a few lines of code to add our proposed modules to the FixMatch or FlexMatch in PyTorch. It is also lightweight in terms of memory and computation, _i.e_., SAA only needs to add two additional vectors and update them in each epoch, making it an efficient solution for improving SSL models. We extended FixMatch and FlexMatch with SAA and conducted experiments on SSL benchmarks. The results of the experiments demonstrate that SAA can significantly improve performance. In summary, our contribution can be summarized as follows: * **We identify ineffectively utilized samples and emphasize that they should be given more attention.** Under the consistency regularization based on data augmentation, some strongly augmented versions are not beneficial to model training, which results in the values of these samples not being fully exploited and makes the model performance slow to improve. We refer to them as _"naive sample"_, and emphasize that they should be learned with a new learning strategy. * **We propose SAA to make better use of _naive sample_.** To increase the probability that the augmented versions can benefit the model training, a simple yet effective method, sample adaptive augmentation (SAA), is proposed for identifying the _naive samples_ and augmenting them in a more diverse manner. * **We verify the validity of SAA on SSL benchmarks.** Using FixMatch and FlexMatch as the base framework, we proved that our approach can achieve state-of-the-art performance. For example, on CIFAR-10 with 40 labels, SAA helps FixMatch improve its accuracy from 92.50% to 94.76%, and helps FlexMatch improve its accuracy from 95.01% to 95.31%. ## 2 Related Work ### Semi-Supervised Learning Consistency regularization (CR) [2] is the main way to exploit unlabeled data in semi-supervised learning (SSL). The conventional implementation is to perturb the samples and then encourage the model to maintain a consistent prediction. The manner of perturbation has been studied in a variety of ways, _e.g_., stochastic augmentation and drop out [25, 35], feature perturbations [24], adversarial perturbations [30], model perturbations [42]. 
[4, 44, 3] apply mixup to blend the images, which is also a perturbation of the image. With strong augmentation techniques [7, 10], FixMatch applies a consistency regularization between the weakly augmented and strongly augmented versions, allowing the model to learn a greater diversity of images over long iterations. This approach has greatly simplified the framework and has led to milestone breakthroughs in semi-supervised learning. As we have previously analyzed, the framework applies the same fixed augmentation strategy to all images, which results in the _naive samples_ not being fully utilized. Due to the superiority of the FixMatch framework, a large number of SSL works [52, 12, 48, 55, 16, 13, 54] are now based on it for further optimization, but none of these works considers the effectiveness of the utilization of _naive samples_. [12, 54] focus on improving the quality of the pseudo-labels by learning the distribution of unlabeled data. [55, 16] focus on learning the similarity relationships between samples or super-classes. [13, 48, 52] all emphasize the utilization of samples with low confidence. These works also allow the model to learn more samples within a certain number of iterations to some extent, but they still ignore the issue of the validity of the augmentation, resulting in augmented samples that may still be unhelpful to the training of the model. To the best of our knowledge, none of the SSL works has considered the utilization of _naive samples_. [52] treats each category differently and adjusts the threshold for each category, but we consider treating each sample differently, with no relationship to the category. ### Hard Example Mining Our work is somewhat related to hard example mining, with the difference that we focus on _naive samples_ that do not benefit model training, while they focus on hard samples that damage model training. A common approach to selecting hard samples is to rely on loss information between the sample and the ground truth [37, 46, 29, 38]. This is related to our approach, but the unlabeled sample is selected by its consistency loss due to the lack of ground truth. In addition to this, distance metrics [49, 22] and false positive samples [20, 9, 11, 14] are common methods of hard sample selection. However, most of these methods for mining hard samples rely on labels, which is not practical under SSL. Both our proposed method and this field involve the selection of samples, the difference being that they focus on the selection of hard samples that are difficult to train, while we focus on _naive samples_ that contribute no information to model training. ### Data Augmentation Data augmentation is an effective way of expanding the data space [36], which we roughly classify into the following categories: 1) single perturbation [31, 21, 10]; 2) image blending based [43, 53, 50, 19]; 3) learning based [40, 28, 15]; and 4) search based [6, 27, 7]. Common operations for perturbing a single image include geometric transformations, color transformations, noise injection [31], random erasing [56], kernel filters [21], and cutout [10]. [43, 53] directly mix the contents of two images, [50] mixes image patches, and [8, 19] mix content and style. For learning-based strategies, adversarial training [40, 47] and GAN-based methods [28, 15] train a network to obtain augmented images. [6, 27] find the best combination in the perturbation space using a search strategy.
However, single-perturbation and image-blending-based methods are limited in how much they can enhance the diversity of images. As for learning- and search-based methods, although they yield augmented images that facilitate model training, their time consumption is huge and therefore they are not suitable for training time-consuming CR-based SSL models. [7] combines random transformations to remove the search process, and is favored by SSL models. [17] cuts the image and augments the patches, which enhances image diversity and has been validated to be effective on several tasks. Our augmentation is related to [17], but we augment the whole image, not the patches. We will further discuss this in the experimental section. ## 3 Preliminary and Background ### Problem Setting In semi-supervised learning, we denote the labeled set \(\mathcal{X}=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{M},y_{M})\}\), where \(y_{i}\) is the label of the \(i\)-th labeled sample \(x_{i}\). We also denote the unlabeled set \(\mathcal{U}=\{u_{1},u_{2},\ldots,u_{N}\}\), where \(u_{i}\) denotes the \(i\)-th unlabeled sample, and typically \(|\mathcal{X}|\ll|\mathcal{U}|\). In the implementation, the samples are provided on a per-batch basis in each iteration, with a batch of labeled data \(\mathcal{X}\) and batches of unlabeled data \(\mathcal{U}\). How to use this unlabeled data for learning is the focus of SSL research. Generally, in consistency regularization (CR) based SSL models, unlabeled data generate different versions by perturbation, and then the model is encouraged to be consistent in its representations or predictions of these versions. ### Preliminary for CR-based SSL Models Strong augmentation is a good means of applying consistency regularization, and FixMatch [39] is representative of this idea. Many recent semi-supervised works [52, 48, 26, 55, 16] have also used FixMatch as a basis for further optimization, and to clearly introduce our approach, we also use FixMatch as the base framework. We first review FixMatch, whose fundamental idea is to produce pseudo-labels on weakly augmented versions and use them as training targets for the corresponding strongly augmented versions. Here, the weak augmentation \(\alpha(\cdot)\) includes standard flip and shift operations, while the strong augmentation strategy \(\mathcal{A}(\cdot)\) consists of RandAugment [7] and CutOut [10]. Let \(p_{i}^{w}\) and \(p_{i}^{s}\) represent the model's predictions on \(\alpha(u_{i})\) and \(\mathcal{A}(u_{i})\), respectively. Then this consistency-regularization-based unsupervised loss for unlabeled samples is, \[\mathcal{L}_{unsup}=\frac{1}{|\mathcal{U}|}\sum_{i=1}^{|\mathcal{U}|}\mathbb{1 }\left(\max(p_{i}^{w})\geq\tau_{c}\right)H(p_{i}^{w},p_{i}^{s}). \tag{1}\] where \(H(p_{1},p_{2})\) denotes the standard cross entropy between \(p_{1}\) and \(p_{2}\), and \(\tau_{c}\) is a pre-defined threshold to retain only high-confidence pseudo-labels. As discussed in FixMatch [39], \(\tau_{c}\) is commonly set to a large value to alleviate the confirmation bias [1] in SSL. Let \(q_{i}\) denote the model's prediction on \(\alpha(x_{i})\); then the supervised loss for labeled samples is, \[\mathcal{L}_{sup}=\frac{1}{|\mathcal{X}|}\sum_{i=1}^{|\mathcal{X}|}H(q_{i},y_ {i}). \tag{2}\] Finally, the total loss can be expressed as, \[\mathcal{L}=\mathcal{L}_{sup}+\lambda\mathcal{L}_{unsup}. \tag{3}\] where \(\lambda\) is the weight of \(\mathcal{L}_{unsup}\).
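For clarity, a minimal PyTorch-style sketch of the losses in Equations (1)-(3) is given below; `model`, `weak_aug` and `strong_aug` are placeholders of our own rather than functions from any particular released codebase.

```python
import torch
import torch.nn.functional as F

def fixmatch_losses(model, x_lb, y_lb, u_ulb, weak_aug, strong_aug,
                    tau_c=0.95, lambda_u=1.0):
    # supervised loss on weakly augmented labeled data, Eq. (2)
    logits_lb = model(weak_aug(x_lb))
    loss_sup = F.cross_entropy(logits_lb, y_lb)

    # pseudo-labels from the weakly augmented unlabeled data
    with torch.no_grad():
        probs_w = torch.softmax(model(weak_aug(u_ulb)), dim=-1)
        max_prob, pseudo = probs_w.max(dim=-1)
        mask = (max_prob >= tau_c).float()          # indicator in Eq. (1)

    # consistency loss on the strongly augmented versions, Eq. (1)
    logits_s = model(strong_aug(u_ulb))
    loss_unsup = (F.cross_entropy(logits_s, pseudo, reduction='none') * mask).mean()

    return loss_sup + lambda_u * loss_unsup          # total loss, Eq. (3)
```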
### Characteristics and Impact of Naive Samples Taking FixMatch as an example, we tracked the loss of a _naive sample_ and a _non-naive sample_ separately, as shown in Figure 3. Note that we show the original loss without thresholds to exclude the interference of confidence thresholds on the loss values. It can be observed that in most epochs, the _naive sample's_ cross-entropy losses are close to 0, indicating that the learning of \(\mathcal{A}(u_{i})\) does not contribute to the model's training progress. With the same augmentation, the _non-naive sample_ can encourage model optimization in the long term, whereas _naive samples_ cannot. This confirms the necessity of dedicated attention to _naive samples_ and of developing new augmentation strategies that can better exploit their potential value. Moreover, when there are too many _naive samples_ in the training process, they can interfere with model performance improvement, as shown in Figure 1(b). These findings highlight the importance of properly identifying and handling _naive samples_ in SSL tasks. We would highlight that there are several factors that cause the slow performance improvement, such as the number of high-confidence pseudo-labels, _etc._ There are also multiple ways to solve the problem, such as adjusting the threshold [52], finding other learnable signals [55], _etc._ In this work, we concentrate on data augmentation. It should be noted that our approach can be used together with the above ways and is thus beneficial to SSL. ## 4 SAA: Using Sample Adaptive Augmentation to Aid Semi-Supervised Learning SAA aims to address the issue of not effectively utilizing _naive samples_ by giving them more attention and exploring their value in aiding model training. To achieve this goal, SAA designs two modules: the sample selection module and the sample augmentation module. The function of the first module is to identify _naive samples_ in each epoch, while the function of the second module is to apply more diverse augmentation strategies to the _naive samples_ to facilitate their effective learning. ### Sample Selection We introduce two vectors \(\mathcal{H}=\{\mathcal{H}_{1},\mathcal{H}_{2},...,\mathcal{H}_{N}\},\mathcal{ F}=\{\mathcal{F}_{1},\mathcal{F}_{2},...,\mathcal{F}_{N}\}\), where \(N\) is the number of unlabeled samples. \(\mathcal{H}\) records historical consistency loss information for each unlabeled sample, and \(\mathcal{F}\) marks whether the unlabeled sample is a _naive sample_. For unlabeled sample \(u_{i}\), the model calculates the consistency loss \(l_{i}^{t}\) between its weakly augmented version and its strongly augmented version once in the \(t\)-th epoch. Then we update \(\mathcal{H}_{i}^{t}\) with an exponential moving average (EMA), which can be expressed as: \[\mathcal{H}_{i}^{t}=(1-\alpha)\mathcal{H}_{i}^{t-1}+\alpha l_{i}^{t}. \tag{4}\] Note that the parameter \(\alpha\) introduced here is not an additional hyperparameter, as the model parameters are also updated with EMA [39, 52]. Since the historical loss information can reflect the magnitude of the impact of a strongly augmented version on the model, it becomes the basis for our decision on _naive samples_. OTSU [33] is a commonly used method of threshold segmentation because of its computational simplicity, stability, and strong self-adaptation.

Figure 3: Loss of a _non-naive sample_ (bottom) and a _naive sample_ (top).

Figure 2: Overview of our method SAA. The core insight of SAA is to dynamically adjust the augmentation for each sample, thus allowing _naive samples_ to be used more effectively.
Inspired by this, we calculate the historical loss threshold in each epoch: \[\tau_{s}=\texttt{OTSU}(\mathcal{H}_{1},\mathcal{H}_{2},...,\mathcal{H}_{N}). \tag{5}\] OTSU adaptively divides the samples into two parts based on the historical loss. Samples with small historical losses are considered _naive samples_ since they provide less help to the model. Then we update \(\mathcal{F}\) by: \[\mathcal{F}_{i}=\mathbb{1}(\mathcal{H}_{i}\leq\tau_{s}). \tag{6}\] It can be seen that the decision on _naive samples_ is made at every epoch. In other words, whether a sample is a _naive sample_ is related to the model performance. We should note that there may be multiple shifts in \(\mathcal{F}\) during the training process. On the one hand, if a sample is regarded as a _naive sample_, we apply a more diverse augmentation to it to avoid invalid learning. On the other hand, this more diverse augmentation may be too perturbing for the sample and negatively affect the model, so the augmentation strategy for such samples needs to be adjusted back to the original strategy in a timely manner. ### Sample Augmentation We apply different augmentations to the _non-naive samples_ and the _naive samples_. The former are augmented with the original augmentation \(\mathcal{A}\), while the latter are augmented with a new augmentation \(\mathcal{A}^{\prime}\), which increases the difficulty of the augmented version. This can be expressed as: \[\texttt{Augmented}(u_{i})=\left\{\begin{array}{l}\mathcal{A}(u_{i}),\quad\mathcal{F}_{i}=0\\ \mathcal{A}^{\prime}(u_{i}),\quad\mathcal{F}_{i}=1\end{array}\right. \tag{7}\] We implement this more diverse augmentation \(\mathcal{A}^{\prime}\) in a simple way, _i.e._, by regrouping two strongly augmented images \(\mathcal{A}(u_{i})\) into a new image. Formally, a new augmented image \(\mathcal{A}^{\prime}(u_{i})\) can be expressed as: \[\mathcal{A}^{\prime}(u_{i})=\texttt{Concat}(\texttt{Cut}(\mathcal{A}(u_{i})_{1}),\texttt{Cut}(\mathcal{A}(u_{i})_{2})). \tag{8}\] As shown in Figure 4, we have two strongly augmented images \(\mathcal{A}(u_{i})_{1}\) and \(\mathcal{A}(u_{i})_{2}\). To create a new augmented image \(\mathcal{A}^{\prime}(u_{i})\), we randomly choose one of the following two options with equal probability: 1) Top-bottom concat: We take the top half of \(\mathcal{A}(u_{i})_{1}\) and the bottom half of \(\mathcal{A}(u_{i})_{2}\) and concatenate them to create a new image. 2) Left-right concat: We take the left half of \(\mathcal{A}(u_{i})_{1}\) and the right half of \(\mathcal{A}(u_{i})_{2}\) and concatenate them to create a new image. 
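As an illustration, a minimal sketch of the regrouping augmentation \(\mathcal{A}^{\prime}\) of Eq. (8) could look as follows; `strong_aug` stands for the strong augmentation \(\mathcal{A}\) and is assumed to return an image tensor of shape (C, H, W).

```python
import torch

def regroup_augment(u, strong_aug):
    """A'(u): concatenate halves of two independently strong-augmented views (Eq. (8))."""
    a1, a2 = strong_aug(u), strong_aug(u)       # two strongly augmented images, (C, H, W)
    _, H, W = a1.shape
    if torch.rand(1).item() < 0.5:
        # Top-bottom concat: top half of the first view, bottom half of the second.
        return torch.cat([a1[:, : H // 2, :], a2[:, H // 2 :, :]], dim=1)
    # Left-right concat: left half of the first view, right half of the second.
    return torch.cat([a1[:, :, : W // 2], a2[:, :, W // 2 :]], dim=2)
```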
Image regrouping is a technique to enhance image diversity by combining multiple augmented images into a new image. It is a simple and effective solution that has been shown to be effective in previous works such as CutMix [17]. Figure 4: Augmentation for _naive samples_. In comparison to learning-based data augmentation methods, which can also yield augmented images suitable for model learning, image regrouping has lower memory and computational overheads. However, in the case of CutMix, augmentation is done on the cut images, which may result in some loss of information about the original image. In contrast, in our method, augmentation is applied to the whole image, which preserves more information about the original image. This is discussed further in the experimental section of the paper. ## 5 Experiments We used FixMatch and FlexMatch as base frameworks to verify the validity of SAA on SSL benchmark datasets: CIFAR-10, CIFAR-100 [23], SVHN [32] and STL-10 [5]. In section 5.1, we present the specific implementation details. In section 5.2, we first verify that SAA can help the model improve test accuracy and achieve SOTA on SSL tasks. In addition, we compare the performance for the same number of iterations and verify that SAA can accelerate the model's improvement speed. ### Implementation Details We adopt "WideResNet-28-2" [51] for CIFAR-10 and SVHN, "WideResNet-28-8" [51] for CIFAR-100 and "ResNet18" [18] for STL-10. For a fair comparison, we keep the same set of parameters as FixMatch and FlexMatch with \(\{|\mathcal{B}_{\mathcal{X}}|=64,|\mathcal{B}_{\mathcal{U}}|=7|\mathcal{B}_{ \mathcal{X}}|,\lambda=1\}\). The test model is updated by EMA with a decay rate of 0.999. \(\mathcal{H}\) is updated in the same way and with the same parameters (\(\alpha=0.999\)). FixMatch and FlexMatch set the number of training iterations to \(2^{20}\), and we keep this practice as well. In order to update the sample historical loss \(\mathcal{H}\) and marker \(\mathcal{F}\) in a timely manner, we consider every 1024 iterations as one epoch, i.e., a total of 1024 epochs are trained. As the augmentation \(\mathcal{A}^{\prime}\) is not suitable for the initial training of the model, we apply it only after the 100th epoch, while the historical loss \(\mathcal{H}\) is recorded from the beginning. We repeat the same experiment for five runs with different seeds to report the mean test accuracy and variance. ### Main Results **SAA improves the performance of baseline models.** As shown in Table 1, SAA successfully improves the test accuracy of FixMatch and FlexMatch under all settings. For instance, on CIFAR-10 with 40 labels, FixMatch and FlexMatch achieved a mean accuracy of 92.50% and 95.01%, while with SAA their average accuracy improved to 94.76% and 95.31%. For the challenging and realistic task STL-10, SAA helped FixMatch to improve its accuracy by 2.65% and FlexMatch by 1.70%. FixMatch and FlexMatch even outperform the fully-supervised baseline in some settings, e.g., FixMatch achieved a mean test accuracy of 95.81% on CIFAR-10 with 4000 labels and 97.98% on SVHN with 1000 labels. We notice that the variance of the model becomes slightly larger after applying SAA, as the boosting effect of SAA on the model differs under different seeds. 
As shown in Figure 5(a), SAA boosts FixMatch under all 5 seeds, but the magnitude of the gain varies, e.g., 2.51% when the seed is 1 and 1.20% when the seed is 2. **We achieve the SOTA performance.** With FixMatch as the base, SAA can help to bring its performance up to near or even beyond that of other SSL models. For example, on CIFAR-10 with 40 labels, FixMatch (w/SAA) achieves an accuracy of 94.76%, which is within 0.3% of NP-Match. On CIFAR-10 with 250 labels, FixMatch (w/SAA) achieves an accuracy of 95.21%, which outperforms current SSL models. \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-100} & \multicolumn{3}{c}{SVHN} & STL-10 \\ & 40 labels & 250 labels & 4000 labels & 400 labels & 2500 labels & 10000 labels & 40 labels & 250 labels & 1000 labels & 1000 labels \\ \hline Mean-Teacher & 29.01\(\pm\)1.60 & 62.54\(\pm\)3.30 & 91.00\(\pm\)0.21 & 18.89\(\pm\)1.44 & 54.83\(\pm\)1.06 & 68.25\(\pm\)0.23 & 63.91\(\pm\)3.96 & 96.55\(\pm\)0.03 & 96.73\(\pm\)0.05 & - \\ MixMatch & 63.81\(\pm\)6.48 & 86.37\(\pm\)0.59 & 93.34\(\pm\)0.26 & 32.41\(\pm\)0.66 & 60.42\(\pm\)0.48 & 72.22\(\pm\)0.29 & 69.40\(\pm\)0.83 & 95.44\(\pm\)0.32 & 96.31\(\pm\)0.37 & 38.02\(\pm\)8.29 \\ ReMixMatch & 90.12\(\pm\)1.03 & 93.70\(\pm\)0.05 & 95.16\(\pm\)0.01 & 57.25\(\pm\)1.05 & 73.97\(\pm\)0.35 & **79.98\(\pm\)0.27** & 75.96\(\pm\)9.13 & 93.64\(\pm\)0.22 & 94.84\(\pm\)0.31 & 75.51\(\pm\)1.25 \\ Dash & 91.84\(\pm\)4.31 & 95.22\(\pm\)0.12 & 95.76\(\pm\)0.06 & 55.17\(\pm\)1.36 & 72.15\(\pm\)0.19 & 77.23\(\pm\)0.21 & 96.97\(\pm\)1.59 & 97.83\(\pm\)0.10 & 97.97\(\pm\)0.06 & 83.17\(\pm\)0.80 \\ CoMatch & 93.09\(\pm\)1.39 & 95.09\(\pm\)0.33 & 95.77\(\pm\)0.04 & 58.61\(\pm\)1.41 & 72.37\(\pm\)0.44 & 77.68\(\pm\)0.24 & - & - & - & - \\ STA & 94.83\(\pm\)0.32 & 95.11\(\pm\)0.27 & 95.79\(\pm\)0.15 & 58.66\(\pm\)1.41 & 72.37\(\pm\)0.44 & 77.68\(\pm\)0.24 & 94.37\(\pm\)2.91 & 95.08\(\pm\)1.08 & 95.84\(\pm\)0.24 & - \\ NP-Match & 95.09\(\pm\)0.04 & 95.04\(\pm\)0.06 & 95.89\(\pm\)0.02 & 61.09\(\pm\)0.09 & 73.79\(\pm\)0.26 & 78.78\(\pm\)0.13 & - & - & - & - \\ SimMatch & 94.40\(\pm\)1.37 & 95.16\(\pm\)0.39 & 96.04\(\pm\)0.01 & 62.19\(\pm\)2.21 & 74.93\(\pm\)0.32 & 79.42\(\pm\)0.11 & - & - & - & 89.70\(\pm\)0.82 \\ FixMatch\({}^{\dagger}\) & 92.95\(\pm\)0.67 & 95.10\(\pm\)0.04 & 95.81\(\pm\)0.05 & 53.17\(\pm\)0.51 & 72.64\(\pm\)0.17 & 77.60\(\pm\)0.09 & 96.24\(\pm\)0.98 & 97.54\(\pm\)0.04 & 97.98\(\pm\)0.02 & 85.27\(\pm\)1.15 \\ **FixMatch (w/SAA)** & 94.76\(\pm\)0.99 & 95.21\(\pm\)0.07 & 96.09\(\pm\)0.07 & 54.29\(\pm\)0.73 & 73.18\(\pm\)0.21 & 78.71\(\pm\)0.20 & **97.01\(\pm\)0.72** & **97.68\(\pm\)0.07** & **98.06\(\pm\)0.06** & **87.92\(\pm\)1.46** \\ \hline FlexMatch\({}^{\dagger}\) & 95.01\(\pm\)0.09 & 95.08\(\pm\)0.10 & 95.82\(\pm\)0.02 & 60.51\(\pm\)1.54 & 72.98\(\pm\)0.22 & 78.15\(\pm\)0.17 & 92.42\(\pm\)2.60 & 92.98\(\pm\)1.59 & 93.54\(\pm\)0.28 & 89.15\(\pm\)0.71 \\ **FlexMatch (w/SAA)** & **95.31\(\pm\)0.16** & **95.40\(\pm\)0.19** & **96.14\(\pm\)0.08** & **61.87\(\pm\)1.94** & **75.01\(\pm\)0.41** & **79.88\(\pm\)0.34** & **93.15\(\pm\)2.54** & **93.25\(\pm\)2.41** & **94.41\(\pm\)0.27** & **90.85\(\pm\)0.82** \\ \hline Fully-supervised & 95.38\(\pm\)0.05 & & & & & & & & & \\ \hline \end{tabular} \end{table} Table 1: Performance comparisons on CIFAR-10, CIFAR-100, SVHN, STL-10. We compare the performance with recent SSL works [42, 3, 4, 48, 26, 41, 55, 45]. We apply SAA on top of FixMatch [39] and FlexMatch [52], respectively. 
For a fair comparison, we re-ran FixMatch and FlexMatch under the exact same random seed, which is denoted by \({}^{\dagger}\). The fully-supervised comparison follows FlexMatch [52] and is conducted with all labeled data and weak data augmentation. The experiments show that SAA provides a significant improvement to the SSL models. When we choose FlexMatch as the base framework, performance reaches SOTA for most settings. FlexMatch, which improves FixMatch by adjusting the threshold, can be further improved with the help of SAA. For example, on CIFAR-10 with 40 labels, SAA helped FlexMatch increase its mean accuracy to 95.31%. For more difficult tasks, SAA helped FlexMatch increase its mean accuracy to 75.01% and 90.85% on CIFAR-100 with 2500 labels and STL-10, which outperforms all current SSL models. Note that for unbalanced datasets, FlexMatch's threshold estimates for each class can produce large deviations, which is the reason why FlexMatch performs less favorably on the SVHN tasks. Since SVHN is a simple task, a fixed high threshold as in FixMatch is more advantageous. When applying SAA to FixMatch, it is also possible to further improve its performance and outperform existing SSL models. **SAA accelerates the improvement of model performance.** Figure 5(b) shows the performance curve of the model training with the same seed. For example, at the 200k-th and 400k-th iterations, SAA helps FixMatch improve its performance from 86.72% and 87.26% to 91.15% and 91.11%, respectively. FlexMatch can also improve the performance of FixMatch for the same iterations by adjusting the confidence threshold so that the model can learn more samples. SAA, on the other hand, allows _naive samples_ to be learned more effectively, and therefore succeeds in further enhancing the learning of the model. For example, FixMatch reached an accuracy of 87.26%, FlexMatch helped FixMatch improve to 92.74%, and SAA helped FlexMatch further improve to 94.22%. Moreover, we can observe that FixMatch encountered performance stagnation between approximately the 200k-th and 600k-th iterations. This is because during this period the model learns a mass of strongly augmented versions that are not useful for model performance improvement, while our proposed SAA successfully avoids this phenomenon by changing the augmentation for _naive samples_. ## 6 Discussion **The more diverse augmentation is not applicable to all samples.** Our approach differs from [17] in that we only apply the more diverse augmentation to a subset of samples (i.e., the _naive samples_). We experimentally validated this, as shown in Table 2 with Baseline-1 and Baseline-2. It can be clearly seen that applying diverse augmentation to all samples can lead to instability and reduce performance on some tasks. This indicates that some images have too much semantic information corrupted under augmentation \(\mathcal{A}^{\prime}\), leading to an accumulation of errors. To further explore this, we applied \(\mathcal{A}^{\prime}\) to a randomly selected 50% of the samples at each epoch; the mean test accuracy of the model was slightly improved, but still unstable. This further suggests that more diverse augmentation is necessary, but works better when used only on the _naive samples_. **Adaptively dividing _naive samples_.** To identify the _naive samples_, we use the historical consistency loss of the sample. 
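A minimal sketch of this selection step, under the assumption that the per-sample consistency losses of the current epoch are available as a NumPy array, is given below; the small OTSU routine is a generic histogram-based implementation, not the authors' code, and the EMA coefficient simply mirrors Eq. (4) with the value quoted in the implementation details.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Histogram-based Otsu split of a 1-D array into two classes."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist.astype(float) / hist.sum()
    best_t, best_var = edges[1], -1.0
    for i in range(1, nbins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        between = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if between > best_var:
            best_var, best_t = between, edges[i]
    return best_t

def update_selection(H, epoch_losses, alpha=0.999):
    """Update historical losses (Eq. (4)) and re-mark naive samples (Eqs. (5)-(6))."""
    H = (1.0 - alpha) * H + alpha * epoch_losses   # EMA update of H_i as written in Eq. (4)
    tau_s = otsu_threshold(H)                      # adaptive threshold, Eq. (5)
    F_mark = (H <= tau_s).astype(int)              # F_i = 1 marks a naive sample, Eq. (6)
    return H, F_mark, tau_s
```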
We tested this approach on CIFAR-10 and STL-10 with different threshold settings. As shown in Table 2, both the fixed threshold and the fixed proportion approaches have a boosting effect on the model, although the effect is unstable and varies across datasets. \begin{table} \begin{tabular}{l c c} \hline \hline \#Methods of selecting samples to apply \(\mathcal{A}^{\prime}\) & CIFAR-10 & STL-10 \\ \hline Baseline-1: Applying \(\mathcal{A}\) to all samples & 92.50\(\pm\)0.67 & 85.27\(\pm\)1.15 \\ Baseline-2: Applying \(\mathcal{A}^{\prime}\) to all samples & 92.98\(\pm\)2.94 & 83.19\(\pm\)3.98 \\ Baseline-3: Applying \(\mathcal{A}^{\prime}\) to 50\% samples (random) & 94.05\(\pm\)2.00 & 85.98\(\pm\)2.98 \\ \hline Setting the threshold on \(\mathcal{H}\): & & \\ Fixed threshold (0.001) & 93.82\(\pm\)0.95 & 85.98\(\pm\)1.00 \\ Fixed threshold (0.002) & 94.10\(\pm\)1.22 & 85.22\(\pm\)1.87 \\ Fixed proportion threshold (25\%) & 93.10\(\pm\)0.89 & 85.38\(\pm\)1.20 \\ Fixed proportion threshold (50\%) & 93.87\(\pm\)1.52 & 86.08\(\pm\)1.97 \\ Fixed proportion threshold (75\%) & 93.85\(\pm\)2.29 & 84.29\(\pm\)2.03 \\ OTSU threshold & **94.50\(\pm\)1.05** & **87.92\(\pm\)1.46** \\ \hline \hline \end{tabular} \end{table} Table 2: Different methods of selecting the samples to which \(\mathcal{A}^{\prime}\) is applied. Experiments are conducted on the base of FixMatch. There are three ways to set the threshold on \(\mathcal{H}\): 1) fixed value; 2) percentile of sorted \(\mathcal{H}\); 3) automatic OTSU division. Figure 5: All experiments are conducted on the task of CIFAR-10 with 40 labels. (a) shows the performance improvement of SAA on the model with 5 seeds. Although the magnitude of SAA's performance improvement on the model differs under different seeds, there is a steady improvement in general. (b) shows the performance growth during training. For the same number of iterations, SAA can significantly improve the performance of the model. For example, the fixed threshold \(\tau_{s}=0.002\) performs better on the CIFAR-10 task, while \(\tau_{s}=0.001\) performs better on the STL-10 task, so a fixed threshold is a trickier hyperparameter across different datasets. Compared with the first two approaches, OTSU is not only better adapted to cross-dataset tasks, but can also be tuned as the model is trained. Figure 6(a) shows the proportion of _naive samples_ identified by the OTSU method at different iterations. We can find that _naive samples_ are not only related to task difficulty, but also to model performance. For simpler datasets, the proportion of _naive samples_ is greater, and as model performance increases, more samples are also treated as _naive samples_. **SAA prefers warmed-up models.** We analyze model warm-up on CIFAR-10 with 40 labels. The results in Figure 7 show that model warm-up with 100k iterations (10% of the total iterations) performs the best. This is because more diverse augmented images are more difficult to recognize, which can damage the initial training of the model. Therefore, SAA performs better on warmed-up models. However, warming up the model for too long would reduce the time over which SAA can act. **SAA allows the augmented versions to further optimize the model.** Figure 6(b) compares the training loss of _naive samples_ with and without SAA in FixMatch. The plot shows that without SAA, the loss of _naive samples_ remains close to 0 most of the time after the 100th epoch, indicating that the strongly augmented versions are not helpful for model training. 
However, with SAA, the number of times that the strongly augmented versions aid model training increases significantly due to the dynamic adjustment of the augmentation strategy for these samples. **Augment on the image, not on patches.** Previous work [17] proposed to first cut an image into crops and then apply augmentation on them, while we instead apply augmentation on the whole image and then cut it into crops. We conducted experiments to compare these two methods, and the results are shown in Figure 8. Our augmentation method performs better on both the CIFAR-10 and STL-10 tasks. We attribute this to the fact that augmenting the whole image preserves more semantic information, which is safer for training SSL models. **Limitations**. Since the augmentation we use is unlearnable, there is no guarantee that every augmented version is capable of contributing to model learning. Thus, SAA serves to increase the likelihood that the augmented versions are useful to the model. In addition, if the model is already making good use of a sample, further augmentation may not be necessary or may even be detrimental to the model's performance; in that case, the role of SAA is diminished. ## 7 Conclusion In this paper, we first discuss the characteristics of _naive samples_ and their impact on model training and highlight that these samples should receive attention to uncover more value. We propose SAA to achieve this goal, which identifies _naive samples_ in real time and dynamically adjusts their augmentation strategy so that they can contribute to model training. Our experimental results show that SAA significantly improves the performance of SSL methods, such as FixMatch and FlexMatch, on various datasets. Figure 8: Different forms of more diverse augmentation. Experiments are conducted on CIFAR-10 with 40 labels and STL-10. Figure 6: (a) shows the proportion of _naive samples_ identified by OTSU as training proceeds. We recorded the proportion of _naive samples_ on CIFAR-10 with 40 labels, SVHN with 40 labels and STL-10. The pictures on the left and right are FixMatch (w/SAA) and FlexMatch (w/SAA), respectively. (b) shows the consistency loss of two _naive samples_. The pictures on the left and right are FixMatch and FixMatch (w/SAA), respectively. Figure 7: Model warm-up. Experiments are conducted on CIFAR-10 with 40 labels.
In semi-supervised learning, unlabeled samples can be exploited through augmentation and consistency regularization. However, for certain samples, even under strong augmentation, the model classifies them correctly with high confidence, so the loss is close to zero. This indicates that these samples have already been learned and provide no additional optimization benefit for the model. We refer to such samples as "naive samples". Unfortunately, existing SSL models overlook the characteristics of naive samples and apply the same learning strategy to all samples. To further optimize SSL models, it is important to pay attention to naive samples and apply more diverse augmentation to them. To this end, Sample Adaptive Augmentation (SAA) is proposed, which consists of two modules: 1) a sample selection module; 2) a sample augmentation module. Specifically, the sample selection module uses the historical training information at each epoch
2309.14782
Perturbative computation of thermal characteristics of the Stoner phase transition
We apply the thermal (imaginary time) perturbative expansion to the relevant effective field theory to compute characteristics of the phase transition to the ordered state which can occur at low temperatures in the gas of (nonrelativistic) spin 1/2 fermions interacting through a short-range spin independent repulsive binary interaction potential. We show how to obtain a systematic expansion of the system's free energy depending on the densities $n_+$ and $n_-$ of spin-up and spin-down fermions. In this paper we truncate this expansion at the second order and determine, by numerically minimizing the free energy, the equilibrium proportions of $n_+$ and $n_-$ (that is, the system's polarization) as functions of the temperature, the system's overall density $n = n_+ + n_-$ and the strength of the interaction.
Oskar Grocholski, Piotr H. Chankowski
2023-09-26T09:31:02
http://arxiv.org/abs/2309.14782v1
# Perturbative computation of thermal characteristics of the Stoner phase transition ###### Abstract We apply the thermal (imaginary time) perturbative expansion to the relevant effective field theory to compute characteristics of the phase transition to the ordered state which can occur at low temperatures in the gas of (nonrelativistic) spin 1/2 fermions interacting through a short range spin independent repulsive binary interaction potential. We show how to obtain a systematic expansion of the system's free energy depending on the densities \(n_{+}\) and \(n_{-}\) of spin up and spin down fermions. In this paper we truncate this expansion at the second order and determine, by numerically minimizing the free energy, the equilibrium proportions of \(n_{+}\) and \(n_{-}\) (that is, the system's polarization) as functions of the temperature, the system's overall density \(n=n_{+}+n_{-}\) and the strength of the interaction. _Keywords_: Diluted gas of interacting fermions, phase transitions, thermodynamic potentials, effective field theory, scattering lengths Introduction There is a qualitative argument that in the gas of spin 1/2 fermions interacting through a short range repulsive spin-independent binary potential a phase transition to the ordered state should occur, if the interaction strength and/or the system's density is sufficiently large. Indeed, at zero temperature, when the entropy factor does not intervene, the configuration of the system in which there are more fermions in one spin state than in the other one may be energetically favoured because, due to the Pauli exclusion principle the \(s\)-wave interaction of fermions in the same spin state is impossible and the resulting decrease of the interaction energy may be greater than the associated increase of the kinetic energy (increase of the Fermi energy of the more populated spin state). Theoretical investigation of this phenomenon, called the Stoner transition, taking into account its temperature dependence, requires the full machinery of statistical mechanics. The standard textbook treatment of the problem [1, 2], equivalent to the so-called mean field approach or the Hartree-Fock approximation, employs the pseudo-potential method which allows to determine in the first order approximation the Hamiltonian spectrum and to compute the Canonical Ensemble partition function of the system. In this approximation the phase transition is continuous (with the divergent magnetic susceptibility characterized by the critical exponent \(\gamma=1\) and a finite discontinuity of the heat capacity) and at low temperatures (where the Sommerfeld expansion can be used to obtain analytical expression for the relevant chemical potentials) it occurs when [3, 1, 2] \[k_{\rm F}a_{0}\geq\frac{\pi}{2}\left[1+\frac{\pi^{2}}{12}\left(\frac{k_{\rm B }T}{\varepsilon_{\rm F}}\right)^{2}+\ldots\right],\] where the (overall) Fermi wave vector and energy \[k_{\rm F}=\left(3\pi^{2}\,\frac{N}{V}\right)^{1/3},\qquad\varepsilon_{\rm F}= \frac{\hbar^{2}k_{\rm F}^{2}}{2m_{f}}\,, \tag{1}\] characterize the density of the system and \(a_{0}>0\) is the \(s\)-wave scattering length characterizing the strength of the (repulsive) interaction. The continuous character of the Stoner transition obtained in this approximation is, however, accidental - it is due to a numerical coincidence specific for a (three-dimensional) system of spin \(s=1/2\) fermions only (in the same approximation the transition is of first order if \(s>1/2\) and/or \(D\neq 3\)). 
In fact, computing the system's free energy beyond the mean field approximation, using the ordinary second order perturbative expansion, it was found [4] that at low temperatures it is of the first order, just as had been suggested in [5] on the basis of the generic presence of non-analytic terms (resulting from the coupling of the order parameter to the gap-less modes) in the free energy which cause the transition to have the first order character. The character of the considered transition (its dependence on the parameter \(k_{\rm F}a_{0}\)) can be most easily investigated at zero temperature because then the problem reduces to the computation of the ground-state energy density \(E_{\Omega}/V\) of the system of fermions interacting through a binary spin-independent repulsive potential as a function of the system's density \(n=N/V\) and its polarization \[P=(N_{+}-N_{-})/N\,. \tag{2}\] Such a computation is most easily performed using the modern effective field theory approach, the application of which to this problem has been pioneered in [6]. In this approach the underlying spatially nonlocal field-theory interaction (see e.g. [7] for the exposition of the relevant formalism of the second quantization), resulting from the ordinary potential two-body interaction, is replaced by an infinite set of local (contact) effective interactions \[\hat{V}_{\rm int}=C_{0}\!\int\!d^{3}{\bf x}\,\psi_{+}^{\dagger}\psi_{+}\psi_{- }^{\dagger}\psi_{-}+\hat{V}_{\rm int}^{(C_{2})}+V_{\rm int}^{(C_{2}^{\prime})}+\ldots \tag{3}\] \(\psi_{\pm}({\bf x})\) are here the usual field operators of spin up and down fermions; the terms \(V_{\rm int}^{(C_{2})}\), \(V_{\rm int}^{(C_{2}^{\prime})}\) (which will be not needed in this work) represent local operators of lower length dimension with four fermionic fields and two spatial derivatives and the ellipsis stands for other local operators (with more derivatives and/or field operators) of yet lower length dimension (see [6]). The amount of work needed to obtain the systematic expansion of the ground state energy in powers (which can be modified by logarithms) of \(k_{\rm F}R\), where \(R\) is the characteristic length scale of the underlying two-body spin independent interaction potential, is in this way greatly reduced. This is because in this approach the coupling constants, like \(C_{0}\) in (3), of the effective local interactions are directly determined in terms of the scattering lengths \(a_{\ell}\) and effective radii \(r_{\ell}\), \(\ell=0,1,\ldots\), (which are assumed to be of order \(\sim R\)) parametrizing the low energy expansion in powers of the relative momentum \(\hbar|{\bf k}|\) of the elastic scattering amplitude of two fermions. The simplifications brought in by the effective field theory method allowed to easily reproduce [8] and generalize to arbitrary repulsive potentials and arbitrary spins \(s\)[9] the old result of Kanno [10] who computed the order \((k_{\rm F}R)^{2}\) correction to the energy density using the specific hard sphere interaction of spin \(s=1/2\) fermions. The first order character of the phase transition at \(T=0\) is then clearly seen in the form of the energy density obtained in this approximation plotted as a function of the order parameter \(P\): starting from some value of \(k_{\rm F}a_{0}\) the energy density develops the second minimum well away from the one at \(P=0\) and at \(k_{\rm F}a_{0}=1.054\) (for \(s=1/2\)) this second minimum becomes deeper than that at \(P=0\). 
However the analysis of the dependence on the order parameter of the system's energy density which includes the complete order \((k_{\rm F}R)^{3}\) corrections obtained recently in [11, 12] using the same effective field theory approach shows that, independently of the value \(s\) of the fermion spin, they have the effect of erasing the first order character of the Stoner transition, making it almost indistinguishable from the continuous one. This is reflected in the fact that the height of the hill separating the minimum at \(P\neq 0\) from the one at \(P=0\) is greatly reduced (for higher spins also the position of the nontrivial minimum of \(E_{\Omega}/V\) as a function of the relevant order parameter is strongly shifted towards its small values) compared to the situation without these corrections. Moreover, there are claims [13] based on a resummation of an infinite subclass of Feynman diagrams contributing to the ground-state energy density that the transition (at \(T=0\)) is indeed continuous. Although it is not obvious that the contributions taken into account in this resummation are really the dominant ones [12], the results it leads to seem to agree well, as far as the critical value of \(k_{\rm F}a_{0}\) is concerned, with the numerical quantum Monte Carlo simulations [14]. In view of this situation it is desirable to investigate how the higher order corrections influence the character of the Stoner phase transition at nonzero temperatures. With this goal in mind in this paper we formulate a systematic perturbative expansion of the thermodynamic potentials of the system in question applying the standard imaginary time formalism [7] within the effective field theory. We show that the expansion of the free energy is in this approach particularly simple being given by the same connected vacuum Feynman diagrams which give nonzero contributions to the energy density expressed in terms of the chemical potentials of the noninteracting system. In the numerical analysis we restrict ourselves in this paper only to the second order contributions reproducing the results obtained in [4], but with more labour the computations can be extended to higher orders as well. ## 2 Perturbative expansion of the thermodynamic potential \(\Omega(T,V,\mu_{+},\mu_{-})\) The natural equilibrium statistical physics formalism in which to treat the problem of the gas of fermions the interactions of which preserve their spins, and therefore the numbers \(N_{\sigma}\) of particles with the spin projection \(\sigma\), is the Grand Canonical Ensemble with separate chemical potentials \(\mu_{\sigma}\) associated with the individual spin projections. One is therefore interested in the statistical operator (as usually, \(\beta\equiv 1/k_{\rm B}T\)) \[\hat{\rho}=\frac{1}{\Xi_{\rm stat}}\,e^{-\beta\hat{K}}\,,\ \ \ \ \ {\rm in\ which}\ \ \ \ \hat{K}=\hat{H}_{0}-\sum_{\sigma}\mu_{\sigma}\hat{N}_{\sigma}+\hat{V}_{\rm int }\equiv\hat{K}_{0}+\hat{V}_{\rm int}\,, \tag{4}\] and in computing the statistical sum (we specify the notation to the case of spin \(1/2\) fermions, so that \(\sigma=+,-\)) \[\Xi_{\rm stat}(T,V,\mu_{+},\mu_{-})={\rm Tr}\Big{(}e^{-\beta\hat{K}}\Big{)}\,, \tag{5}\] from which all the necessary thermodynamic potentials can in principle be obtained by performing the standard steps. 
The free part \(\hat{K}_{0}\) of the operator \(\hat{K}=\hat{K}_{0}+\hat{V}_{\rm int}\), where \(\hat{V}_{\rm int}\) will be taken in the form (3), is \[\hat{K}_{0}=\sum_{{\bf p},\sigma}\left(\varepsilon_{\bf p}-\mu_{\sigma} \right)a^{\dagger}_{{\bf p},\sigma}a_{{\bf p},\sigma}=\sum_{\sigma}\int\! \frac{d^{3}{\bf p}}{(2\pi)^{3}}\left(\varepsilon_{\bf p}-\mu_{\sigma}\right) a^{\dagger}_{\sigma}({\bf p})\,a_{\sigma}({\bf p})\,, \tag{6}\] with \(\varepsilon_{\bf p}\equiv\hbar^{2}{\bf p}^{2}/2m_{f}\), in the normalizations in the finite volume \(V\) and in an infinite space, respectively. To compute perturbatively the statistical sum \(\Xi_{\rm stat}(T,V,\mu_{+},\mu_{-})\) one introduces [7] the (imaginary time) interaction picture evolution operator \[{\cal U}_{I}(\tau_{2},\tau_{1})=e^{\tau_{2}\hat{K}_{0}}\,e^{-(\tau_{2}-\tau_{ 1})\hat{K}}\,e^{-\tau_{1}\hat{K}_{0}}\,, \tag{7}\] which satisfies the differential equation \[\frac{d}{d\tau_{2}}\,{\cal U}_{I}(\tau_{2},\tau_{1})=-V^{I}_{\rm int}(\tau_{2})\,{ \cal U}_{I}(\tau_{2},\tau_{1})\,,\] (\(V^{I}_{\rm int}(\tau_{2})=e^{\tau_{2}\hat{K}_{0}}V_{\rm int}e^{-\tau_{2}\hat{K}_{ 0}}\)) with the "initial" condition \({\cal U}_{I}(\tau,\tau)=\hat{1}\) and which formally can be written in the form \[{\cal U}_{I}(\tau_{2},\tau_{1})={\rm T}_{\tau}\exp\biggl{\{}-\int_{\tau_{1}}^{ \tau_{2}}\!d\tau\,V^{I}_{\rm int}(\tau)\biggr{\}}\,,\] in which \({\rm T}_{\tau}\) is the symbol of the "chronological" ordering. Since \(e^{-\beta\hat{K}}=e^{-\beta\hat{K}_{0}}{\cal U}_{I}(\beta,0)\), the statistical sum can be represented as \[\Xi_{\rm stat}={\rm Tr}\Bigl{(}e^{-\beta\hat{K}_{0}}\,{\cal U}_{I}(\beta,0) \Bigr{)}\equiv\Xi_{\rm stat}^{(0)}\,{\rm Tr}\Bigl{(}\hat{\rho}^{(0)}\,{\cal U} _{I}(\beta,0)\Bigr{)}\,, \tag{8}\] where \(\hat{\rho}^{(0)}\) and \(\Xi_{\rm stat}^{(0)}\) are the statistical operator and the statistical sum of the noninteracting system, respectively. The perturbative expansion of \(\Xi_{\rm stat}\) is then given by the series \[\Xi_{\rm stat}=\Xi_{\rm stat}^{(0)}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\! \int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}{\rm Tr}\left(\hat {\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{I}_{\rm int}( \tau_{1})]\right). \tag{9}\] The corresponding expansion of the potential \(\Omega(T,V,\mu_{+},\mu_{-})=-\frac{1}{\beta}\ln\Xi_{\rm stat}(T,V,\mu_{+},\mu_ {-})\) is \[\Omega=\Omega^{(0)}-\frac{1}{\beta}\ln\Biggl{\{}\sum_{n=0}^{\infty}\frac{(-1) ^{n}}{n!}\!\int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}\,{\rm Tr }\Bigl{(}\hat{\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{ I}_{\rm int}(\tau_{1})]\Bigr{)}\Biggr{\}}\,, \tag{10}\] its first term \(\Omega^{(0)}\) being the textbook expression [1] (\(\varepsilon_{\bf k}=\hbar^{2}{\bf k}^{2}/2m_{f}\)) \[\Omega^{(0)}(T,V,\mu_{\sigma})=-\frac{1}{\beta}\,\sum_{\sigma}V\!\int\!\frac{d ^{3}{\bf k}}{(2\pi)^{3}}\ln\Bigl{(}1+e^{-\beta(\varepsilon_{\bf k}-\mu_{ \sigma})}\Bigr{)}\,. \tag{11}\] Or, since the logarithm picks up connected contributions only, \[\Omega=\Omega^{(0)}-\frac{1}{\beta}\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\! \int_{0}^{\beta}\!d\tau_{n}\ldots\int_{0}^{\beta}\!d\tau_{1}{\rm Tr}\Bigl{(} \hat{\rho}^{(0)}\,{\rm T}_{\tau}[V^{I}_{\rm int}(\tau_{n})\ldots V^{I}_{\rm int }(\tau_{1})]\Bigr{)}^{\rm con}\,. \tag{12}\] In this form the expression for \(\Omega\) is just the thermal analog of the expansion of the formula1 Footnote 1: Here \(T\) denotes time and not the temperature. 
\[E_{\Omega}=E_{\Omega_{0}}-\lim_{T\to\infty}\frac{\hbar}{iT}\,\langle\Omega_{0 }|{\rm T}_{t}\exp\!\left(-\frac{i}{\hbar}\!\int_{-T/2}^{T/2}\!dt\,V^{I}_{\rm int }(t)\right)|\Omega_{0}\rangle^{\rm con}\,, \tag{13}\] used in [6, 8, 11, 12] for computing the ground state energy \(E_{\Omega}\) of the system. It is clear that the correspondence between the two formalisms is \(\beta\leftrightarrow iT/\hbar\) (it transforms the \(K\)-picture operators into the Heisenberg picture ones and vice versa). The formula (13) for the ground state energy is thus obtained from the thermal expansion (12) by taking the limit \(\beta\to\infty\) and simultaneously adjusting the chemical potential \(\mu_{\sigma}\) so that there are \(N_{\sigma}\) particles with the spin projection \(\sigma\) (see below). Evaluation of the successive terms of the expansion (12) reduces, owing to the thermal analog of the Wick formula (see [7]), to drawing all possible connected Feynman diagrams with a given numbers of different interaction vertices arising from \(\tilde{V}_{\rm int}\) joined by the oriented lines and integrating over the positions \({\bf x}\) and "times" \(\tau\) ascribed to these vertices the corresponding products of free thermal propagators \[-{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\tau_{2}-\tau_{1};{\bf x}_{2}-{\bf x}_ {1})=\frac{1}{\beta}\sum_{n}\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,e^{-i\omega _{n}^{F}(\tau_{2}-\tau_{1})}\,e^{i{\bf k}\cdot({\bf x}_{2}-{\bf x}_{1})}\left( -\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\omega_{n}^{F},{\bf k})\right),\] the Fourier transforms \(-\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}\) of which have the form [7] (the definition of \(\omega_{n}^{F}\) as well as of \(\omega_{n}^{B}\) are given in (A.1) and (A.2)) \[-\tilde{\cal G}^{(0)}_{\sigma_{2},\sigma_{1}}(\omega_{n}^{F},{\bf k})=\frac{- \delta_{\sigma_{2},\sigma_{1}}}{i\omega_{n}^{F}-(\varepsilon_{\bf k}-\mu_{ \sigma_{1}})}\,,\] associated with the (oriented) lines connecting the vertices of the diagram. The resulting Feynman rules in the "momentum" space are almost identical with the ordinary ones except that the integrations over frequencies are replaced by summations over the (fermionic) Matsubara frequencies \(\omega_{n}^{F}=(\pi/\beta)(2n+1)\), \(n\in{\mathbb{Z}}\). In this way one obtains the expansion of the potential \(\Omega(T,V,\mu_{+},\mu_{-})\) the successive terms of which depend on the chemical potentials \(\mu_{+}\) and \(\mu_{-}\) which must be adjusted in successive orders of the expansion to yield through the relations \[N_{\pm}=-\left(\partial\Omega/\partial\mu_{\pm}\right)_{T,V}\,, \tag{14}\] the prescribed densities \(n_{+}=N_{+}/V\) and \(n_{-}=N_{-}/V\) of particles with the spin projections up and down. It will be instructive to recover first, using this formalism, the textbook results [1, 2] of the mean field approximation. The first correction \(\Omega^{(1)}\) to the Grand potential is given by the single diagram shown in Figure 1. The corresponding expression reads \[\Omega^{(1)}=\frac{1}{\beta}\,C_{0}\!\int_{0}^{\beta}\!d\tau\!\int\!d^{3}{\bf x }\,{\rm Tr}\Big{(}\hat{\rho}^{(0)}{\rm T}_{\tau}[\hat{\psi}_{+}^{\dagger I} \hat{\psi}_{+}^{I}\hat{\psi}_{-}^{\dagger I}\hat{\psi}_{-}^{I}]\Big{)}=C_{0}\, V\,{\cal G}^{(0)}_{++}(0,{\bf 0})\,{\cal G}^{(0)}_{--}(0,{\bf 0})\,. \tag{15}\] Using the summation formula (A.1) one obtains \[{\cal G}^{(0)}_{\pm\pm}(0,{\bf 0})=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\, \Big{[}1+e^{\beta(\varepsilon_{\bf k}-\mu_{\pm})}\Big{]}^{-1}\,. 
\tag{16}\] Figure 1: The first order correction \(\Omega^{(1)}\) to the thermodynamic potential \(\Omega(T,V,\mu_{+},\mu_{-})\). Solid and dashed lines represent fermions with opposite spin projections. As will be shown in the next section, to the first order in the coupling \(C_{0}\) the free energy \(F(T,V,N_{+},N_{-})\) is given by \[F(T,V,N_{+},N_{-}) = \Omega^{(0)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})+N_{+}\mu_{+}^{(0)}+N _{-}\mu_{-}^{(0)} \tag{17}\] \[+ \Omega^{(1)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})+\dots,\] where \(\mu_{\pm}^{(0)}\) are the zeroth order chemical potentials determined by the conditions analogous to (14) but with \(\Omega\) replaced by \(\Omega^{(0)}\) given by (11). It is convenient to define the function \[f(\nu)\equiv\frac{3}{2}\!\int_{0}^{\infty}\!d\xi\,\frac{\xi^{1/2}}{1+e^{\xi- \nu}}\equiv\frac{3\sqrt{\pi}}{4}\,f_{3/2}(\nu)\,, \tag{18}\] and to rewrite these conditions in the form \[f(\nu_{\pm})=\left(\frac{\varepsilon_{\rm F}^{(0)}(n_{\pm})}{k_{\rm B}T} \right)^{3/2}\,, \tag{19}\] in which \(\nu_{\pm}\equiv\mu_{\pm}/k_{\rm B}T\) and \(\varepsilon_{\rm F}^{(0)}(n)=(6\pi^{2}n)^{2/3}\hbar^{2}/2m_{f}\) is the Fermi energy of the system of \(N=nV\) spin 0 noninteracting fermions enclosed in the volume \(V\). The function \(f(\nu)\), which is a decent monotonically growing function of \(\nu\) mapping \(\mathbb{R}\) onto \(\mathbb{R}_{+}\), has the inverse, so after writing \(n_{\pm}\) as \((n/2)(1\pm P)\) the solutions take the form2 Footnote 2: Inverting the appropriate expansions of the integral (18) given e.g. in [1] it is straightforward to find that asymptotically \[f^{-1}(x)=\left\{\begin{array}{c}\ln\!\left(\sqrt{2}-\sqrt{2-(4x/3)\sqrt{8/ \pi}}\right)+\dots,\quad x\ll 1\\ x^{2/3}\left[1-(\pi^{2}/12)\,x^{-4/3}-(\pi^{4}/80)\,x^{-8/3}-(1511\pi^{6}/20736 0)\,x^{-4}+\dots\right],\quad x\gg 1\end{array}\right.\] \[\frac{\mu_{\pm}^{(0)}}{k_{\rm B}T}=f^{-1}\!\left((1\pm P)\left(\frac{ \varepsilon_{\rm F}(n)}{k_{\rm B}T}\right)^{3/2}\right), \tag{20}\] in which \(\varepsilon_{\rm F}(n)\) the system's overall Fermi energy (1). Expressed in terms of the zeroth order chemical potentials \(\mu_{\pm}^{(0)}\), the first order correction (15) can be simply written as \(\Omega^{(1)}(T,V,\mu_{+}^{(0)},\mu_{-}^{(0)})=C_{0}V(N_{+}/V)(N_{-}/V)\), i.e. it is independent (when expressed in terms of the particle densities) of the temperature. Minimization with respect to \(N_{+}\) of \(F(T,V,N_{+},N-N_{+})\) truncated to the first order in the coupling \(C_{0}\) (at fixed \(N\)) then leads to the equilibrium condition \[\mu_{+}^{(0)}(N_{+})-\mu_{-}^{(0)}(N-N_{+})+\frac{C_{0}}{V}\,(N-2N_{+})=0\,,\] which, because \(N-2N_{+}=-NP\) and (to this order) \(C_{0}=(4\pi\hbar^{2}/m_{f})a_{0}\), can be rewritten in the familiar form [1] \[\mu_{+}^{(0)}(N_{+})-\mu_{-}^{(0)}(N_{-})=\frac{8}{3\pi}\,\varepsilon_{\rm F} \left(k_{\rm F}a_{0}\right)P\,. \tag{21}\] This leads to the continuous phase transition. The effect of the external magnetic field \({\cal H}\) can be also taken into account by simply including the interaction with it in the free part of the Hamiltonian, i.e. by replacing \(\mu_{\pm}\) in (6) by \(\tilde{\mu}_{\pm}=\mu_{\pm}\pm{\cal H}\) (the magnetic moment has been here included in \({\cal H}\) which has therefore the dimension of energy). 
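To make this mean-field step concrete, the following is a minimal numerical sketch (not the code used for the results reported below): it inverts the function \(f\) of Eq. (18) to obtain the zeroth-order chemical potentials of Eq. (20) and searches for a nontrivial solution of the equilibrium condition (21) at zero field; energies are measured in units of \(\varepsilon_{\rm F}\), and the bracketing of the inversion and the grid-based root search are conveniences of the sketch, not part of the formalism.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(nu):
    """f(nu) of Eq. (18): (3/2) * int_0^infty dxi sqrt(xi) / (1 + exp(xi - nu))."""
    val, _ = quad(lambda xi: np.sqrt(xi) / (1.0 + np.exp(xi - nu)), 0.0, np.inf)
    return 1.5 * val

def f_inv(x):
    """Numerical inverse of f (f is monotonically increasing on the real line)."""
    hi = max(60.0, 2.0 * x ** (2.0 / 3.0))      # f(nu) ~ nu^(3/2) for large nu
    return brentq(lambda nu: f(nu) - x, -60.0, hi)

def delta0(P, t):
    """Zeroth-order chemical potentials mu_pm^(0)/eps_F of Eq. (20), with eps_F = 1."""
    return tuple(t * f_inv((1.0 + s * P) / t ** 1.5) for s in (+1.0, -1.0))

def mean_field_polarization(kF_a0, t):
    """Nontrivial solution of the first-order equilibrium condition (21), if any."""
    def g(P):
        dp, dm = delta0(P, t)
        return (8.0 / (3.0 * np.pi)) * kF_a0 * P - (dp - dm)
    Ps = np.linspace(1e-3, 0.999, 200)          # P = 0 always solves (21); look away from it
    gs = np.array([g(P) for P in Ps])
    idx = np.where(np.diff(np.sign(gs)) != 0)[0]
    return 0.0 if len(idx) == 0 else brentq(g, Ps[idx[-1]], Ps[idx[-1] + 1])
```

The grid-and-bracket search is only a convenience; any standard root finder applied to the equilibrium condition works equally well.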
Since ultimately the free energy will be cast in the form in which its dependence on \(N_{\pm}\) and \({\cal H}\) enters only through \(\tilde{\mu}_{\pm}^{(0)}\) which should be determined from the conditions \[\frac{\tilde{\mu}_{\pm}^{(0)}}{k_{\rm B}T}=\frac{\mu_{\pm}^{(0)} \pm{\cal H}}{k_{\rm B}T}=f^{-1}\Bigg{(}(1\pm P)\left(\frac{\varepsilon_{\rm F}( n)}{k_{\rm B}T}\right)^{3/2}\Bigg{)}\,, \tag{22}\] this prescription remains valid to all orders of the expansion. In particular, in the first order approximation the equilibrium condition, written in the convenient dimensionless variables \[t\equiv\frac{T}{T_{\rm F}}\equiv\frac{k_{\rm B}T}{\varepsilon_{ \rm F}}\,,\ \ \ \ \ h\equiv\frac{{\cal H}}{\varepsilon_{\rm F}}\,,\ \ \ \ \delta_{\pm}\equiv\frac{\mu_{\pm}^{(0)}}{ \varepsilon_{\rm F}}\,, \tag{23}\] takes the form \[\frac{8}{3\pi}\,(k_{\rm F}a_{0})\,P+2h=t\left[f^{-1}\bigg{(}\frac{1+P}{t^{3/2} }\bigg{)}-f^{-1}\bigg{(}\frac{1-P}{t^{3/2}}\bigg{)}\right]. \tag{24}\] If the asymptotic expansion of \(f^{-1}(x)\) for \(x\gg 1\) is used, this reproduces the equilibrium condition derived in [1]. For further applications it will be convenient to write down explicitly the formula (17) (including the external magnetic field \({\cal H}\)) expressing it through the introduced dimensionless variables (23) and the polarization (2): \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{F}{\varepsilon_{\rm F}V} =-\frac{3\sqrt{\pi}}{4}\,t^{5/2}\left[f_{5/2}(\tilde{\nu}_{+})+f_ {5/2}(\tilde{\nu}_{-})\right]\] \[\ \ \ \ \ \ \ \ \ \ \ +(1+P)(\tilde{\delta}_{+}-h)+(1-P)( \tilde{\delta}_{-}+h)+(k_{\rm F}a_{0})\,\frac{4}{3\pi}\,(1-P^{2})+\dots \tag{25}\] Here3 Footnote 3: By the appropriate change of variables and the integration by parts \(\Omega^{(0)}\) given by (11) is written in terms of the standard integral (26) with \(p=5/2\)[1]. \[f_{p}(\nu)=\frac{1}{\Gamma(p)}\int_{0}^{\infty}\frac{d\xi\,\xi^{ p-1}}{1+e^{\xi-\nu}}\,, \tag{26}\] and \(\tilde{\nu}_{\pm}\) (and \(\tilde{\delta}_{\pm}\equiv t\tilde{\nu}_{\pm}\)) are given by (22). In the limit \(T\to 0\) (\(t\to 0\)) in which \(\tilde{\nu}_{\pm}\gg 1\), \(f_{5/2}(\nu)=(4/3\sqrt{\pi})(2/5)\nu^{5/2}+\dots\), while (c.f. the expansion of the function \(f^{-1}(x)\) given in the footnote) \(\tilde{\nu}_{\pm}=(1\pm P)^{2/3}+\dots\) and the right hand side of (25) tends to \[-\frac{2}{5}\left[(1+P)^{5/3}+(1-P)^{5/3}\right]+(1+P)\left[(1+P )^{2/3}-h\right]\] \[\ Expansion of the free energy From the thermodynamic point of view much more convenient to work with than the potential \(\Omega\) is the free energy \(F=\Omega+\mu_{+}N_{+}+\mu_{-}N_{-}\) which canonically depends on \(T\), \(V\) and the particle numbers \(N_{\pm}\). It turns out that the expansion of this potential is also simpler. We will derive it here up to the third order following the method outlined in [15]. To make the notation more transparent we will denote the chemical potentials as \[\mu_{+}\equiv x=x_{0}+x_{1}+x_{2}+\ldots,\ \ \ \ \ \mu_{-}\equiv y=y_{0}+y_{1}+y_{2}+\ldots, \tag{27}\] where the successive terms \(x_{n}\), \(y_{n}\) correspond to the successive terms \(\Omega^{(n)}\) of the expansion of the potential \(\Omega\). Introducing the notation \(\Omega_{x}^{(n)}\), \(\Omega_{y}^{(n)}\), \(\Omega_{xx}^{(n)}\), etc. for the first, second, etc. 
derivatives of \(\Omega^{(n)}\) with respect to their chemical potential arguments and expanding the right hand side of the relation (\(N_{x}\equiv N_{+}\), \(N_{y}\equiv N_{-}\)) \[F =\Omega^{(0)}(x_{0}+x_{1}+x_{2}+x_{3}+\ldots,\,y_{0}+y_{1}+y_{2}+ y_{3}+\ldots)\] \[+\Omega^{(1)}(x_{0}+x_{1}+x_{2}+\ldots,\,y_{0}+y_{1}+y_{2}+\ldots)\] \[+\Omega^{(2)}(x_{0}+x_{1}+\ldots,\,y_{0}+y_{1}+\ldots)+\Omega^{(3 )}(x_{0}+\ldots,y_{0}+\ldots)+\ldots\] \[+(x_{0}+x_{1}+x_{2}+x_{3}+\ldots)N_{x}+(y_{0}+y_{1}+y_{2}+y_{3}+ \ldots)N_{y}\,,\] one obtains, using the zeroth order relations \(\Omega_{x}^{(0)}=-N_{x}\) and \(\Omega_{y}^{(0)}=-N_{y}\) and the fact that \(\Omega^{(0)}(x_{0},y_{0})=\Omega^{\rm free}(x_{0})+\Omega^{\rm free}(y_{0})\) (cf. the formula (11)), i.e. that \(\Omega_{xy}^{(0)}=0\), \[F =\left(\Omega^{(0)}+x_{0}N_{x}+y_{0}N_{y}\right)+\left(\Omega^{(1 )}\right)\] \[+\left(\Omega^{(2)}+x_{1}\,\Omega_{x}^{(1)}+y_{1}\,\Omega_{y}^{(1 )}+\frac{1}{2}\,x_{1}^{2}\,\Omega_{xx}^{(0)}+\frac{1}{2}\,y_{1}^{2}\,\Omega_{yy }^{(0)}\right)\] \[+\left(\Omega^{(3)}+x_{1}\,\Omega_{x}^{(2)}+y_{1}\,\Omega_{y}^{(2 )}+\frac{1}{2}\,x_{1}^{2}\,\Omega_{xx}^{(1)}+\frac{1}{2}\,y_{1}^{2}\,\Omega_{yy }^{(1)}+x_{1}\,y_{1}\,\Omega_{xy}^{(1)}\right. \tag{28}\] \[\qquad+x_{2}\,\Omega_{x}^{(1)}+y_{2}\,\Omega_{y}^{(1)}+x_{1}\,x_ {2}\,\Omega_{xx}^{(0)}+y_{1}\,y_{2}\,\Omega_{yy}^{(0)}+\frac{1}{6}\,x_{1}^{3} \,\Omega_{xxx}^{(0)}+\frac{1}{6}\,y_{1}^{3}\,\Omega_{yyy}^{(0)}\bigg{)}+\ldots,\] all functions being now evaluated at \(x_{0}\) and \(y_{0}\) (at \(\tilde{x}_{0}=\mu_{+}^{(0)}+{\cal H}\) and \(\tilde{y}_{0}=\mu_{-}^{(0)}-{\cal H}\) if there is an external magnetic field). The terms in the successive brackets are the successive terms of the expansion of the free energy. The first order correction \(F^{(1)}\) used in the preceding section is indeed given by \(\Omega^{(1)}(x_{0},y_{0})\) (by \(\Omega^{(1)}(\tilde{x}_{0},\tilde{y}_{0})\)). Furthermore, expanding around \(x_{0}\) and \(y_{0}\) (or \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)) the right hand side of the relation (14) which determines the chemical potential \(x\) \[-N_{x}=\Omega_{x}^{(0)}+(x_{1}+x_{2})\,\Omega_{xx}^{(0)}+\frac{1}{2}\,x_{1}^{2 }\,\Omega_{xxx}^{(0)}+\Omega_{x}^{(1)}+x_{1}\,\Omega_{xx}^{(1)}+y_{1}\,\Omega_ {xy}^{(1)}+\Omega_{x}^{(2)}+\ldots,\] and the other similar relation for \(y\), and taking into account that \(x_{0}\) and \(y_{0}\) are such that \(-N_{x}=\Omega_{x}^{(0)}\), \(-N_{y}=\Omega_{y}^{(0)}\), one obtains \[x_{1} =-\frac{\Omega_{x}^{(1)}}{\Omega_{xx}^{(0)}}\,,\] \[x_{2} =-\frac{\Omega_{x}^{(2)}}{\Omega_{xx}^{(0)}}+\frac{\Omega_{xx}^{( 1)}\Omega_{x}^{(1)}}{[\Omega_{xx}^{(0)}]^{2}}+\frac{\Omega_{xy}^{(1)}\Omega_ {y}^{(1)}}{\Omega_{xx}^{(0)}\Omega_{yy}^{(0)}}-\frac{\Omega_{xxx}^{(0)}[ \Omega_{x}^{(1)}]^{2}}{2[\Omega_{xx}^{(0)}]^{3}}\,. \tag{29}\] \(y_{1}\) and \(y_{2}\) are given by the analogous formulae. Inserting the corrections to the chemical potentials determined in this way into the formulae for \(F^{(2)}\) and \(F^{(3)}\) one finds that (again all functions are evaluated at \(x_{0}\) and \(y_{0}\) or at \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)) \[F^{(2)}=\Omega^{(2)}-\frac{[\Omega_{x}^{(1)}]^{2}}{2\,\Omega_{xx}^{(0)}}-\frac{ [\Omega_{y}^{(1)}]^{2}}{2\,\Omega_{yy}^{(0)}}\,. 
\tag{30}\] and (the formulae for \(x_{1}\) and \(y_{1}\) immediately imply that the first four terms in the last line of the formula (28) sum up to zero) that \[F^{(3)}=\Omega^{(3)}-\frac{\Omega_{x}^{(2)}\Omega_{x}^{(1)}}{ \Omega_{xx}^{(0)}}-\frac{\Omega_{y}^{(2)}\Omega_{y}^{(1)}}{\Omega_{yy}^{(0)}} \tag{31}\] \[\qquad\qquad+\frac{\Omega_{xx}^{(1)}[\Omega_{x}^{(1)}]^{2}}{2\,[ \Omega_{xx}^{(0)}]^{2}}+\frac{\Omega_{yy}^{(1)}[\Omega_{y}^{(1)}]^{2}}{2\,[ \Omega_{yy}^{(0)}]^{2}}+\frac{\Omega_{xy}^{(1)}\Omega_{x}^{(1)}\Omega_{y}^{( 1)}}{\Omega_{xx}^{(0)}\Omega_{yy}^{(0)}}-\frac{\Omega_{xxx}^{(0)}[\Omega_{x}^ {(1)}]^{3}}{6\,[\Omega_{xx}^{(0)}]^{3}}-\frac{\Omega_{yyy}^{(0)}[\Omega_{y}^{( 1)}]^{3}}{6\,[\Omega_{yy}^{(0)}]^{3}}\,.\] It will be seen that the extra terms in (30) precisely cancel the contributions to \(\Omega^{(2)}\) of those diagrams which do not contribute to the expansion of the formula (13) for the ground state energy density. The analogous cancellation of the extra terms in (31) and the in \(\Omega^{(3)}\) is demonstrated in Appendix B. ## 4 Computation of \(F^{(2)}\) Diagrams contributing to \(\Omega^{(2)}\) are shown in Figures 2 and 3 (the left one). It is straightforward to check that the contributions \(\Omega^{(2)b}\) and \(\Omega^{(2)c}\) of the ones of Figure 2 cancel against the last two terms in the formula (30). Indeed, with the help of the summation rules collected in Appendix A and taking into account that these contributions are evaluated at \(x_{0}\) and \(y_{0}\) one easily obtains (\(\Omega^{(2)c}\) is given by an analogous formula) \[\Omega^{(2)b}=\frac{C_{0}^{2}V}{2}\left(\frac{N_{-}}{V}\right)^{2}\int\!\frac {d^{3}{\bf p}}{(2\pi)^{3}}\left[\frac{d}{da}\,\frac{1}{1+e^{\beta a}}\right]_ {a=\varepsilon_{\bf p}-x_{0}}=-\frac{1}{2}\,C_{0}^{2}V\beta\,(n_{x}-n_{xx})n_ {y}^{2}\,, \tag{32}\] where the second form of \(\Omega^{(2)b}\) is given in the notation introduced in Appendix B. With the help of the formulae (B.1), (B.2) it is immediately seen that it is canceled by the second term of (30). Thus \[\Omega^{(2)b}+\Omega^{(2)c}-\frac{[\Omega_{x}^{(1)}]^{2}}{2\,\Omega_{xx}^{(0)} }-\frac{[\Omega_{y}^{(1)}]^{2}}{2\,\Omega_{yy}^{(0)}}=0\,.\] Hence, \(F^{(2)}=\Omega^{(2)a}\) evaluated at \(x_{0}\) and \(y_{0}\) (or at \(\tilde{x}_{0}\) and \(\tilde{y}_{0}\)). The integrals and sums corresponding to the left diagram of Figure 3 giving \(\Omega^{(2)a}\) can be written in three different forms (corresponding to three different routings of the internal momenta and frequencies) of which two can be composed of two "elementary" blocks \(A\) and \(B\) shown in Figure 3 right: \[\Omega^{(2)a}=-\frac{1}{2}\,C_{0}^{2}V\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}} \!\int\!\frac{d^{3}{\bf q}}{(2\pi)^{3}}\,[A(\omega_{l}^{B},{\bf q})]^{2}=-\frac {1}{2}\,C_{0}^{2}V\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf q }}{(2\pi)^{3}}\,[B(\omega_{l}^{B},{\bf q})]^{2}.\] where (here \(n_{\pm}({\bf p})\equiv[1+\exp\{\beta(\varepsilon_{\bf p}-\mu_{\pm}^{(0)})\}]^ {-1}\), \(\mu_{+}^{(0)}\equiv x_{0}\), \(\mu_{-}^{(0)}\equiv y_{0}\)) \[A(\omega_{l+1}^{B},{\bf q}) =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf k}}{ (2\pi)^{3}}\,\frac{1}{i\omega_{n}^{F}-(\varepsilon_{\bf k}-x_{0})}\,\frac{1}{ i\omega_{l-n}^{F}-(\varepsilon_{\bf q-k}-y_{0})}\] \[=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,\frac{n_{+}({\bf k})+n_{ -}({\bf q}-{\bf k})-1}{i\omega_{l+1}^{B}-(\varepsilon_{\bf k}-x_{0}+ \varepsilon_{\bf q-k}-y_{0})}\,. 
\tag{33}\] and \[B(\omega_{l}^{B},{\bf q}) =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\!\int\!\frac{d^{3}{\bf k}}{ (2\pi)^{3}}\,\frac{1}{i\omega_{n}^{F}-(\varepsilon_{\bf k+q}-x_{0})}\,\frac{1 }{i\omega_{n+l}^{F}-(\varepsilon_{\bf k}-y_{0})}\] \[=\int\!\frac{d^{3}{\bf k}}{(2\pi)^{3}}\,\frac{n_{+}({\bf k}+{\bf q })-n_{-}({\bf k})}{i\omega_{l}^{B}-(\varepsilon_{\bf k}-y_{0}-\varepsilon_{\bf k +q}+x_{0})}\,. \tag{34}\] (The contributions \(\Omega^{(3)a}\) and \(\Omega^{(3)b}\) of the left and right diagrams shown in Figure 7 can be written analogously with \([A(\omega_{l}^{B},{\bf q})]^{3}\) and \([B(\omega_{l}^{B},{\bf q})]^{3}\), respectively [11, 12]). With the help of the sum rule (A.5) the sum over \(l\) of two \(A\)-blocks can be done and gives (the symbol \(\int_{\bf k}\) stands for the integral over the measure \(d^{3}{\bf k}/(2\pi)^{3}\)) \[\int_{\bf k}\!\int_{\bf p}\frac{[n_{+}({\bf k})+n_{-}({\bf q}-{ \bf k})-1][n_{+}({\bf p})+n_{-}({\bf q}-{\bf p})-1]}{\varepsilon_{\bf k}+ \varepsilon_{\bf q-k}-\varepsilon_{\bf p}-\varepsilon_{\bf q-p}}\] \[\times\left(\frac{1}{1-e^{\beta(\varepsilon_{\bf k}-x_{0})}e^{ \beta(\varepsilon_{\bf q-k}-y_{0})}}-\frac{1}{1-e^{\beta(\varepsilon_{\bf p }-x_{0})}e^{\beta(\varepsilon_{\bf q-p}-y_{0})}}\right).\] The identity \[n_{+}({\bf k})+n_{-}({\bf q}-{\bf k})-1=n_{+}({\bf k})\,n_{-}({\bf q}-{\bf k}) \left[1-e^{\beta(\varepsilon_{\bf k}-x_{0})}e^{\beta(\varepsilon_{\bf q-k}-y _{0})}\right]. \tag{35}\] Figure 3: The order \(C_{0}^{2}\) diagram contributing to the correction \(\Omega^{(2)}\) and two “elementary” one-loop diagrams out of which the second order and the third order corrections with the \(C_{0}\) couplings can be constructed. Solid and dashed lines denote propagators of fermions with the spin projections \(+\) and \(-\), respectively. and the fact that the two terms in the bracket above gives equal contributions allows then to write \[\Omega^{(2)a}=C_{0}^{2}V{\int_{\bf q}}{\int_{\bf p}}{\int_{\bf k}}\frac{n_{+}({\bf k })\,n_{-}({\bf q}-{\bf k})[1-n_{+}({\bf p})-n_{-}({\bf q}-{\bf p})]}{\varepsilon _{\bf k}+\varepsilon_{{\bf q}-{\bf k}}-\varepsilon_{\bf p}-\varepsilon_{{\bf q }-{\bf p}}}\,.\] (36) It is interesting to notice that because the integral of the quartic product \(n_{+}({\bf k})\,n_{-}({\bf q}-{\bf k})n_{+}({\bf p})\,n_{-}({\bf q}-{\bf p})\) vanishes (the numerator is even with respect to the interchange \({\bf k}\leftrightarrow{\bf p}\) while the denominator is odd), the expression for \(\Omega^{(2)a}\) can be written (after the change \({\bf p}=-{\bf u}+{\bf s}\), \({\bf k}=-{\bf t}+{\bf s}\), \({\bf q}=2{\bf s}\) of the integration variables) in the form completely analogous to the expression giving \(E_{\Omega}/V\) (see [8]), the only modification being the change in the prefactor and the replacement of \(\theta(k-|{\bf v}|)\) and \(\theta(|{\bf v}|-k)\) by \(n(|{\bf v}|)\) and \(1-n(|{\bf v}|)\), respectively. (Curiously enough, we have found that this simple analogy does not work for the diagrams of Figure 7). It is straightforward to see that the expression (36) is divergent, the divergence arising from the unity in the square bracket in the numerator. 
In the variables \({\bf s}\), \({\bf t}\) and \({\bf u}\) the integral over \({\bf u}\) is the one evaluated with the cutoff \(\Lambda\) in [8] and using this result and changing once more the variables to \({\bf k}={\bf t}-{\bf s}\), \({\bf p}={\bf t}+{\bf s}\), after adding the contribution \(\Omega^{(1)}\) and expressing \(C_{0}\) in terms of the scattering lengths \(a_{0}\) \[C_{0}(\Lambda)=\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\left(1+\frac{2}{\pi}\,a_{0} \Lambda+\ldots\right), \tag{37}\] [16, 11, 12], one arrives at the finite (to the second order) result \[\Omega^{(1)}+\Omega^{(2)a} =\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\left(1+\frac{2}{\pi}\,\Lambda a _{0}+\ldots\right)V\,\frac{N_{-}}{V}\,\frac{N_{+}}{V}\] \[-\frac{\Lambda}{2\pi^{2}}\left(\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0} \right)^{2}\frac{m_{f}}{\hbar^{2}}\,V\,\frac{N_{-}}{V}\,\frac{N_{+}}{V}+ \Omega^{(2)a}_{\rm finite}=\frac{4\pi\hbar^{2}}{m_{f}}\,a_{0}\,V\,\frac{N_{-} }{V}\,\frac{N_{+}}{V}+\Omega^{(2)a}_{\rm finite}\,.\] The finite part of \(\Omega^{(2)a}\), \[\Omega^{(2)a}_{\rm fin}=-C_{0}^{2}V{\int_{\bf q}}\int_{\bf k}n_{+}({\bf k})\,n_ {-}({\bf q}-{\bf k})\int_{\bf p}\frac{n_{+}({\bf p})+n_{-}({\bf q}-{\bf p})}{ \varepsilon_{\bf k}+\varepsilon_{{\bf q}-{\bf k}}-\varepsilon_{\bf p}- \varepsilon_{{\bf q}-{\bf p}}}\,,\] upon setting first \({\bf k}={\bf k}_{1}\), \({\bf q}-{\bf k}={\bf k}_{2}\) and then replacing in the term with \(n_{-}({\bf k}_{1}+{\bf k}_{2}-{\bf p})\) the variable \({\bf k}_{1}+{\bf k}_{2}-{\bf p}\) by \({\bf p}^{\prime}\) (upon which \(\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}}\to\varepsilon_{{\bf p}^{\prime}}\) but at the same time \(\varepsilon_{\bf p}\to\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}^{\prime}}\)) can be cast in the convenient symmetric form \[\Omega^{(2)a}_{\rm fin}=F^{(2)}=-C_{0}^{2}V{\int_{\bf k}}\,{\int_{\bf k}}_{2} \,n_{+}({\bf k}_{1})\,n_{-}({\bf k}_{2})\int_{\bf p}\frac{n_{+}({\bf p})+n_{-} ({\bf p})}{\varepsilon_{{\bf k}_{1}}+\varepsilon_{{\bf k}_{2}}-\varepsilon_{ \bf p}-\varepsilon_{{\bf k}_{1}+{\bf k}_{2}-{\bf p}}}\,.\] (38) The expression (38) is very similar4 to the formula (5) used in [4] as the second order contribution to the system's internal energy density \(u\), except that the latter has an extra factor of 2. The foundation of the formula for \(f=u-Ts\) (which, apart from this factor of 2, is equivalent to our one) used in [4] is, however, somewhat unclear: to obtain their second order correction to the energy density \(u\) these authors took the expression (15) given in Section 11.4 of [2] which is obtained by simply using the finite temperature distributions in place of the zero temperature ones in the ordinary second order correction to the ground state energy of the system and have taken the entropy density \(s\) as given by the zeroth order textbook formula. In contrast our expression (38) results from a systematic, well founded expansion and the coefficient in (38) is unambiguously fixed by the cancellation of the divergence. 
After integrating over the cosine of the angle between \({\bf p}\) and \({\bf k}_{1}+{\bf k}_{2}\) one can write the resulting expression in the form \[F^{(2)}=-V\,\frac{C_{0}^{2}m_{f}}{(2\pi)^{2}\hbar^{2}}\!\int_{{ \bf k}_{1}}\int_{{\bf k}_{2}}\frac{n_{+}({\bf k}_{1})\,n_{-}({\bf k}_{2})}{|{ \bf k}_{1}+{\bf k}_{2}|}\int_{0}^{\infty}\!dp\,p\,[n_{+}({\bf p})+n_{-}({\bf p})]\] \[\times\ln\!\left|\frac{(p-\Delta_{+})(p-\Delta_{-})}{(p+\Delta_{+ })(p+\Delta_{-})}\right|, \tag{39}\] in which \[\Delta_{\pm}\equiv\frac{1}{2}\,|{\bf k}_{1}+{\bf k}_{2}|\pm\frac{1}{2}\,|{\bf k }_{1}-{\bf k}_{2}|\,.\] It is clear that the singularity at \(|{\bf k}_{1}+{\bf k}_{2}|=0\) in (39) is spurious: if \({\bf k}_{1}+{\bf k}_{2}={\bf 0}\) then \(\Delta_{-}=-\Delta_{+}\) and the innermost integral vanishes. ## 5 Numerical evaluation The most difficult part of the computation is the accurate and efficient numerical evaluation of the multiple integrals in the expression (39). Rescaling the momentum integration variables \({\bf k}_{1}=k_{\rm F}{\bf v}_{1}\), etc. and inserting \(C_{0}=(4\pi\hbar/m_{f})a_{0}\) one can write the second order contribution to the right hand side of (25) as \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{F^{(2)}}{\varepsilon_{\rm F }V}=-(k_{\rm F}a_{0})^{2}\,\frac{6}{\pi^{2}}\int_{0}^{\infty}\!dv_{1}\,v_{1}^{ 2}\,n(v_{1},\nu_{+},t)\!\int_{0}^{\infty}\!dv_{2}\,v_{2}^{2}\,n(v_{2},\nu_{-},t)\] \[\times\sum_{\sigma=\pm}\sum_{\sigma^{\prime}=\pm}\!\int_{-1}^{1}\! d\xi\,\frac{I(\Delta_{\sigma},\nu_{\sigma^{\prime}},t)}{\sqrt{v_{1}^{2}+v_{2}^{ 2}+2\xi v_{1}v_{2}}}\,, \tag{40}\] where \(\nu_{\pm}=\mu_{\pm}^{(0)}/k_{\rm B}T\equiv\delta_{\pm}/t\) (\(\delta_{\pm}=\mu_{\pm}^{(0)}/\varepsilon_{\rm F}\)), \[n(v,\nu,t)=\left[1+\exp\!\left(\frac{v^{2}}{t}-\nu\right)\right]^{-1},\] \[I(\Delta,\nu,t)=\int_{0}^{\infty}\!du\,u\,n(u,\nu,t)\,\ln\!\left|\frac{u- \Delta}{u+\Delta}\right|.\] and \[\Delta_{\pm}(v_{1},v_{2},\xi)=\frac{1}{2}\sqrt{v_{1}^{2}+v_{2}^{2}+2\xi v_{1} v_{2}}\pm\frac{1}{2}\sqrt{v_{1}^{2}+v_{2}^{2}-2\xi v_{1}v_{2}}\,.\] The trick allowing to realize the numerical computation is to make first, for fixed values of \(t\) (temperature) and \(P\) (the system's polarization) which together, through (20) determine \(\nu_{+}\) and \(\nu_{-}\), an interpolation of the functions \(I(|\Delta|,\nu_{+},t)\) and \(I(|\Delta|,\nu_{-},t)\) (because, obviously, \(I(-|\Delta|,\nu_{\pm},t)=-I(|\Delta|,\nu_{\pm},t)\)) in the variable \(w=1/(1+|\Delta|)\) (to interpolate on the compact interval \([0,1]\)) and then performing numerically the integrations over \(v_{1}\), \(v_{2}\) and \(\xi\) using these interpolations. In the actual code written in the _Python_ programming language the functions \(I(|\Delta|,\nu_{\pm},t)\) are evaluated with the help of the adaptive integration routine (scipy.integrate.quad; the integration domain is split ted into three subdomains to accurately handle the logarithmic singularity - in the relevant regions near \(w=1/(1+\Delta)\equiv w_{0}\) we substitute \(r^{3}=|w-w_{0}|\) so that the integrand behaves like \(r^{2}\ln(r)\) and can be treated using the quadrature methods - and its sharp falloff, especially for small temperatures \(t\), near \(u^{2}=t\nu\) of the distribution \(n(u,\nu,t))\) and then interpolated using the cubic spline interpolation routines of _Python_. 
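As an illustration of this two-step strategy, a minimal Python sketch (ours, not the authors' actual code; the cutoff `u_max`, the grid size and all function names are our own choices) that evaluates \(I(\Delta,\nu,t)\) with `scipy.integrate.quad`, splitting the domain at the logarithmic singularity \(u=\Delta\) and near the Fermi edge \(u^{2}\approx t\nu\), and then interpolates it in \(w=1/(1+\Delta)\) with a cubic spline, could read:

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import CubicSpline

def n_dist(u, nu, t):
    """Occupation factor n(u, nu, t) = 1 / (1 + exp(u^2/t - nu))."""
    return 1.0 / (1.0 + np.exp(u * u / t - nu))

def I_integral(Delta, nu, t):
    """I(Delta, nu, t) = int_0^inf du u n(u, nu, t) ln|(u - Delta)/(u + Delta)|."""
    if Delta == 0.0:
        return 0.0
    f = lambda u: u * n_dist(u, nu, t) * np.log(abs((u - Delta) / (u + Delta)))
    u_max = np.sqrt(t * (max(nu, 0.0) + 40.0)) + 1.0   # distribution is negligible beyond this
    brk = [p for p in (Delta, np.sqrt(max(t * nu, 0.0))) if 0.0 < p < u_max]
    val, _ = quad(f, 0.0, u_max, points=brk or None, limit=400)
    return val

def I_spline(nu, t, n_grid=200):
    """Cubic-spline interpolation of I(|Delta|, nu, t) in the compact variable w = 1/(1+|Delta|)."""
    w = np.linspace(1.0e-3, 1.0, n_grid)
    I_vals = np.array([I_integral(1.0 / wi - 1.0, nu, t) for wi in w])
    return CubicSpline(w, I_vals)

# usage in the triple integral: I(|Delta|) is approximated by spl(1.0 / (1.0 + abs(Delta)))
# spl = I_spline(nu_plus, t)
```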
The remaining triple integral over \(v_{1}\), \(v_{2}\) and \(\xi\) is again performed with the help of the Clenshaw-Curtis quadrature in the variables \(w_{1,2}=1/(1+v_{1,2})\) (again to have a compact integration domain and again splitting it into subdomains to better handle the regions \(v_{1}^{2}\approx t\nu_{+}\) and \(v_{2}^{2}\approx t\nu_{-}\)); the spurious singularity at \(|{\bf v}_{1}+{\bf v}_{2}|=0\) is taken care of by simply taking somewhat different numbers of points for the \(v_{1}\) and \(v_{2}\) grids. To check the correctness of the code we have first compared its results for \(t\to 0\) (replacing the distributions \(n(u,\nu,t)\) by the Heaviside theta functions) with the second order correction \(E_{\Omega}^{(2)}\) to the system's ground-state energy, which as a function of \(P\) is known analytically [10, 9] (the function \(J_{K}(x,y)\) is given e.g. by the formula (4) in [12]): \[\frac{6\pi^{2}}{k_{\rm F}^{3}}\,\frac{E_{\Omega}^{(2)}}{\varepsilon_{\rm F}V}=(k_{\rm F}a_{0})^{2}\,\frac{6}{5\pi^{2}}\,J_{K}((1+P)^{1/3},\,(1-P)^{1/3})\,.\] At \(P=0\) (equal densities of spin up and spin down fermions) \(J_{K}=4(11-\ln 4)/21\) and the right hand side of the above formula (setting in all these comparisons \(k_{\rm F}a_{0}=1\)) equals \(0.22264482\), while the _Python_ code for the right hand side of (40) gives the value \(0.22264522\). For \(P=0.5\) the code gives \(0.17184256\), to be compared with \(0.17184207\), while at \(P=0.9\) the numbers to be compared are \(0.046470057\) and \(0.046470077\) (at \(P=1\) both are zero, reflecting the impossibility of \(s\)-wave interactions of two fermions in the same spin state). For nonzero temperatures the results obtained using the Clenshaw-Curtis quadrature have been compared with the ones obtained using the more accurate (but more time consuming) adaptive integration routine. The comparison shows that the relative uncertainty \(\Delta_{F}\) (the difference of the results of the two methods divided by their mean) is typically of order \(10^{-5}\), varying rather irregularly with \(P\) and increasing somewhat with \(t\); in our further estimates we set \(\Delta_{F}=10^{-5}\) for \(t\lesssim 0.1\), \(\Delta_{F}=1.5\times 10^{-5}\) for \(0.1<t\leq 0.2\) and \(\Delta_{F}=2\times 10^{-5}\) for \(0.2<t\). While this accuracy superficially looks quite satisfactory, it is, nevertheless, barely sufficient: for values of the parameters (\(t\) and/or \(k_{\rm F}a_{0}\)) at which spontaneous ordering appears there is a very delicate cancellation between different contributions to \(F\), and the (relative) error of order \(10^{-5}\) in \(F^{(2)}\) can, and in some cases indeed does, lead to the appearance of a very shallow fake minimum near \(P=0\).

## 6 Results

For a fixed value of the temperature, the system's free energy \(F\) as a function of the polarization (and of the parameter \(k_{\rm F}a_{0}\), in which, in the approximation to which our analysis is restricted, it is a polynomial of the second order) can be efficiently obtained by evaluating numerically the integrals in (40) for several values of \(P\) and constructing the cubic spline interpolation. The resulting free energy differences, \(F(P)-F(0)\), are plotted in Figure 4 as functions of the polarization \(P\) for two temperatures, \(t=0.1\) and \(0.15\), and several values of \(k_{\rm F}a_{0}\) (obtained by constructing the interpolation based on 11 points in \(P\) only).
In view of the mentioned uncertainty in the computation of \(F^{(2)}\) the critical value of \(k_{\rm F}a_{0}\) and the value of the polarization \(P\) at the transition must be determined by requiring that the value of \(F\) at a minimum developing away from \(P=0\) differs from the one at \(P=0\) at least by \(\Delta_{F}F^{(2)}(0)\). In this way one can properly handle the mentioned fake minima close to \(P=0\) one of which can be observed in the right panel of Fig. 4 (for \(t=0\) that such a minimum is indeed produced by the inaccuracies of the numerical code can be substantiated by comparing with the analytically known dependence of the ground-state energy on \(P\)). The actual procedure which has been adopted to determine the polarization and its uncertainty is as follows. For a fixed value of the parameter \(k_{\rm F}a_{0}\) (which is successively increased from 0 in steps \(\Delta(k_{\rm F}a_{0})=0.001\)) the values of \(F\) on a preliminary grid of \(P\)-values \(P_{n}=n\Delta P\) with \(n=0,\ldots,n_{\rm max}=32\) are obtained. If the minimal value of \(F\) occurs for \(n_{\rm min}=n_{\rm max}\) the polarization is taken as the maximal (\(P=1\)); if \(n_{\rm min}=0\) or \(|F(P_{n_{\rm min}})-F(0)|\leq\Delta_{F}F^{(2)}(0)\) the polarization is taken as vanishing (\(P=0\)). If \(n_{\rm min}\neq 0,n_{\rm max}\) and \(|F(P_{n_{\rm min}})-F(0)|>\Delta_{F}F^{(2)}(0)\) Figure 4: Plots of the differences \(F(P)-F(0)\) in units of \((k_{\rm F}^{3}/6\pi^{2})(\hbar^{2}k_{\rm F}^{2}/2m_{f})\) of the system of spin \(1/2\) fermions as a function of the order parameter \(P\) for two representative values of the temperature \(t\equiv T/T_{\rm F}\) as obtained in the second order of the perturbative expansion. Left: \(t=0.1\); the successive curves (from below) correspond to \(k_{\rm F}a_{0}=1.0718\) (the lowest, blue, line), \(k_{\rm F}a_{0}=1.0723\) (yellow), \(k_{\rm F}a_{0}=1.0728\) (green), \(1.0733\) (red) and \(1.0738\) (the highest, blue, line). Right: \(t=0.15\); the successive curves (from below) correspond to \(k_{\rm F}a_{0}=1.0978\) (the lowest, blue, line), \(k_{\rm F}a_{0}=1.983\) (yellow), \(k_{\rm F}a_{0}=1.0988\) (green), \(1.0993\) (red) and \(1.0998\) (the highest, blue, line). the polarization is taken as truly nonvanishing. If it is nonvanishing for the first time (as far as the increasing values of \(k_{\rm F}a_{0}\) are concerned), one determines \(n_{\rm down}\) and \(n_{\rm up}\) such that \(|F(P_{n_{\rm min}})-F(P_{n_{\rm down/up}})|<2\Delta_{F}F^{(2)}(0)\) (of course, \(n_{\rm down}=0\) and/or \(n_{\rm up}=n_{\rm max}\) if these criteria cannot not be fulfilled for intermediate values of \(n\)) and finds the values of \(F\) on a finer grid of \(P\)-values with \(|P_{j+1}-P_{j}|=0.0001\) and \(P_{n_{\rm down}}\leq P_{j}\leq P_{n_{\rm up}}\). If the minimum of \(F\) found on the finer grid occurs for \(P_{j_{\rm min}}<0.02\), it is assumed that it is a numerical artifact and the polarization is taken as vanishing. In the opposite case the polarization is taken to be nonvanishing and that value of \(k_{\rm F}a_{0}\) is recorded as the critical one (for the considered temperature). 
In this case, on the finer grid one seeks a range (\(P_{j_{\rm down}}\), \(P_{j_{\rm up}}\)) of \(P\) around \(P_{j_{\rm min}}\) in which \(|F(P_{j_{\rm min}})-F(P_{j})|>\Delta_{F}F^{(2)}(0)\) for \(P_{j_{\rm down}}\leq P_{j}\leq P_{j_{\rm up}}\); if such a range cannot be found the transition is classified as continuous (the polarization at the considered temperature is assumed to increase continuously from zero as \(k_{\rm F}a_{0}\) is increased), while if a nontrivial range is obtained, the transition is classified as first order and \(P_{j_{\rm min}}-P_{j_{\rm down}}\) and \(P_{j_{\rm up}}-P_{j_{\rm min}}\) are taken as the uncertainties of the determination of the polarization right at the transition. For values of \(k_{\rm F}a_{0}\) higher than the critical one (determined as described above for the considered temperature) \(F\) is evaluated on a finer grid of points \(P_{j}\) with \(P_{n_{\rm min}-1}\leq P_{j}\leq P_{n_{\rm min}+1}\) and \(P_{j+1}-P_{j}=0.001\), and the corresponding polarization is determined as the position of the minimum of \(F\) on this finer grid. In this way one finds that \((k_{\rm F}a_{0})_{\rm cr}=1.05409\) at \(t=0\) (which perfectly agrees with the known value obtained by computing the system's ground-state energy [11] and with [4]), \((k_{\rm F}a_{0})_{\rm cr}=1.05858\) at \(t=0.05\), \((k_{\rm F}a_{0})_{\rm cr}=1.07282\) at \(t=0.1\), \((k_{\rm F}a_{0})_{\rm cr}=1.09881\) at \(t=0.15\) and \((k_{\rm F}a_{0})_{\rm cr}=1.13845\) at \(t=0.2\). The corresponding values of the polarization right at the transition point are \(P_{\rm cr}=0.575^{+0.017}_{-0.019}\) (again in agreement with the value found in [11]), \(0.558^{+0.017}_{-0.017}\), \(0.477^{+0.019}_{-0.021}\), \(0.325^{+0.035}_{-0.048}\) and \(0.197^{+0.045}_{-0.096}\). The dependence of the polarization as a function of the "gas parameter" \(k_{\rm F}a_{0}\) is shown, for a few values of the temperature \(t\), in the left panel of Figure 5. This is essentially the same plot as the one presented in [4] (the agreement with the critical values of the gas parameter at successive temperatures that can be read off from the plot there seems to be quite good), except that in Figure 5 the uncertainties in the determination (following from the procedure just described) of the polarization right at the transition are also marked. Owing to the efficiency of our numerical code (stemming basically from the trick with the interpolations) the procedure of finding the polarization of the system described above can also be applied at fixed values of \(k_{\rm F}a_{0}\) (replacing the grid in \(k_{\rm F}a_{0}\) by one in \(t\)). The resulting polarization of the system as a function of the temperature for several fixed values of the gas parameter is shown in the right panel of Figure 5. Knowing the polarization as a function of the other parameters, it is possible to construct the free energy \(F(T,V,N)\equiv F(T,V,N,P(T,N/V))\) for several values of \(k_{\rm F}a_{0}\) and to determine also other thermodynamic characteristics of the system. For example, using the grid in \(t\), the second derivative of the free energy \(F(T,V,N)\) with respect to the temperature can in principle be obtained, yielding the system's heat capacity. The result of such an exercise is shown in Figure 6 for two values of the "gas parameter". It shows that the discontinuity of the heat capacity at the transition point grows with the value of \(k_{\rm F}a_{0}\) (i.e. also with the increasing temperature, if \(k_{\rm F}a_{0}\) is varied).
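A schematic Python rendering of this coarse-to-fine minimization (a simplified sketch of the procedure described above, not the actual code: here `F` is a callable returning the free energy at polarization \(P\), `tol` plays the role of \(\Delta_{F}F^{(2)}(0)\), and the refinement is done simply between the coarse neighbours of the minimum) is:

```python
import numpy as np

def equilibrium_polarization(F, tol, n_max=32, fine_step=1.0e-4, p_cut=0.02):
    """Locate the polarization P in [0, 1] that minimizes the free energy F(P)."""
    # coarse grid P_n = n * dP, n = 0 ... n_max
    P_coarse = np.linspace(0.0, 1.0, n_max + 1)
    F_coarse = np.array([F(p) for p in P_coarse])
    n_min = int(np.argmin(F_coarse))

    if n_min == n_max:                                   # minimum at the edge of the grid
        return 1.0
    if n_min == 0 or abs(F_coarse[n_min] - F_coarse[0]) <= tol:
        return 0.0                                       # indistinguishable from P = 0

    # refine between the coarse neighbours of the minimum
    lo, hi = P_coarse[n_min - 1], P_coarse[n_min + 1]
    P_fine = np.arange(lo, hi + fine_step, fine_step)
    F_fine = np.array([F(p) for p in P_fine])
    P_star = float(P_fine[int(np.argmin(F_fine))])

    # very shallow minima too close to P = 0 are discarded as numerical artifacts
    return 0.0 if P_star < p_cut else P_star
```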
However, for higher values of \(k_{\rm F}a_{0}\) the numerical inaccuracies do not allow for a reliable computation. Indeed, as the transition at higher temperatures becomes continuous, a divergence of the heat capacity probably starts to build up, making the numerical computation of the second derivative of the free energy unstable for \(t\gtrsim 0.12\). Similarly, it is in principle possible to determine the system's polarization taking into account an infinitesimally weak external magnetic field (this, as explained, influences only the determination of the zeroth order chemical potentials \(\tilde{\mu}_{\pm}^{(0)}\) from the conditions (22)) and to compute the system's magnetic susceptibility \(\chi_{T}\) by constructing the derivative of the polarization with respect to \({\cal H}\). While such a computation seems to indicate that at least at low temperatures, at which the transition (in the approximation to which our computation is restricted) is first order, the susceptibility also has a finite discontinuity at the transition point, it is not sufficiently stable numerically to yield reliable values of \(\chi_{T}\), and it is probably more practical to obtain \(\chi_{T}\) by computing the (connected) two-point correlation function from the formula \(\chi_{T}=(\beta/V)\tilde{G}_{\rm con}^{(2)}({\bf 0})\). We do not attempt this here.

## 7 Conclusions

We have developed a systematic perturbative expansion of the grand thermodynamic potential \(\Omega\) and of the free energy \(F\) of the system of (nonrelativistic) interacting spin \(1/2\) fermions. We have applied this expansion within the effective field theory in which the underlying repulsive spin-independent binary interaction of fermions is replaced by an infinite number of contact interaction terms and which allows one to express the computed quantities directly in terms of the scattering lengths and effective radii which characterize the underlying interaction potential.

Figure 5: Polarization \(P=(N_{+}-N_{-})/N\) of the system of spin \(1/2\) fermions with a short range repulsive interaction obtained from the free energy \(F\) computed up to the second order of the perturbative expansion. In the left panel as a function of the "gas parameter" \(k_{\rm F}a_{0}\) for several values of the temperature (counting from the left): \(t\equiv T/T_{F}=0\), \(0.1\), \(0.15\) and \(0.2\) (\(T_{\rm F}\equiv\varepsilon_{\rm F}/k_{\rm B}\)). Marked are also uncertainties of the value of \(P\) right at the transition points. In the right panel as a function of the temperature for several fixed values of \(k_{\rm F}a_{0}\).

We have shown (up to the third order, but the result seems to be valid in general) that to the expansion of the free energy effectively contribute only those Feynman diagrams which give nonvanishing contributions to the ground-state energy of the system evaluated at zeroth order chemical potentials (associated with spin up and spin down fermions). Our numerical analysis has been restricted here to the first nontrivial order of the perturbative expansion (i.e. the first one going beyond the textbook mean field approximation) in which the results are still universal, i.e. they depend on the form of the underlying interaction only through the \(s\)-wave scattering length \(a_{0}\) (in the next order the results start to depend also on the \(p\)-wave scattering length \(a_{1}\) and the effective radius \(r_{0}\)).
We have devised a method for efficient numerical evaluation of the requisite nested integrals and used it to compute the system's polarization and its value right at the transition point, paying attention to the uncertainty of the determination of the latter quantity, which is crucial in assessing the character of the transition. For low temperatures, \(T\lesssim 0.1\,T_{\rm F}\), we have also managed to determine the system's heat capacity, encountering, however, some problems with the accuracy of numerical evaluation of the derivatives of the free energy which seem to prevent obtaining (at least without substantial improvements in the method) reliable values of the heat capacity for higher temperatures, as well as determining the system's magnetic susceptibility. Of course, since the perturbative computation of the system's ground-state energy agrees with the results obtained (for specific forms of the underlying interaction) using the Quantum Monte Carlo approach only for \(k_{\rm F}a_{0}\lesssim 0.5\), the results presented here cannot be taken very seriously. Moreover, it is now known that already the inclusion of the third order corrections to the system's ground-state energy (free energy at zero temperature) significantly weakens the first order character of the transition (at zero temperature) to the ordered state. For these reasons our effort summarized here should be treated rather as a preliminary step taken towards extending the computation to a higher order and towards a possible implementation of a resummation of some class of the contributions to the free energy in the spirit of the approach of [13].

Figure 6: Heat capacity (in units of \(Nk_{\rm B}\)) of the system of spin 1/2 fermions with a short range repulsive interaction as a function of the temperature for two different fixed values of the parameter \(k_{\rm F}a_{0}\) obtained from the free energy \(F\) computed up to the second order.

Such a resummation can probably also allow one to overcome the limitation, inherent in the effective field theory approach, to sufficiently small temperatures only: as this approach relies on the clean separation of the scales (\(R\ll k_{\mathrm{F}}^{-1}\), where \(R\) is the characteristic length of the underlying interaction), it cannot be applied, at least if restricted to a finite order of the perturbative expansion, when \(k_{\mathrm{B}}T\) becomes comparable with the energy scale set by \(\varepsilon_{\mathrm{F}}\). We plan to return to these issues in a forthcoming paper.
## Appendix A The following summation formulae hold [7] (the limit \(\eta\to 0^{+}\) is implicit): \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{e^{i\eta\omega_{n}^{F} }}{i\omega_{n}^{F}-x} =\frac{1}{1+e^{\beta x}}\,, \omega_{n}^{F} \equiv\frac{\pi}{\beta}\left(2n+1\right),\] (A.1) \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{e^{i\eta\omega_{n}^{B} }}{i\omega_{n}^{B}-x} =\frac{1}{1-e^{\beta x}}\,, \omega_{n}^{B} \equiv\frac{2\pi}{\beta}\,n\,,\] (A.2) and, by decomposing into simple fractions, \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x} \,\frac{1}{i\omega_{n+l}^{F}-y} =\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x} \,\frac{1}{i\omega_{n}^{F}-(y-i\omega_{l}^{B})}\] \[=\frac{1}{i\omega_{l}^{B}-(y-x)}\left(\frac{1}{1+e^{\beta x}}- \frac{1}{1+e^{\beta y}}\right),\] (A.3) \[\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x} \,\frac{1}{i\omega_{l-n}^{F}-y} =-\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega_{n}^{F}-x} \,\frac{1}{i\omega_{n}^{F}-(i\omega_{l+1}^{B}-y)}\] \[=\frac{1}{i\omega_{l+1}^{B}-(y+x)}\left(\frac{1}{1+e^{\beta x}}- \frac{1}{1+e^{-\beta y}}\right).\] (A.4) Similarly, \[\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\frac{1}{i\omega_{l}^{B}-x} \,\frac{1}{i\omega_{l}^{B}-y} =\frac{1}{x-y}\left(\frac{1}{1-e^{\beta x}}-\frac{1}{1-e^{\beta y}} \right).\] (A.5) Useful can be also the formula \[\frac{1}{x-a_{1}}\,\frac{1}{x-a_{2}}\,\dots\,\frac{1}{x-a_{n}} =\sum_{l=1}^{n}\left(\prod_{k\neq l}^{n}\frac{1}{a_{l}-a_{k}}\right) \frac{1}{x-a_{l}}\,.\] (A.6) ## Appendix B Here we demonstrate the cancellation of the additional terms in the formula (31) for \(F^{(3)}\) against the contributions of diagrams which vanish at zero temperature. Analogous cancellation of the contribution of the diagrams shown in Figure 2 against the additional terms in (30) has been checked in the main text. It will be convenient to introduce first the following notation: \[n_{x} \equiv\int_{\bf k}\frac{1}{1+e^{\beta(\varepsilon_{\bf k}-x_{0})}}\,,\] \[n_{xx} \equiv\int_{\bf k}\frac{1}{[1+e^{\beta(\varepsilon_{\bf k}-x_{0}) ]^{2}}}\,,\] \[n_{xxx} \equiv\int_{\bf k}\frac{1}{[1+e^{\beta(\varepsilon_{\bf k}-x_{0}) ]^{3}}}\,,\] etc. From (11) one immediately obtains (all functions are evaluated at \(x_{0}\) and \(y_{0}\)) \[\Omega^{(0)}_{x} =-V\,n_{x}\,,\] \[\Omega^{(0)}_{xx} =-V\beta\,(n_{x}-n_{xx})\,,\] (B.1) \[\Omega^{(0)}_{xxx} =-V\beta^{2}\,(n_{x}-3n_{xx}+2n_{xxx})\,.\] Analogously can be written the derivatives of \(\Omega^{(0)}\) with respect to \(y\). The necessary derivatives of \(\Omega^{(1)}=C_{0}V\,n_{x}\,n_{y}\) take the form \[\Omega^{(1)}_{x} =C_{0}V\beta\,(n_{x}-n_{xx})\,n_{y}\,,\] \[\Omega^{(1)}_{y} =C_{0}V\beta\,n_{x}\,(n_{y}-n_{yy})\,,\] \[\Omega^{(1)}_{xx} =C_{0}V\beta^{2}\,(n_{x}-3n_{xx}+2n_{xxx})\,n_{y}\,,\] (B.2) \[\Omega^{(1)}_{yy} =C_{0}V\beta^{2}\,n_{x}\,(n_{y}-3n_{yy}+2n_{yyy})\,,\] \[\Omega^{(1)}_{xy} =C_{0}V\beta^{2}\,(n_{x}-n_{xx})(n_{y}-n_{yy})\,.\] To \(\Omega^{(3)}\), in addition to the two "mercedes-type" diagrams shown in Figure 7 (the contributions \(\Omega^{(3)a}\) and \(\Omega^{(3)b}\)), contribute also the "mitsubishi-type" diagrams of Figure 8 (the contributions \(\Omega^{(3)c}\) and \(\Omega^{(3)d}\)), the two diagrams of Figure 9 (the contributions \(\Omega^{(3)e}\) and \(\Omega^{(3)f}\)) and the single "audi-type" diagram of Figure 9 (\(\Omega^{(3)g}\)). 
The computation of \(\Omega^{(3)c}\) and \(\Omega^{(3)d}\) is straightforward (it is analogous to that of \(\Omega^{(2)b}\) and \(\Omega^{(2)c}\)) and yields \[\Omega^{(3)c}+\Omega^{(3)d}=\frac{1}{6}\,C_{0}^{3}V\beta^{2}\left[(n_{x}-3n_{xx}+2n_{xxx})\,n_{y}^{3}+n_{x}^{3}\,(n_{y}-3n_{yy}+2n_{yyy})\right].\] One then readily sees that in (31) this is cancelled by the last two terms: \[\Omega^{(3)c}+\Omega^{(3)d}-\frac{\Omega^{(0)}_{xxx}[\Omega^{(1)}_{x}]^{3}}{6\,[\Omega^{(0)}_{xx}]^{3}}-\frac{\Omega^{(0)}_{yyy}[\Omega^{(1)}_{y}]^{3}}{6\,[\Omega^{(0)}_{yy}]^{3}}=0\,.\]

Figure 7: The particle-particle and the particle-hole diagrams (the "mercedes-like" diagrams) contributing in the order \(C_{0}^{3}\) to \(\Omega^{(3)}\).

One has now to consider the terms \(-\Omega^{(2)}_{x}\Omega^{(1)}_{x}/\Omega^{(0)}_{xx}\) and \(-\Omega^{(2)}_{y}\Omega^{(1)}_{y}/\Omega^{(0)}_{yy}\) in (31). \(\Omega^{(2)}\) is given by the three diagrams shown in Figure 3 (left) and in Figure 2. It is convenient to write the contribution of the first one, \(\Omega^{(2)a}\), in the form \[\Omega^{(2)a}=-\frac{C_{0}^{2}V}{2}\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int_{\mathbf{q}}\!\int_{\mathbf{k}}\!\int_{\mathbf{p}}\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\frac{1}{i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)}\,\frac{1}{i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)}\] \[\times\frac{1}{\beta}\sum_{m\in\mathbb{Z}}\frac{1}{i\omega^{F}_{m}-(\varepsilon_{\mathbf{p}}-y)}\frac{1}{i\omega^{F}_{m-l}-(\varepsilon_{\mathbf{p}-\mathbf{q}}-y)}\,.\] (B.3) Differentiating it with respect to \(x\) one obtains the expression which is a sum of the two terms \[\Omega^{(2)a}_{x}=\frac{C_{0}^{2}V}{2}\,\frac{1}{\beta}\sum_{l\in\mathbb{Z}}\!\int_{\mathbf{q}}\!\int_{\mathbf{k}}\!\int_{\mathbf{p}}\frac{1}{\beta}\sum_{m\in\mathbb{Z}}\frac{1}{i\omega^{F}_{m}-(\varepsilon_{\mathbf{p}}-y)}\frac{1}{i\omega^{F}_{m-l}-(\varepsilon_{\mathbf{p}-\mathbf{q}}-y)}\] \[\times\frac{1}{\beta}\sum_{n\in\mathbb{Z}}\left\{\frac{1}{[i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)]^{2}}\,\frac{1}{i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)}\right.\] \[+\left.\frac{1}{i\omega^{F}_{n}-(\varepsilon_{\mathbf{k}}-x)}\,\frac{1}{[i\omega^{F}_{n+l}-(\varepsilon_{\mathbf{k}+\mathbf{q}}-x)]^{2}}\right\}.\] These two terms are equal: to see this, it suffices to set in the second term \(\mathbf{q}=-\mathbf{q}^{\prime}\), \(\mathbf{k}=\mathbf{k}^{\prime}+\mathbf{q}^{\prime}\), \(\mathbf{p}=\mathbf{p}^{\prime}-\mathbf{q}^{\prime}\) and \(l=-l^{\prime}\), \(n=n^{\prime}+l^{\prime}\), \(m=m^{\prime}-l^{\prime}\). Thus, multiplying this by \(-\Omega^{(1)}_{x}/\Omega^{(0)}_{xx}\), which simply equals \(C_{0}\,n_{y}\), one readily sees (taking into account that \(\mathcal{G}^{(0)}_{--}(0,\mathbf{0})=n_{y}\)) that this precisely cancels \(\Omega^{(3)e}\).
Thus, in \(F^{(3)}\) \[\Omega^{(3)e}+\Omega^{(3)f}-\frac{\Omega^{(1)}_{x}}{\Omega^{(0)}_{xx}}\,\Omega ^{(2)a}_{x}-\frac{\Omega^{(1)}_{y}}{\Omega^{(0)}_{yy}}\,\Omega^{(2)a}_{y}=0\,,\] After these cancellations one is left with \[F^{(3)}=\Omega^{(3)a}+\Omega^{(3)b} +\Omega^{(3)g}+C_{0}\,n_{y}\left(\Omega^{(2)b}_{x}+\Omega^{(2)c} _{x}\right)+C_{0}\,n_{x}\left(\Omega^{(2)b}_{y}+\Omega^{(2)c}_{y}\right)\] \[+\frac{\Omega^{(1)}_{xx}[\Omega^{(1)}_{x}]^{2}}{2\,[\Omega^{(0)}_ {xx}]^{2}}+\frac{\Omega^{(1)}_{yy}[\Omega^{(1)}_{y}]^{2}}{2\,[\Omega^{(0)}_{ yy}]^{2}}+\frac{\Omega^{(1)}_{xy}\Omega^{(1)}_{x}\Omega^{(1)}_{y}}{\Omega^{(0)}_{xx} \Omega^{(0)}_{yy}}\,.\] Explicitly the last line of \(F^{(3)}\) reads \[\frac{1}{2}\,C_{0}^{3}V\beta^{2}\left\{\left(n_{x}-3n_{xx}+2n_{xxx} \right)n_{y}^{3}\right.\] \[\left.+n_{x}^{3}\left(n_{y}-3n_{yy}+2n_{yyy}\right)+2\,n_{x}\left(n _{x}-n_{xx}\right)\left(n_{y}-n_{yy}\right)n_{y}\right\},\] while the contribution \(\Omega^{(3)g}\) of the "audi-type" diagram of Figure 8 can be written as \[\Omega^{(3)g}=C_{0}^{3}V\beta^{2}\,n_{x}\,(n_{x}-n_{xx})(n_{y}-n_{yy})\,n_{y}\,.\] Finally, \[\Omega_{x}^{(2)b}+\Omega_{x}^{(2)c}=-\frac{1}{2}\,C_{0}^{2}V\beta^{2}\left[ \left(n_{x}-3n_{xx}+2n_{xxx}\right)n_{y}^{2}+2\,n_{x}\left(n_{x}-n_{xx}\right) (n_{y}-n_{yy})\right],\] (the sum \(\Omega_{y}^{(2)b}+\Omega_{y}^{(2)c}\) is given by an analogous expression) and after a straightforward algebra all the extra terms cancel out, so that eventually, \(F^{(3)}=\Omega^{(3)a}+\Omega^{(3)b}\) that is, it given solely by the "mercedes-like" diagrams evaluated at the zeroth order chemical potentials \(x_{0}\), \(y_{0}\) (or \(\tilde{x}_{0}\), \(\tilde{y}_{0}\), if there is an external magnetic field). The diagrams canceled by the extra terms in the formulae (30) and (31) are precisely those (see e.g. [6]) which vanish at zero temperature, that is do not contribute to the expansion of the formula (13) for the ground state energy. One can also simplify the formula (29) for the second order correction \(x_{2}\) to the \(\mu_{+}\) chemical potential (and the analogous formula for \(y_{2}\)). After a straightforward algebra one obtains \[x_{2}=-\frac{\Omega_{x}^{(2)a}}{\Omega_{xx}^{(0)}}\,,\] all other terms neatly canceling. Of course, \(x_{1}=C_{0}\,n_{-}\), \(y_{1}=C_{0}\,n_{+}\) but it is perhaps more instructive to write5 Footnote 5: This form clearly shows, since the cancellation of the divergences in the sum \(\Omega^{(1)}+\Omega^{(2)a}\) has already been demonstrated, that the computed perturbatively chemical potentials \(x\) and \(y\) are to this order finite, after the cutoff dependence of the coupling \(C_{0}\) is taken into account. The argument obviously generalizes to all orders: if the free energy \(F\) is made finite by the renormalization of the couplings, so must be the chemical potentials. \[x=x_{0}-\frac{1}{\Omega_{xx}^{(0)}}\left(\Omega_{x}^{(1)}+\Omega_{x}^{(2)a}+ \ldots\right).\] determining the system's polarization. 
Indeed, only if this cancellation holds is the minimization of the free energy written in the form \[F=\Omega^{(0)}(x_{0},y_{0})+(x_{0}\,n_{x}+y_{0}\,n_{y})\,V+\Omega^{(1)}(x_{0},y_{ 0})+\Omega^{(2)a}(x_{0},y_{0})+\ldots,\] with respect to \(n_{x}\) (keeping \(n_{y}=n-n_{x}\)), which (taking into account that \(\Omega^{(0)}_{x}+n_{x}V=0\), \(\Omega^{(0)}_{y}+n_{y}V=0\)) amounts to \[(x_{0}-y_{0})\,V=-\left[\Omega^{(1)}_{x}+\Omega^{(2)a}_{x}+\ldots\right]\frac {\partial x_{0}}{\partial n_{x}}+\left[\Omega^{(1)}_{y}+\Omega^{(2)a}_{y}+ \ldots\right]\frac{\partial y_{0}}{\partial n_{y}}\,.\] equivalent to the condition \(\mu_{+}=\mu_{-}\) written in the form \(x_{0}+x_{1}+x_{2}+\ldots=y_{0}+y_{1}+y_{2}+\ldots\), that is \[x_{0}-y_{0}=\frac{\Omega^{(1)}_{x}+\Omega^{(2)a}_{x}+\ldots}{\Omega^{(0)}_{xx }}-\frac{\Omega^{(1)}_{y}+\Omega^{(2)a}_{y}+\ldots}{\Omega^{(0)}_{yy}}\,.\] The equivalence follows from noticing that since \(n_{x}=-\Omega^{(0)}_{x}/V\), \(n_{y}=-\Omega^{(0)}_{y}/V\), the derivatives of \(x_{0}\) and \(y_{0}\) are precisely equal to \[\frac{\partial x_{0}}{\partial n_{x}}=-\frac{V}{\Omega^{(0)}_{xx}}\,,\ \ \ \ \ \frac{\partial y_{0}}{\partial n_{y}}=-\frac{V}{\Omega^{(0)}_{yy}}\,.\] From this argument it immediately follows that \(x_{3}=-(\Omega^{(3)a}_{x}+\Omega^{(3)b}_{x})/\Omega^{(0)}_{xx}\) and \(y_{3}=-(\Omega^{(3)a}_{y}+\Omega^{(3)b}_{y})/\Omega^{(0)}_{yy}\). Restricted to the first order the left hand side of the equality \(x_{0}+x_{1}=y_{0}+y_{1}\) reads \[x_{0}+x_{1}+\ldots=k_{\rm B}T\,f^{-1}\!\left(\left(\frac{\varepsilon^{(0)}_{ \rm F}(n_{+})}{k_{\rm B}T}\right)^{3/2}\right)+C_{0}\,n_{-}+\ldots\] The right hand side is given by the analogous formula. If one sets here \(n_{\pm}=(N/2V)(1\pm P)\), this reproduces the condition (21).
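As an aside, the frequency sums used above are easy to verify numerically. A minimal check of formula (A.3) (a sketch with our own parameter choices; the sum is truncated symmetrically) is:

```python
import numpy as np

def fermi(x, beta):
    """Fermi factor 1 / (1 + exp(beta * x))."""
    return 1.0 / (1.0 + np.exp(beta * x))

def lhs_A3(x, y, l, beta, M=200000):
    """Truncated (1/beta) sum_n 1/(i w_n^F - x) / (i w_{n+l}^F - y)."""
    n = np.arange(-M, M)
    w_n = np.pi * (2 * n + 1) / beta          # fermionic Matsubara frequencies
    w_nl = np.pi * (2 * (n + l) + 1) / beta
    return np.sum(1.0 / ((1j * w_n - x) * (1j * w_nl - y))) / beta

beta, x, y, l = 2.0, 0.3, -0.7, 1
w_l = 2.0 * np.pi * l / beta                  # bosonic frequency
rhs = (fermi(x, beta) - fermi(y, beta)) / (1j * w_l - (y - x))
print(abs(lhs_A3(x, y, l, beta) - rhs))       # ~1e-6: the truncated sum reproduces (A.3)
```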
Using a thermal (imaginary-time) perturbative expansion within the (relevant) effective field theory, we compute the low-temperature properties of the ordered state of a gas of (nonrelativistic) spin-1/2 fermions interacting through a short-range, spin-independent, repulsive binary interaction potential. We construct a systematic expansion of the system's free energy up to second order in the densities $n_+$ and $n_-$ and, by numerical minimization, determine the equilibrium proportions of spin-up and spin-down fermions (i.e., the system's polarization) as functions of the temperature, the total density $n = n_+ + n_-$ and the interaction strength.
2309.04591
An adaptive Bayesian approach to gradient-free global optimization
Many problems in science and technology require finding global minima or maxima of various objective functions. The functions are typically high-dimensional; each function evaluation may entail a significant computational cost. The importance of global optimization has inspired development of numerous heuristic algorithms based on analogies with physical, chemical or biological systems. Here we present a novel algorithm, SmartRunner, which employs a Bayesian probabilistic model informed by the history of accepted and rejected moves to make a decision about the next random trial. Thus, SmartRunner intelligently adapts its search strategy to a given objective function and moveset, with the goal of maximizing fitness gain (or energy loss) per function evaluation. Our approach can be viewed as adding a simple adaptive penalty to the original objective function, with SmartRunner performing hill ascent or descent on the modified landscape. This penalty can be added to many other global optimization algorithms. We explored SmartRunner's performance on a standard set of test functions, finding that it compares favorably against several widely-used alternatives: simulated annealing, stochastic hill climbing, evolutionary algorithm, and taboo search. Interestingly, adding the adaptive penalty to the first three of these algorithms considerably enhances their performance. We have also employed SmartRunner to study the Sherrington-Kirkpatrick (SK) spin glass model and Kauffman's NK fitness model - two NP-hard problems characterized by numerous local optima. In systems with quenched disorder, SmartRunner performs well compared to the other global optimizers. Moreover, in finite SK systems it finds close-to-optimal ground-state energies averaged over disorder.
Jianneng Yu, Alexandre V. Morozov
2023-09-08T20:54:57
http://arxiv.org/abs/2309.04591v1
# An adaptive Bayesian approach to gradient-free global optimization ###### Abstract Many problems in science and technology require finding global minima or maxima of various objective functions. The functions are typically high-dimensional; each function evaluation may entail a significant computational cost. The importance of global optimization has inspired development of numerous heuristic algorithms based on analogies with physical, chemical or biological systems. Here we present a novel algorithm, SmartRunner, which employs a Bayesian probabilistic model informed by the history of accepted and rejected moves to make a decision about the next random trial. Thus, SmartRunner intelligently adapts its search strategy to a given objective function and moveset, with the goal of maximizing fitness gain (or energy loss) per function evaluation. Our approach can be viewed as adding a simple adaptive penalty to the original objective function, with SmartRunner performing hill ascent or descent on the modified landscape. This penalty can be added to many other global optimization algorithms. We explored SmartRunner's performance on a standard set of test functions, finding that it compares favorably against several widely-used alternatives: simulated annealing, stochastic hill climbing, evolutionary algorithm, and taboo search. Interestingly, adding the adaptive penalty to the first three of these algorithms considerably enhances their performance. We have also employed SmartRunner to study the Sherrington-Kirkpatrick (SK) spin glass model and Kauffman's NK fitness model - two NP-hard problems characterized by numerous local optima. In systems with quenched disorder, SmartRunner performs well compared to the other global optimizers. Moreover, in finite SK systems it finds close-to-optimal ground-state energies averaged over disorder. ## Introduction Many models in fields of enquiry as diverse as natural and social sciences, engineering, machine learning, and quantitative medicine are described by complex non-linear functions of many variables. Often, the task is to find globally optimal solutions of these models, which is equivalent to finding global minima or maxima of the corresponding model functions. The global optimization problem arises in engineering design, economic and financial forecasting, biological data analysis, potential energy models in physics and chemistry, robot design and manipulations, and numerous other settings. Notable examples include finding the minimum of protein free energy in computer simulations of protein folding [1, 2], finding high-fitness solutions in evolving populations subject to mutation, selection, recombination, and genetic drift [3, 4, 5] (biological fitness quantifies the degree of reproductive success of an organism in an evolving population), and minimizing the error function in deep-learning neural network models [6, 7]. Mathematically, the global optimization problem is defined as finding the maximum (or the minimum) of a real-valued function \(\mathcal{F}(X)\), where \(X\) denotes a collection of discrete or continuous variables that describe the state of the system. The states of the system may be subject to nonlinear constraints. Here we focus on maximizing \(\mathcal{F}(X)\), which we will refer to as the fitness function; with \(\mathcal{F}(X)=-E(X)\), this is equivalent to minimizing an energy or error function \(E(X)\). 
In the energy function case, \(E(X)\) may signify the energy of a microstate or a free energy of a coarse-grained/mesoscopic state. The number of variables in \(X\) may be large in real-world applications and \(\mathcal{F}(X)\) may be costly to evaluate, making it highly desirable to develop efficient global optimization algorithms which require as few fitness function evaluations as possible to reach high-quality solutions. The set of fitness values assigned to all states of the system forms a fitness landscape - a high-dimensional surface which global optimization algorithms must traverse on their way to the mountain peaks that correspond to high-scoring solutions. If the fitness function is concave everywhere, the fitness landscape consists of a single peak and the global maximum is easy to find. However, in most problems of interest fitness landscapes contain multiple local maxima and saddle points which can trap the optimizer. There is no guarantee of finding the global maximum in this case unless all system states can be examined, which is usually not feasible because their number is exponentially large. A well-known worst-case scenario is a "golf-course" landscape which is flat everywhere apart from a few states that form a basin of attraction for an isolated deep hole, or a tall peak. In protein folding, this scenario is known as Levinthal's paradox [8] - proteins cannot fold on biologically reasonable time scales if they need to sample a sizable fraction of their microscopic configurations. While Levinthal's paradox has been resolved by introducing the concept of a protein folding funnel [1, 2, 9, 10], generally there is no guarantee of finding the global maximum in a reasonable number of steps, and global optimization is demonstrably an NP-hard problem [11]. If the gradient of the fitness function can be computed efficiently, it should be used to guide the search because the gradient vector indicates the direction of the steepest ascent. Here, we focus on systems with discrete or discretized states and assume that the gradient is not available. Namely, we consider an undirected graph with \(N\) nodes or vertices, where \(N\) is the total number of system states which may be astronomically large or even unknown. Each node \(i=1\ldots N\) is assigned a state \(X_{i}\) and a corresponding fitness value \(\mathcal{F}(X_{i})\). This definition describes a vast number of systems that are either naturally discrete (e.g., spin glasses [12]) or discretized by superimposing a lattice on a continuous landscape. Besides the fitness function, a global optimization algorithm requires a move set - a deterministic or stochastic rule for moving between states on the fitness landscape. A move set defines state neighborhoods - a set of states reachable from a given state in a single jump. The size of the neighborhood is typically fixed but may also change in complex ways, e.g. with recombination moves described below. Numerous empirical approaches have been developed over the years to tackle the problem of gradient-free optimization. Usually, these algorithms are based on an analogy with a physical, chemical or biological process in which some kind of optimization is known to occur. For example, the celebrated simulated annealing algorithm [13] is a Monte Carlo technique based on an analogy with a physical annealing process in which the material starts at a high temperature to enable constituent molecules or atoms to move around. 
The temperature is gradually decreased, allowing the material to relax into low-energy crystalline states. The rate of temperature decrease is a key parameter of the simulated annealing algorithm [14]. Numerous modifications of the basic simulated annealing approach have been developed over the years: parallel tempering Monte Carlo [15], replica Monte Carlo [16], population annealing [17], simulated tempering [18], and many others. Generally speaking, the idea of these algorithms is to overcome free energy barriers by simulating a broad range of temperatures. Besides estimating various thermodynamic quantities by Monte Carlo sampling, some of these algorithms have also been applied to combinatorial optimization problems such as the search for the ground states of Ising spin glasses [19]. Genetic or evolutionary algorithms [20, 21, 22] are based on an analogy with the evolution of a biological population: a population of candidate solutions is subjected to multiple rounds of recombination, mutation, and selection, enabling "the survival of the fittest". Harmony search is a music-inspired algorithm, applying such concepts as playing a piece of music from memory, pitch adjustment, and composing new notes to an evolving population of harmonies [23, 24]. Particle swarm algorithms draw their inspiration from the collective behavior of bird flocks and schools of fish [25, 26]. Taboo search is a deterministic strategy in which all nearest neighbors of the current state are examined and the best move is accepted [27]. To avoid returning to previously examined states via deterministic cycles, a fixed-length "taboo" list is kept of the recently visited states that are temporarily excluded from the search. Stochastic hill climbing employs a procedure in which the moves are accepted or rejected using a sigmoid (two-state) function with a fixed temperature \(T\)[28]. As in simulated annealing, this strategy allows for deleterious moves whose frequency depends on the value of \(T\). Many other heuristic algorithms and variations of the above algorithms are available in the literature [29, 30, 31, 32, 33, 34, 35]. Here we propose a novel global optimization algorithm which we call SmartRunner. SmartRunner is not based on an analogy with a physical, chemical or biological system. Instead, the algorithm uses previously accumulated statistics on rejected and accepted moves to make a decision about its next move. Thus, SmartRunner adapts its search strategy intelligently as a function of both local and global landscape statistics collected earlier in the run, with the goal of maximizing the overall fitness gain. Generally speaking, SmartRunner can be viewed as a stochastic extension of the Taboo search policy. However, unlike the Taboo algorithm, it does not need to evaluate fitness values of every neighbor of the current state, which may be computationally expensive. Moreover, it replaces infinite penalties assigned to the states in the "taboo" list by node-dependent penalties which only become infinite when all the nearest neighbors of the node is question have already been explored. We benchmark SmartRunner on a set of challenging global optimization problems and show that it consistently outperforms several other state-of-the-art algorithms. Moreover, we demonstrate that the SmartRunner approach amounts to hill climbing on a dynamically redefined fitness landscape. 
This redefinition can be used to enhance the performance of many other global search approaches such as simulated annealing or evolutionary algorithms. ## Materials and Methods **Bayesian estimation of the probability to find a novel beneficial move.** _Unweighted moves._ Consider a fitness landscape with a move set that defines \(\mathcal{N}\) nearest neighbors for each discrete system state \(X_{i}\) (\(i=1\ldots N\)). We divide all neighbors of the state \(X_{i}\) into two disjoint subsets: one set \(S_{p}^{i}\) of size \(U_{p}^{i}\geq 0\) contains all states with fitness \(\leq\mathcal{F}_{i}\), while the other set \(S^{i}\) of size \(U^{i}=\mathcal{N}-U_{p}^{i}\geq 0\) contains all states with fitness \(>\mathcal{F}_{i}\). Moves between \(X_{i}\) and any state in the set \(S_{p}^{i}\) are deleterious or neutral, while moves to any state in the set \(S^{i}\) are beneficial. Generally, we expect the size of \(S^{i}\) to be small: \(U^{i}\ll\mathcal{N}\simeq U_{p}^{i}\) because as a rule it is more difficult to find a beneficial move than a deleterious or neutral one. We assign the system state \(X_{i}\) to the node \(i\) on a network, with nodes representing system states and edges representing nearest-neighbor jumps. We consider a single random walker that explores the network. At each step, the walker is equally likely to initiate a jump to any of the \(\mathcal{N}\) neighbors of the current node. Let us say that the random walker is currently at node \(i\) and has made \(n\) unsuccessful attempts to make a move \(i\to j\in\mathrm{nnb}(i)\), where \(\mathrm{nnb}(i)=S_{p}^{i}\cup S^{i}\) is a set that contains all the nearest neighbors of node \(i\) (for simplicity, let us assume for the moment that all deleterious and neutral moves are rejected while a beneficial move, once found, is immediately accepted). After \(n\) trials, we have data \(\mathcal{D}=\{K_{p},m_{p},K,m\}\), where \(K_{p}\) is the total number of visits to the nodes in \(S_{p}^{i}\) and \(K=n-K_{p}\) is the total number of visits to the nodes in \(S^{i}\). Furthermore, \(m_{p}\leq K_{p}\) and \(m\leq K\) are the number of _unique_ visited nodes in \(S_{p}^{i}\) and \(S^{i}\), respectively. The probability of observing \(\mathcal{D}\) is given by \[P(\mathcal{D}|U^{i})=\binom{n}{K}\left(\frac{U^{i}}{\mathcal{N}}\right)^{K} \left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n-K}. \tag{1}\] Correspondingly, the probability of \(U^{i}\) given the data is \[P(U^{i}|\mathcal{D})=\frac{P(\mathcal{D}|U^{i})P(U^{i})}{\sum_{U^{\prime}=0}^ {\mathcal{N}-m_{p}}P(\mathcal{D}|U^{\prime})P(U^{\prime})}, \tag{2}\] where \(P(U)\) is the prior probability that there are \(U\) nearest neighbors of node \(i\) whose fitness is higher than \(\mathcal{F}_{i}\). Choosing an uninformative prior, we obtain: \[P(U)=\frac{1}{\mathcal{N}+1}. \tag{3}\] Note that \(\sum_{U=0}^{\mathcal{N}}P(U)=1\). Then Eq. (2) yields \[P(U^{i}|\mathcal{D})=\frac{1}{Z}\left(\frac{U^{i}}{\mathcal{N}}\right)^{K} \left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n-K}, \tag{4}\] where \(Z=\sum_{U^{\prime}=0}^{\mathcal{N}-m_{p}}\left(\frac{U^{\prime}}{\mathcal{N}} \right)^{K}\left(1-\frac{U^{\prime}}{\mathcal{N}}\right)^{n-K}\). Focusing on the \(K=0\), \(m=0\) limit (that is, on the case where no beneficial moves have yet been found) and assuming \(U^{i}\ll\mathcal{N}\), we obtain \[P(\mathcal{D}|U^{i})=\left(1-\frac{U^{i}}{\mathcal{N}}\right)^{n}\simeq e^{- \gamma U^{i}}, \tag{5}\] where \(\gamma=n/\mathcal{N}\). 
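A minimal numerical illustration of Eqs. (1)-(5) (our own sketch with example parameter values, not part of the SmartRunner code) tabulates the posterior of Eq. (4) in the \(K=0\) case and shows how the probability that at least one undiscovered beneficial neighbor exists decays with the number of unsuccessful trials \(n\), approximately as \(e^{-n/\mathcal{N}}\):

```python
import numpy as np

def posterior_K0(n, m_p, N_nbrs):
    """Posterior P(U | D) of Eq. (4) with a flat prior, for K = 0 (no beneficial move found)."""
    U = np.arange(N_nbrs - m_p + 1)
    likelihood = (1.0 - U / N_nbrs) ** n      # Eq. (5) before the exponential approximation
    return U, likelihood / likelihood.sum()

N_nbrs = 200                                  # example neighborhood size (our choice)
for n in (1, 10, 50, 200):
    U, post = posterior_K0(n, m_p=min(n, 20), N_nbrs=N_nbrs)
    # exact P(U > 0 | D) versus the exponential estimate exp(-n / N)
    print(n, 1.0 - post[0], np.exp(-n / N_nbrs))
```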
Furthermore, \[Z\simeq\sum_{U^{\prime}=0}^{\mathcal{N}-m_{p}}e^{-\gamma U^{\prime}}=\frac{1- e^{-\gamma\widetilde{\mathcal{N}}}}{1-e^{-\gamma}}, \tag{6}\] where \(\widetilde{\mathcal{N}}=\mathcal{N}-m_{p}+1\). Note that the exponential substitution becomes inaccurate for the terms in the sum in which \(U^{\prime}\) approaches \(\mathcal{N}-m_{p}\); however, since \(m_{p}\leq\mathcal{N}\), these terms are suppressed in the \(n\gg 1\) limit compared to the accurately approximated terms with \(U^{\prime}\ll\mathcal{N}-m_{p}\). We observe that \(m_{p}\) is a stochastic variable whose expectation value can be shown to be \[E[m_{p}]=\mathcal{N}\left[1-(1-\frac{1}{\mathcal{N}})^{n}\right]\simeq\mathcal{ N}\left[1-e^{-\gamma}\right], \tag{7}\] where the last approximation requires \(\mathcal{N}\gg 1\). Finally, \[P(U^{i}|\mathcal{D})=\frac{1}{Z}e^{-\gamma U^{i}}=e^{-\gamma U^{i}}\frac{1-e^ {-\gamma}}{1-e^{-\gamma\widetilde{\mathcal{N}}}}. \tag{8}\] If \(\mathcal{N}\gg 1\) and \(\mathcal{N}\gg m_{p}\), Eq. (8) yields \[P(U^{i}=0|\mathcal{D})\simeq\frac{1-e^{-\frac{n}{\mathcal{N}}}}{1-e^{-n}} \simeq 1-e^{-\frac{n}{\mathcal{N}}}, \tag{9}\] where the last approximation is valid for \(n\gg 1\). Thus, the probability to find beneficial moves, \(P(U^{i}>0|\mathcal{D})\simeq e^{-n/\mathcal{N}}\), decreases exponentially with \(n\). Note that if \(n=0\) (no random trials have been made), Eq. (8) yields \(P(U^{i}>0|\mathcal{D})=\mathcal{N}/(\mathcal{N}+1)\), consistent with the prior probability in Eq. (3) which assigns equal weights to all \(\mathcal{N}+1\) values of \(U^{i}\). Thus, to begin with the system is very optimistic that a beneficial move will be found. However, if \(m_{p}=\mathcal{N}\) (that is, all moves have been tried and none are beneficial), Eq. (8) yields \(P(U^{i}>0|\mathcal{D})=0\), as expected. Thus, the system gradually loses its optimism about finding a beneficial move as it makes more and more unsuccessful trials. Finally, we compute the probability of finding a higher-fitness target in the next step: \[p_{f}=\sum_{U^{i}=0}^{\mathcal{N}-m_{p}}\frac{U^{i}}{\mathcal{N}}P(U^{i}| \mathcal{D})=\frac{1}{\mathcal{N}}\frac{e^{-\gamma}-\widetilde{\mathcal{N}}e ^{-\gamma\widetilde{\mathcal{N}}}+e^{-\gamma(\widetilde{\mathcal{N}}+1)}( \widetilde{\mathcal{N}}-1)}{(1-e^{-\gamma})(1-e^{-\gamma\widetilde{\mathcal{N }}})}. \tag{10}\] In the beginning of the search, \(n\ll\mathcal{N}\) and, correspondingly, \(m_{p}\ll\mathcal{N}\). If, in addition, \(n\gg 1\) (which implies \(\mathcal{N}\gg 1\)), Eq. (10) simplifies considerably: \[p_{f}\simeq\frac{1}{\mathcal{N}}\frac{1-n/\mathcal{N}}{n/\mathcal{N}}=\frac{1 }{n}\left[1+\mathcal{O}(\frac{n}{\mathcal{N}})\right]. \tag{11}\] Note that in this limit \(p_{f}\) is independent of \(\mathcal{N}\) to the leading order. If \(m_{p}=\mathcal{N}\), \(\widetilde{\mathcal{N}}=1\) and \(p_{f}=0\) in Eq. (10), as expected. Note that if \(n=m_{p}=0\), \(\widetilde{\mathcal{N}}=\mathcal{N}+1\) and \(\gamma=0\). Then \(Z=\mathcal{N}+1\) from Eq. (6), leading to the following simplification of Eq. (10): \[p_{f}=\frac{1}{\mathcal{N}(\mathcal{N}+1)}\sum_{U^{i}=0}^{\mathcal{N}}U^{i}= \frac{1}{2}. \tag{12}\] Thus, not surprisingly, the probability of finding a higher-fitness target before making any moves is \(1/2\). After making a single move and not finding a higher-fitness target (\(n=1\), \(m_{p}=1\)), \(\widetilde{\mathcal{N}}=\mathcal{N}\) and \(\gamma=1/\mathcal{N}\). 
With the additional assumption that \(\mathcal{N}\gg 1\), we obtain: \[p_{f}\simeq\frac{1-2e^{-1}}{1-e^{-1}}+\mathcal{O}(\frac{1}{\mathcal{N}})\simeq 0.42. \tag{13}\] In summary, the probability of finding a beneficial move, \(p_{f}\), starts out at \(0.5\) and decreases with the number of trials until either a beneficial move is found (in which case Eq. (10) is no longer applicable) or there are no more novel moves to find (in which case \(p_{f}=0\)). The asymptotic \(\simeq 1/n\) behavior of \(p_{f}\) is universal in the \(n\gg 1\) limit (Eq. (11)). Finally, we observe that the above formalism can be extended to any subsets \(S_{p}^{i}\) and \(S^{i}\) since the initial division into deleterious/neutral moves in \(S_{p}^{i}\) and beneficial moves in \(S^{i}\) was arbitrary. Thus, even if a beneficial move is found, we can add it to \(S_{p}^{i}\) and regard \(S^{i}\) as the set of _remaining_, or _novel_ beneficial moves. _Weighted moves._ The probability to find a novel beneficial move (Eq. (10)) was derived under the assumption that the total number of neighbors \(\mathcal{N}\) is known and that the move set is unweighted - each new move is chosen with equal probability \(1/\mathcal{N}\). However, move sets may be intrinsically weighted: for example, in systems with recombination relative weights of recombination moves depend on the genotype frequencies in the population. In addition, it may be of interest to assign separate weights to classes of moves, such as one- and two-point mutations in sequence systems, or one- and two-spin flips in spin systems. In this section, we relax the assumption of unweighted moves, while still treating \(\mathcal{N}\) as a known constant. Specifically, we consider a set of weights \(\{w_{j}\}_{j=1}^{\mathcal{N}}\) associated with \(i\to j\in\mathrm{nnb}(i)\) moves. The probability of a \(i\to j\) jump is then given by \(p(i\to j)=w_{j}/W\), where \(W=\sum_{j=1}^{\mathcal{N}}w_{j}=\sum_{j=1}^{U_{p}^{i}}w_{j}+\sum_{j=1}^{U^{i} }w_{j}\equiv W_{U^{i}_{p}}+W_{U^{i}}\) is the sum over all nearest-neighbor weights, and \(W_{U^{i}_{p}}\) and \(W_{U^{i}}\) are partial sums over the weights in \(S_{p}^{i}\) and \(S^{i}\), respectively. Consequently, \[P(\mathcal{D}|U^{i},\{w_{j}\})=\binom{n}{K}\left(\frac{W_{U^{i}}}{W}\right)^{K }\left(1-\frac{W_{U^{i}}}{W}\right)^{n-K}, \tag{14}\] which in the \(K=0\) case reduces to \[P(\mathcal{D}|U^{i},\{w_{j}\})=\left(1-\frac{W_{U^{i}}}{W}\right)^{n}\simeq e^ {-\frac{W_{U^{i}}}{W}n}. \tag{15}\] Next, we integrate the likelihood over the edge weights: \[P(\mathcal{D}|U^{i})=\int_{0}^{\infty}dw_{1}\ldots dw_{\mathcal{N}}P(w_{1}) \ldots P(w_{\mathcal{N}})e^{-\frac{n}{W}(w_{1}+\cdots+w_{U^{i}})}. \tag{16}\] We represent the probability distribution of edge weights by a Gaussian mixture model, which can be used to describe multimodal distributions of arbitrary complexity:[36] \[P(w)=\frac{1}{\Omega}\sum_{k=1}^{\mathcal{P}}\frac{p_{k}}{\sqrt{2\pi}\sigma_{ k}}e^{-\frac{(w-\bar{w}_{k})^{2}}{2\sigma_{k}^{2}}}, \tag{17}\] where \(\Omega\) is the normalization constant, \(\mathcal{P}\) is the number of Gaussian components and \(p_{k}\) is the relative weight of component \(k\): \(\sum_{k=1}^{\mathcal{P}}p_{k}=1\). In the \(\mathcal{N}\gg 1\) limit, we expect \(W\simeq\langle W\rangle=\mathcal{N}\sum_{k}p_{k}\bar{w}_{k}\equiv\mathcal{N} \bar{w}\), such that Eq. 
(16) simplifies to \[P(\mathcal{D}|U^{i})\simeq\prod_{j=1}^{U^{i}}\int_{0}^{\infty}dw_{j}P(w_{j})e^ {-\frac{w_{j}}{\langle W\rangle}n}=e^{-\beta U^{i}}, \tag{18}\] where \[e^{-\beta}=\frac{1}{2\Omega}\sum_{k=1}^{\mathcal{P}}p_{k}\mathrm{erfc}\left(\frac {c_{k}-\bar{w}_{k}}{\sqrt{2}\sigma_{k}}\right)e^{-\alpha_{k}}. \tag{19}\] Here, \(\alpha_{k}=\frac{n\bar{w}_{k}}{\langle W\rangle}-\frac{n^{2}\sigma_{k}^{2}}{2 \langle W\rangle^{2}}=\gamma\frac{\bar{w}_{k}}{\bar{w}}-\gamma^{2}\frac{\sigma_ {k}^{2}}{2\bar{w}^{2}}\), \(c_{k}=\frac{\sigma_{k}^{2}n}{\langle W\rangle}=\gamma\frac{\sigma_{k}^{2}}{\bar {w}}\) and \(\mathrm{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}dte^{-t^{2}}\) is the complementary error function. The normalization constant is given by \[\Omega=\frac{1}{2}\sum_{k=1}^{\mathcal{P}}p_{k}\mathrm{erfc}\left(-\frac{\bar{ w}_{k}}{\sqrt{2}\sigma_{k}}\right). \tag{20}\] Note that if all the Gaussians are narrow (\(\sigma_{k}\ll\bar{w}_{k}\), \(\forall k\)), \(\mathrm{erfc}\left(-\frac{\bar{w}_{k}}{\sqrt{2}\sigma_{k}}\right)\to 2\) and thus \(\Omega\to 1\), as expected. If the edge weights are Gaussian distributed with mean \(\bar{w}\) and standard deviation \(\sigma\) (i.e., \(\mathcal{P}=1\)), Eq. (19) becomes \[e^{-\beta}=\frac{\mathrm{erfc}\left(\frac{c-\bar{w}}{\sqrt{2}\sigma}\right)}{ \mathrm{erfc}\left(-\frac{\bar{w}}{\sqrt{2}\sigma}\right)}e^{-\alpha}, \tag{21}\] where \(\alpha=\gamma-\gamma^{2}\frac{\sigma^{2}}{2\bar{w}^{2}}\) and \(c=\gamma\frac{\sigma^{2}}{\bar{w}}\). If in addition all weights are equal, \(\frac{\sigma}{\bar{w}}\to 0\) and \(\beta\to\gamma\) in Eq. (21), such that Eq. (18) for the likelihood reduces to Eq. (5). Thus, the difference between \(\beta\) and \(\gamma\) is due to fluctuation corrections. The model evidence \(Z\), the posterior probability \(P(U^{i}|\mathcal{D})\) and \(p_{f}\), the probability of finding a higher-fitness target in the next step, are found by substituting \(\gamma\to\beta\) into Eqs. (6), (8) and (10), respectively. Note that if \(n\to 0\), \(\beta\to 0\) as well and therefore \(p_{f}\to 1/2\) since the argument leading to Eq. (12) still holds. Moreover, Eq. (10) still yields \(p_{f}=0\) when all the neighbors have been explored (\(m_{p}=\mathcal{N}\)). Finally, if \(n,m_{p}\ll\mathcal{N}\) and \(n\gg 1\), \(\alpha\simeq\gamma\) and the ratio of complementary error functions in Eq. (21) is \(\simeq 1\). Then the argument leading to Eq. (11) also holds, yielding \(p_{f}\simeq 1/n\) asymptotically even in the weighted move case. Thus, introducing weighted moves does not lead to qualitative differences in the \(p_{f}\) dependence on \(n\) - any substantial differences are localized to the intermediate region: \(1\leq n\leq 30\) or so, and in many systems the \(p_{f}\) curves for weighted and unweighted moves overlap almost completely (cf. red and blue curves in Fig. S1A-D). Next, we consider the exponential probability distribution of edge weights - an archetypical distribution often found in natural and artificial networks:[37] \[P(w)=\frac{1}{\bar{w}}e^{-\frac{w}{\bar{w}}}, \tag{22}\] where \(\bar{w}\) denotes the mean of the exponential distribution, such that \(\langle W\rangle=\mathcal{N}\bar{w}\). It is easy to show that the likelihood \(P(\mathcal{D}|U^{i})\) is given by Eq. (18) with \(\beta_{\mathrm{exp}}=\log\left(1+\frac{n}{N}\right)\). Consequently, as in the case of the Gaussian mixture model, the model evidence \(Z\), the posterior probability \(P(U^{i}|\mathcal{D})\) and \(p_{f}\) are given by Eqs. 
(6), (8) and (10), respectively, but with \(\beta_{\mathrm{exp}}\) instead of \(\gamma\). Clearly, \(\beta_{\mathrm{exp}}\to 0\) as \(n\to 0\) and therefore \(p_{f}\to 1/2\) as in the Gaussian mixture case. Similarly, \(p_{f}=0\) once \(m_{p}=\mathcal{N}\). Lastly, the \(n,m_{p}\ll\mathcal{N}\) limit yields \(\alpha\simeq\gamma\), which in turn leads to \(p_{f}\simeq 1/n\) under the additional assumption of \(n\gg 1\). Thus, the dependence of \(p_{f}\) on \(n\) for exponentially distributed weights is again qualitatively similar to the \(p_{f}\) functions in the corresponding unweighted cases (cf. red and blue curves in Fig. S1E,F). _Simplified treatment of \(p_{f}\)._ The computation of \(p_{f}\) for the unweighted and the exponentially distributed cases requires the knowledge of \(m_{p}\) and \(\mathcal{N}\) besides the number of trials \(n\). For weighted move sets in the Gaussian mixture model, one would additionally require \(p_{k}\), \(\bar{w}_{k}\) and \(\sigma_{k}\) for each Gaussian component. Unless these parameters are known _a priori_, they would have to be estimated from a sample of edge weights, increasing the computational burden. Moreover, this extra effort may not be justified, in the view of the nearly universal dependence of \(p_{f}\) on \(n\) in all three cases considered above. Even keeping track of \(\mathcal{N}\) may be complicated for some move sets, e.g. a recombination+mutation move set employed in genetic algorithms.[20, 21, 22] With recombination, \(\mathcal{N}\) depends on the current state of the population and therefore generally changes with time. Hence, computing \(\mathcal{N}\) at each step would increase the complexity of the algorithm. We propose to capitalize on the approximate universality of \(p_{f}\) by creating a minimal model for it which depends only on the number of trials \(n\). Specifically, we define \[p_{f}(n)=\begin{cases}\frac{n^{2}}{250}-\frac{2n}{25}+\frac{1}{2}&\text{if }0 \leq n\leq 5,\\ \frac{1}{n}&\text{if }n>5.\end{cases} \tag{23}\] This model has the right asymptotics at \(n=0\) and in the \(n,m_{p}\ll\mathcal{N}\), \(n\gg 1\) limit, but does not go to \(0\) identically when \(m_{p}=\mathcal{N}\) because enforcing this condition requires the knowledge of \(\mathcal{N}\). However, if \(\mathcal{N}\gg 1\), as can be expected with complex move sets, \(n\gg 1\) at \(m_{p}=\mathcal{N}\) and the difference between the small but non-zero value of \(p_{f}\) in Eq. (23) and zero will be immaterial (cf. green curves in Fig. S1 for \(p_{f}(n)\) in several model systems). **Implementation of the optimal search policy: the SmartRunner algorithm.** Given Bayesian probabilistic estimates of finding novel moves between a given node and its nearest neighbors, we need to formulate an optimal search policy in order to maximize the expected fitness gain over the course of the run with \(l_{\text{tot}}\) random trials. Assuming that the walker is currently at node \(i\), there are two options after each random trial: stay at node \(i\) (thereby rejecting the move) or jump to a neighboring node \(j\). If the walker stays at node \(i\), we expect it to search for \(l_{i}\) steps before finding a higher-fitness node which has not been detected before. 
The value of the policy of staying at node \(i\) can then be evaluated as \[\mathcal{S}_{i}=\overline{\Delta\mathcal{F}_{b}}+\overline{R}(l_{\text{rem}}-l_{i})=-\overline{R}l_{i}+\mathcal{C}, \tag{24}\] where \(\overline{\Delta\mathcal{F}_{b}}=\overline{\mathcal{F}_{k}-\mathcal{F}_{i}}\) is the expected fitness gain of the newly found beneficial move to a node \(k\in\text{nnb}(i)\) and \(\overline{R}\) is the expected rate of fitness gain per step, which multiplies the number of steps that will remain in the run once the new move is found. Furthermore, \(l_{\text{rem}}\leq l_{\text{tot}}\) is the number of steps remaining in the simulation, and \(l_{i}\) is the expected number of steps needed to find \(k\): \[l_{i}=\text{rnd}\left[\frac{1}{p_{f}^{i}}\right], \tag{25}\] where \(p_{f}^{i}\) is given by Eq. (10) or Eq. (23) (with the dependence on the node index \(i\) made explicit for clarity) and \(\text{rnd}[\ ]\) is the rounding operator. Finally, \(\mathcal{C}=\overline{\Delta\mathcal{F}_{b}}+\overline{R}\,l_{\text{rem}}\) denotes a constant contribution independent of the node index, under the assumption that \(\overline{\Delta\mathcal{F}_{b}}\) is the same for all nodes. The value of the policy of jumping to a downstream node \(j\) from \(i\) is given by \[\mathcal{L}_{i\to j}=(\mathcal{F}_{j}-\mathcal{F}_{i})-\overline{R}(l_{j}+1)+\mathcal{C}, \tag{26}\] where the extra factor of \(-\overline{R}\) accounts for the jump between the current node \(i\) and the new node \(j\), which reduces the total number of remaining steps by \(1\).

We represent each visited state as a node and each rejected or accepted move as an edge on a directed graph \(\mathcal{G}\), implemented using the DiGraph class from NetworkX (Footnote 1). Thus, node \(i\) is part of the directed graph \(\mathcal{G}\) which contains information about all previously attempted moves. Depending on which previous moves have been explored, node \(i\) may be connected to multiple downstream nodes by directed edges; in general, these edges may form directed cycles. To see if the jump to one of the nodes downstream of node \(i\) will yield a more beneficial policy, we traverse \(\mathcal{G}\) recursively starting from the node \(i\), for up to \(l_{\max}\) steps. For computational convenience, \(\mathcal{G}\) is implemented with two types of nodes: 'regular' nodes \(i,j,k,\dots\) which denote states on the fitness landscape and are therefore assigned fitness values \(\mathcal{F}_{i},\mathcal{F}_{j},\mathcal{F}_{k},\dots\) (black circles in Fig. 1), and 'terminal' nodes \(i_{t},j_{t},k_{t},\dots\) which are assigned fitness values \(-\overline{R}l_{i},-\overline{R}l_{j},-\overline{R}l_{k},\dots\) (green circles in Fig. 1). Note that we have omitted the node-independent contribution \(\mathcal{C}\) in Eqs. (24) and (26). The edges of \(\mathcal{G}\) connecting two regular nodes (solid black arrows in Fig. 1): \(m\to n\) are assigned a weight of \(\mathcal{F}_{n}-\mathcal{F}_{m}\). The edges of \(\mathcal{G}\) connecting a regular node to a terminal node (dashed green arrows in Fig. 1): \(m\to m_{t}\) are assigned a weight of \(-\overline{R}\,l_{m}\). By construction, terminal nodes do not have descendants and each regular node has exactly one terminal descendant.

Footnote 1: [https://networkx.org/documentation/stable/reference/classes/digraph.html](https://networkx.org/documentation/stable/reference/classes/digraph.html)

The policy values are computed using a set of recursive paths on the directed graph \(\mathcal{G}\).
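The bookkeeping described above can be sketched with the NetworkX DiGraph class; the helper functions, attribute names and the tuple label used for terminal nodes below are our own illustrative choices and are not taken from the released SmartRunner code:

```python
import networkx as nx

def add_state(G, state, fitness, R_bar, l_state):
    """Register a newly visited state: a 'regular' node plus its 'terminal' child.

    The edge to the terminal child carries the stay penalty -R_bar * l_state,
    mirroring the terminal-edge weights described in the text (cf. Eq. (24))."""
    G.add_node(state, fitness=fitness, kind="regular")
    terminal = (state, "t")                      # hypothetical terminal-node label
    G.add_node(terminal, kind="terminal")
    G.add_edge(state, terminal, weight=-R_bar * l_state)

def add_move(G, src, dst):
    """Record an attempted move src -> dst with weight F(dst) - F(src) (cf. Eq. (26))."""
    G.add_edge(src, dst, weight=G.nodes[dst]["fitness"] - G.nodes[src]["fitness"])

G = nx.DiGraph()   # directed graph of all attempted moves
```

In a full run the terminal-edge weight of the current node is refreshed whenever its trial statistics change, so that the penalty grows as the node is explored.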
Figure 1: **A schematic representation of the directed graph \(\mathcal{G}\) that represents the search process.** Regular nodes (system states) are represented by black circles with the corresponding fitness values; terminal nodes are shown as green circles. Directed edges connecting regular and terminal nodes are shown as dashed green lines and assigned a value of \(-\overline{R}l_{m}\), where \(l_{m}\) is the expected number of steps to find a novel beneficial move starting from regular node \(m\), and \(\overline{R}\) is the expected rate of fitness gain per step. Directed edges connecting two regular nodes are shown as solid black lines and assigned a value of the fitness difference between the two nodes. Note that the set of children of a given regular node always has one terminal node and \(m_{p}=(0\dots\mathcal{N})\) regular nodes depending on how much exploration has been done. In general, \(\mathcal{G}\) may contain directed cycles.

All valid paths must start from node \(i\) and end at one of the terminal nodes reachable with \(\leq l_{\max}\) steps. The goal is to identify a path which has the maximum weight among all paths. Note that with a single step, the only valid path is \(i\to i_{t}\) and its weight is given by \(\mathcal{S}_{i}\) from Eq. (24). The minimum allowed value of \(l_{\max}\) is thus equal to 2 because this enables computations of the path weight as \(\mathcal{L}_{i\to j}\) in Eq. (26) for \(j\in\operatorname{nnb}(i)\). Larger values of \(l_{\max}\) will enable longer jumps if any are available; longer jumps entail repeated application of Eq. (26) to compute the total path weight. If the winning path is \(i\to i_{t}\), the walker stays at the node \(i\) and makes another random trial, updating its \(p_{f}^{i}\) accordingly. If the winning path is \(i\to j_{t}\) (where \(j\) may be several steps away depending on the value of \(l_{\max}\)), the walker jumps to the node \(j\) and makes a new random trial from that node. The node \(j\) statistics such as \(n\) and \(m_{p}\) are initialized if the node has not been visited before in the run, and updated otherwise.

Note that if Eq. (10) is used to compute \(p_{f}^{i}\), it is possible to obtain \(p_{f}^{i}=0\) and therefore \(l_{i}=\infty\) in Eq. (25), which is represented computationally by a large positive constant. The case in which both node \(i\) and all its neighbors \(j\) reachable in \(\leq l_{\max}\) steps are in this category requires special treatment because the \(\overline{R}\,l_{i}\) and \(\overline{R}\,l_{j}\) penalties cancel out and SmartRunner essentially becomes a local optimizer driven solely by fitness differences. To avoid being trapped in local fitness maxima in this special case, SmartRunner employs two alternative strategies. In the first strategy, a random path is chosen in the ensemble of all paths with \(\leq l_{\max}\) steps, instead of the path with the maximum sum of edge weights. In the second strategy, a longer random path to the boundary of the \(p_{f}=0\) region is constructed explicitly; the random path can have up to \(10^{3}\) steps. In both strategies, if the boundary of the \(p_{f}=0\) region is not reached, the procedure is repeated at subsequent steps, resulting in an overall random walk to the boundary of the "maximally-explored" region.
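A bare-bones version of the recursive path search described above, written against the graph sketch shown earlier, might look as follows; a full implementation would also need to return the winning destination and handle the \(p_{f}=0\) random-walk fallback:

```python
def best_path_value(G, node, l_max):
    """Maximum sum of edge weights over all paths of <= l_max steps that start
    at `node` and end at a terminal node.  With l_max = 1 only the stay path
    node -> node_t is reachable (its weight is S_i of Eq. (24), up to the
    constant C); l_max >= 2 also covers jumps L_{i->j} of Eq. (26)."""
    best = float("-inf")
    for child in G.successors(node):
        w = G.edges[node, child]["weight"]
        if G.nodes[child].get("kind") == "terminal":
            best = max(best, w)                     # ... -> node -> node_t
        elif l_max >= 2:
            # spend one step on the hop, then recurse with a shorter horizon;
            # the bounded depth also keeps directed cycles from recursing forever
            best = max(best, w + best_path_value(G, child, l_max - 1))
    return best
```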
The SmartRunner algorithm depends on \(\overline{R}\), the expected rate of fitness gain per step. Larger positive values of \(\overline{R}\) will encourage jumps to less-explored regular nodes even if those have slightly lower fitness values and will therefore promote landscape exploration. Smaller positive values of \(\overline{R}\) will encourage more thorough exploration of the current node but will not fully prevent deleterious moves. Negative values of \(\overline{R}\), however, will prevent all further exploration. We adjust the value of \(\overline{R}\) adaptively as follows. The algorithm starts out with a user-provided initial value \(\overline{R}_{\text{init}}\). For each move, either accepted or rejected, the corresponding fitness value is recorded in a fitness trajectory array. Once \(M\) values are accumulated in the array, a linear model is fit to the fitness trajectory, yielding the fitted slope \(R^{\text{fit}}\). Finally, \(\overline{R}\) is computed as \[\overline{R}=\begin{cases}\alpha R^{\text{fit}}&\text{if }R^{\text{fit}}\geq\epsilon,\\ \alpha\epsilon\exp(R^{\text{fit}}-\epsilon)&\text{if }R^{\text{fit}}<\epsilon,\end{cases} \tag{27}\] where \(\epsilon\) is a small positive constant. Note that the second line serves to 'rectify' the values of \(R^{\text{fit}}\) that fall below the threshold \(\epsilon\), preventing \(\overline{R}\) from ever reaching negative values. The value of \(\overline{R}\) is recomputed every \(M\) steps using Eq. (27), providing adaptive feedback throughout the run. The positive hyperparameter \(\alpha\) is the level of 'optimism' - how much more optimistic the system is about its future success compared to past performance. As discussed above, larger values of \(\alpha\) will promote landscape exploration.

The SmartRunner algorithm can be summarized as the following sequence of steps:

**SmartRunner Algorithm**

**INPUT:**
Initial state: \(X_{0}\)
Fitness landscape function: \(X\to\mathcal{F}\)
Move set function: \(X^{\mathrm{old}}\to X^{\mathrm{new}}\)
Total number of iterations: \(l_{\mathrm{tot}}\)
Maximum length of paths explored from each state: \(l_{\mathrm{max}}\)
Initial guess of the fitness rate: \(\overline{R}_{0}\)
Optimism level: \(\alpha\)
Length of sub-trajectory for recomputing \(\overline{R}\): \(M\)

1. Initialize directed graph \(\mathcal{G}\).
2. Initialize regular node \(X_{0}\) with \(\mathcal{F}(X_{0})\).
3. Initialize terminal node \(X_{0,t}\).
4. Initialize \(\overline{R}=\overline{R}_{0}\).
5. Initialize \(l=0\).
6. Add an edge \(X_{0}\to X_{0,t}\) with a weight \(-\overline{R}l_{X_{0}}\).

**do:**

1. Generate a random move: \(X\to X^{\prime}\).
2. If \(X^{\prime}\notin\mathcal{G}\): add \(X^{\prime}\) to \(\mathcal{G}\) with \(\mathcal{F}(X^{\prime})\); add a terminal node \(X^{\prime}_{t}\); add an edge \(X^{\prime}\to X^{\prime}_{t}\) with a weight \(-\overline{R}l_{X^{\prime}}\).
3. If \(X\to X^{\prime}\notin\mathcal{G}\): add an edge \(X\to X^{\prime}\) with a weight \(\mathcal{F}(X^{\prime})-\mathcal{F}(X)\).
4. Update statistics for \(X\), recompute \(l_{X}\) and update the \(X\to X_{t}\) edge weight.
5. Recursively compute sums of edge weights for all paths of length \(\leq l_{\mathrm{max}}\) starting at \(X\) and ending at downstream terminal nodes. If \(l_{X}=\infty\) for the \(X\to X_{t}\) path and \(l_{Y_{k}}=\infty\) for all other paths \(X\ldots Y_{k}Y_{k,t}\) in the ensemble, initiate a random walk; otherwise, stay at \(X\) or jump to \(Y_{k}\) according to the path with the maximum sum of edge weights.
6. If \(l=M,2M,3M,\ldots\): recompute \(\overline{R}\) using Eq. (27).

**while** \(l\leq l_{\mathrm{tot}}\)

**OUTPUT:**
Globally best state: \(X^{\mathrm{best}}\), \(\mathcal{F}(X^{\mathrm{best}})\)
Fitness trajectory: \(\{\mathcal{F}\}_{l=1}^{l_{\mathrm{tot}}}\)
Total number of fitness function evaluations: \(f_{\mathrm{eval}}\)
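Step 6 of the listing above, i.e. the rectified update of Eq. (27), can be implemented directly. The sketch below assumes the last \(M\) fitness values are kept in an array, and the default value of \(\epsilon\) is our own choice rather than a value quoted in the text:

```python
import numpy as np

def update_R_bar(fitness_window, alpha, eps=1e-6):
    """Adaptive update of the expected rate of fitness gain (Eq. (27)).

    A linear fit to the last M recorded fitness values gives the slope R_fit;
    slopes below eps are 'rectified' so that R_bar never becomes negative."""
    steps = np.arange(len(fitness_window))
    R_fit = np.polyfit(steps, fitness_window, 1)[0]   # fitted slope
    if R_fit >= eps:
        return alpha * R_fit
    return alpha * eps * np.exp(R_fit - eps)
```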
**Adaptive fitness landscape.** The stay or leave policy defined by Eqs. (24) and (26) amounts to an adaptive redefinition of the fitness landscape: \[\mathcal{F}_{i}\to\widetilde{\mathcal{F}}_{i}=\mathcal{F}_{i}-\overline{R}l_{i}, \tag{28}\] where \(\overline{R}\,l_{i}\) is a positive occupancy penalty whose overall magnitude is controlled by the hyperparameter \(\overline{R}\). The penalty increases as the node \(i\) is explored more and more, resulting in progressively larger values of \(l_{i}\). Note that if Eq. (23) is used to estimate \(p_{f}\), the only additional piece of information required to compute \(\widetilde{\mathcal{F}}_{i}\) from \(\mathcal{F}_{i}\) is the total number of trials \(n_{i}\) at the node \(i\), which is easy to keep track of. Thus, \(\widetilde{\mathcal{F}}_{i}\) can serve as input not only to SmartRunner, which in this view amounts to hill climbing on the \(\widetilde{\mathcal{F}}\) landscape, but to any global optimization algorithm. In algorithms where individual moves are accepted or rejected sequentially (e.g., Simulated Annealing, Stochastic Hill Climbing), we compare \(\widetilde{\mathcal{F}}_{i}\) with \(\widetilde{\mathcal{F}}_{j}-\overline{R}\) to account for the fact that jumping from node \(i\) to node \(j\) decreases the total number of remaining steps by 1 (cf. Eq. (26)). In algorithms which involve non-sequential scenarios (e.g., Evolutionary Algorithm), modified fitnesses \(\widetilde{\mathcal{F}}\) from Eq. (28) are used directly instead of \(\mathcal{F}\).

## Results

**SmartRunner can climb out of deep local maxima.** To demonstrate the ability of SmartRunner to traverse local basins of attraction leading to suboptimal solutions, we have constructed a 2D fitness landscape defined by a weighted sum of two Gaussians (Fig. 2). The left basin of attraction leads to a local maximum (\(\mathcal{F}^{\star}=50.17\)) which is much smaller compared to the global maximum on the right (\(\mathcal{F}^{\star}=78.48\)). The two basins of attraction are separated by a steep barrier. We start the SmartRunner runs from the left of the local basin of attraction, making sure that the walker rapidly falls there first, reaching the local maximum in a few thousand steps (Fig. 2A). Afterwards, the walker explores the local basin of attraction more and more extensively (Fig. 2B,C) until the barrier is finally overcome and the global maximum is found (Fig. 2D). The exploration strategy is automatically adapted to the fitness landscape features rather than being driven by external parameters such as the simulated annealing temperature.

**SmartRunner performance on 4D test functions.** Next, we have explored SmartRunner performance on three standard 4D test functions often used to benchmark global optimization algorithms:[29] Rastrigin, Ackley and Griewank (see SI Methods for function definitions). The test functions are defined in standard hypercube ranges and supplemented with periodic boundary conditions. The resulting fitness landscapes are discretized using the same step size \(\Delta x\) in all 4 directions, resulting in \(1.63\times 10^{9}\), \(1.17\times 10^{10}\) and \(2.08\times 10^{12}\) distinct fitness states for Rastrigin, Ackley and Griewank functions, respectively.
All three test functions are characterized by multiple local maxima; the unique global maximum is located at \(\vec{\mathbf{x}}=(0,0,0,0)\) and corresponds to \(\mathcal{F}=0\). The landscapes are explored by randomly choosing one of the directions and then increasing or decreasing the corresponding coordinate by \(\Delta x\) (the _nnb_ moveset). Fig. 3 shows the performance of SmartRunner on the Rastrigin test function: Fig. 3A is a hyperparameter scan which shows no consistent trend in the dependence of the average best fitness values \(\langle\mathcal{F}_{\text{best}}\rangle\) on \(\overline{R}_{\text{init}}\), the initial rate of fitness gain per step. This is expected because the value of \(\overline{R}\) is reset adaptively during the run (cf. Eq. (27)). In contrast, there is a slight preference for lower values of optimism \(\alpha\). Fig. 3B shows the corresponding average number of function evaluations (unique fitness function calls), which can be used as a measure of algorithm performance, especially in cases where fitness function calls are expensive, making it advisable to focus on maximizing the average fitness gain per function evaluation. As expected, the optimal values of \(\alpha\) correspond to the lower number of function evaluations since lower values of \(\alpha\) tend to favor exploitation (i.e., a more thorough search of the neighbors of the current state) over exploration (which favors more frequent jumps between landscape states). Figs. S2A,B and S2C,D show the corresponding results for Ackley and Griewank test functions, respectively. Lower values of \(\alpha\) work better for Ackley, while \(\alpha\geq 5\) are preferable for Griewank, indicating that in general a scan over several values of \(\alpha\) may be required. Since the Griewank landscape is considerably larger and the global maximum is not always found, we also show the maximum best-fitness value found over 50 independent runs, and the corresponding number of function evaluations (Fig. S2E,F). For lower values of \(\alpha\), the global maximum is not always found but rather another high-fitness solution. With reasonable hyperparameter settings, all 50 SmartRunner runs find the global maximum of the Rastrigin landscape (Fig. 3C), requiring \(\simeq 15500\) function evaluations on average. Fig. 3E shows three representative fitness trajectories - rapid convergence to the vicinity of the global maximum is observed in \(\leq 4\times 10^{4}\) steps, regardless of the starting state.

The dynamics of global optimization strongly depend on the moveset type. To explore whether SmartRunner can adapt to movesets with non-local moves, we have considered the _spmut_ moveset in which a randomly chosen coordinate is changed to an arbitrary new value on the discretized landscape. Thus, instead of updating a given coordinate in \(\pm\Delta x\) increments, most moves change the coordinate by many multiples of \(\Delta x\), creating a densely connected landscape: for example, the number of nearest neighbors is \(200\times 4=800\) for the Rastrigin function, instead of just 8 with the _nnb_ moveset (the abbreviation _spmut_ stands for single-point mutations, since a given \(x_{i}\) can 'mutate' into any other \(x_{j}\), \(j\neq i\) from a discrete set). Fig. 4 shows that SmartRunner reliably finds the global maximum with the _spmut_ moveset. The dependence on \(\overline{R}_{\rm init}\) is weak and the lower values of \(\alpha\) are preferable (Fig. 4A). The number of fitness function calls is much higher for the same total number of steps (\(10^{5}\)) as with the _nnb_ moveset (Fig. 4B,D). All 50 runs find the global maximum with optimal or nearly-optimal hyperparameter settings (Fig. 4C), and fitness trajectories quickly converge to high-quality solutions (Fig. 4E). Similar behavior is observed with Ackley and Griewank functions: lower values of \(\alpha\) work better and the number of function evaluations is several times larger compared to the _nnb_ moveset (Fig. S3). Thus, using the _nnb_ moveset is preferable for all three landscapes.

Figure 3: **SmartRunner exploration of the Rastrigin test function: _nnb_ moveset.** (A) A scan over SmartRunner hyperparameters (\(l_{\rm max}=2\)): the initial value of the expected rate of fitness gain per step \(\overline{R}_{\rm init}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) A histogram of best fitness values for the heatmap cell with \(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\). (D) A histogram of the number of unique fitness function calls for the heatmap cell with \(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\). (E) Plots of 3 representative SmartRunner trajectories (\(\overline{R}_{\rm init}=0.1\), \(\alpha=1.0\)).
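For reference, the two movesets compared above are simple to state in code; the sketch below omits the periodic wrap-around at the box boundaries, which the discretized landscapes described above would additionally require:

```python
import random

def nnb_move(x, dx):
    """nnb moveset: pick one coordinate and shift it by +dx or -dx."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] += random.choice((-dx, dx))
    return tuple(y)

def spmut_move(x, grid):
    """spmut moveset: reset one randomly chosen coordinate to an arbitrary
    value from the discretized grid (a single-point 'mutation')."""
    y = list(x)
    i = random.randrange(len(y))
    y[i] = random.choice(grid)
    return tuple(y)
```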
Next, we have asked whether the observed differences in SmartRunner performance at different hyperparameter values are statistically significant. Using the Rastrigin function as an example, we have employed one-sided Kolmogorov-Smirnov (KS) tests for the best-fitness distributions (Fig. S4). The distribution with the highest average of best-fitness values in Figs. 3A and 4A was compared with all the other distributions. We find that the differences between the distributions are not statistically significant with the _nnb_ moveset (Fig. S4A). In contrast, using \(\alpha\geq 6\) with the _spmut_ moveset leads to statistically significant degradation of SmartRunner performance (Fig. S4B). We have also used KS tests to investigate the effects of the \(l_{\max}\) hyperparameter (Fig. S5). Since for all three test functions best-fitness distributions with \(l_{\max}=3\) are not significantly better than the corresponding \(l_{\max}=2\) distributions with the same \(\alpha\) and \(\overline{R}_{\mathrm{init}}\) hyperparameter settings, we typically use \(l_{\max}=2\) as it is less expensive computationally.

Figure 4: **SmartRunner exploration of the Rastrigin test function: _spmut_ moveset.** Same as Fig. 3 (including SmartRunner settings and hyperparameter value settings in panels C-E), but with the _spmut_ moveset.

**The effects of the occupancy penalty on other global optimization algorithms.** As mentioned above, the SmartRunner algorithm can be viewed as hill climbing on a fitness landscape \(\widetilde{\mathcal{F}}\) modified with the occupancy penalties (Eq. (28)). However, the modified fitness landscape can also be explored using other empirical global optimization approaches. Here, we focus on three widely used algorithms: Simulated Annealing (SA),[13] Stochastic Hill Climbing (SHC),[28] and Evolutionary Algorithm (EA)[20, 21, 22] (see SI Methods for implementation details). SA is based on an analogy with a metallurgy technique involving heating followed by controlled cooling of a material to alter its physical properties.[13] The algorithm is implemented as a series of Metropolis Monte Carlo move trials[38] with a slowly decreasing temperature. SA's hyperparameters are the initial temperature \(T_{i}\) and the final temperature \(T_{f}\), plus the expected rate of fitness gain \(\overline{R}\) when the occupancy penalty is included. We use a linear cooling schedule in this work. SHC is a version of hill climbing which accepts downhill moves with the probability \(p=1/(1+\exp\left[(\mathcal{F}_{\text{current}}-\mathcal{F}_{\text{new}})/T\right])\).[28] Thus, \(p\simeq 0\) in the \(\mathcal{F}_{\text{new}}/T\ll\mathcal{F}_{\text{current}}/T\) limit, and \(p\simeq 1\) in the opposite limit. SHC's search strategy is controlled by the temperature \(T\), along with \(\overline{R}\) in the case of modified landscapes. Finally, EA is inspired by the process of biological evolution.[21, 22] It involves creating a population of \(N_{\text{pop}}\) 'organisms' (i.e., putative solutions; we use \(N_{\text{pop}}=50\) in this work). The population is initialized randomly and subjected to repeated rounds of recombination, mutation and selection. Besides the population size, EA's hyperparameters are the crossover (recombination) rate \(r_{x}\), the mutation rate \(\mu\) and, for modified landscapes, the expected rate of fitness gain \(\overline{R}\). The original algorithm names (SA, SHC, EA) are reserved for runs with \(\overline{R}=0\); runs with modified landscapes are referred to as 'enhanced' (ESA, ESHC, EEA).

Fig. S6 shows the performance of ESA as a function of the initial temperature \(T_{i}\) and the expected rate of fitness gain \(\overline{R}\) for our three test functions, with the _nnb_ moveset (although we have also performed a scan over the final temperature \(T_{f}\), the dependence is weak and the results are not shown). We observe that \(T_{i}\simeq 1\) values are more preferable and, as expected, are accompanied by the lower number of function evaluations.
Strikingly, the hyperparameter settings with the best average performance always have non-zero \(\overline{R}\): \(T_{i}=1.0\), \(T_{f}=0.002\), \(\overline{R}=0.1\) for the Rastrigin function (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-0.017\)). For the Ackley function, \(T_{i}=1.0\), \(T_{f}=0.001\), \(\overline{R}=0.15\) (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-2.078\)). For the Griewank function, \(T_{i}=1.0\), \(T_{f}=0.001\), \(\overline{R}=0.2\) (the corresponding \(\langle\mathcal{F}_{\text{best}}\rangle=-0.067\)). Thus, ESA outperforms SA - when using simulated annealing, the best global optimization strategy is to augment the original fitness values with the occupancy penalties. Fig. S7 shows that these observations are statistically significant. Occupancy penalties dramatically improve SA's performance when it is run with the suboptimal values of the initial temperature (Fig. 5A, Fig. S8). In fact, the results are better than those with the higher, SA-optimal values of \(T_{i}\): \(\langle\mathcal{F}_{\text{best}}\rangle=-0.005\) for Rastrigin (\(T_{i}=0.02\), \(T_{f}=0.003\), \(\overline{R}=0.2\)), \(\langle\mathcal{F}_{\text{best}}\rangle=0.000\) for Ackley (\(T_{i}=0.01\), \(T_{f}=0.001\), \(\overline{R}=0.25\)), \(\langle\mathcal{F}_{\text{best}}\rangle=-0.015\) for Griewank (\(T_{i}=0.01\), \(T_{f}=0.003\), \(\overline{R}=0.25\)). Thus, the best overall strategy is to run SA at very low temperatures (where it reduces to simple hill climbing), but on the modified fitness landscape. This is precisely the strategy implemented in SmartRunner. Qualitatively similar results are obtained with SHC: non-zero values of \(\overline{R}\) are preferable at higher, SHC-optimal values of \(T\) (Fig. S9); the effect is statistically significant (Fig. S10). However, as Fig. 5B and Fig. S11 demonstrate, the enhancement is especially dramatic when the values of \(T\) become very low, much lower than the SHC-optimal values explored in Fig. S9. Similar to SA, low-\(T\) runs with \(\overline{R}\neq 0\) yield the highest-quality solutions, again indicating that in the presence of occupancy penalties the best strategy is straightforward hill ascent. Finally, occupancy penalties can rescue EA from being stuck in the local maxima (Fig. 5C, Fig. S12A,C) - with the _nnb_ moveset, the population tends to condense onto a local maximum and become monomorphic. Local mutations of population members in such locally optimal states are mostly deleterious and therefore tend to get eliminated from the population. The population as a whole is therefore unable to keep exploring new states, as evidenced by the low number of function evaluations in Fig. S12B,D,F compared to the other algorithms. This drawback is fixed by making the fitness landscape adaptive with the help of the occupancy penalty. SmartRunner can be viewed as stochastic generalization of the Taboo Search (TS) - a deterministic policy in which all nearest neighbors of the current state are explored one by one and the move to the neighbor state with the best fitness is accepted [27]. To prevent backtracking to already-explored states, a list of 'taboo' states is kept to which jumps are forbidden; the length of this list, \(L_{\rm tabu}\), is a hyperparameter. 
By construction, TS avoids visiting neighbor states more than once and is always guaranteed to find the best neighboring state to jump into; however, we expect it to lose efficiency in systems characterized by very large numbers of neighbors, since all of these neighbors have to be tried and most of them do not correspond to good solutions. In contrast, SmartRunner can make a decision to accept a move before all neighbors are explored, based on the move/neighbor statistics collected up to that point. In any event, with \(L_{\rm tabu}\geq 400\) TS demonstrates high performance on all three test functions, requiring a relatively low number of function evaluations to achieve this result (Fig. S13).

Figure 5: **Occupancy penalties enhance performance of global optimization algorithms: _nnb_ moveset.** (A) A scan over ESA hyperparameters (linear cooling schedule) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the initial temperature \(T_{i}\) (the final temperature is set to \(T_{f}=0.001\)). See Fig. S8 for details and for the Ackley and Griewank functions. (B) A scan over ESHC hyperparameters for the Ackley function: the expected rate of fitness gain per step \(\overline{R}\) and the temperature \(T\). See Fig. S11 for details and for the Rastrigin and Griewank functions. (C) A scan over EEA hyperparameters for the Griewank function: the expected rate of fitness gain per step \(\overline{R}\) and the mutation rate \(\mu\) (the crossover rate is set to \(r_{x}=0.2\) and the population size to \(N_{\rm pop}=50\)). See Fig. S12 for details and for the Rastrigin and Ackley functions. Each heatmap cell represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=10^{5}\) steps each and randomly chosen starting states.

Interestingly, the situation is reversed with the _spmut_ moveset - with SA and SHC, better performance is achieved when \(\overline{R}=0\) (Fig. S14). This observation is not surprising given the somewhat special nature of the test functions we consider. As it turns out, with Rastrigin and Ackley functions it is possible to use TS to reach the global maximum in exactly 4 steps, regardless of the initial state. Each step sets one of the coordinates to 0.0 until the global maximum is attained (see Fig. S15A,B for representative trajectories). With the Griewank function, optimization depends on the initial conditions and, as a rule, additional steps are required since the first 4 steps only bring the system to the vicinity of the global maximum (Fig. S15C). Thus, with this landscape structure it is not beneficial to jump to a new node before all neighbors of the current node are explored. In other words, premature jumping between nodes simply resets the search. In this case, \(\overline{R}=0\) is indeed preferable and correctly identified by our methods; however, this is a special case which we do not expect to hold true in general.

**Comparison of global optimization algorithms.** Different global optimization algorithms use different notions of a single step. While SA, SHC and SmartRunner define a single stochastic trial as an elementary step, a TS step involves querying all nearest neighbors, and an EA step involves rebuilding a population subjected to crossover and mutation.
To ensure a fair comparison, we have allocated a fixed number of _novel_ fitness function evaluations to each algorithm and observed the resulting performance (SA had to be left out because its performance depends on the cooling schedule, such that stopping SA at \(T>T_{f}\) puts it at an unfair disadvantage). We note that with the _nnb_ moveset, SmartRunner consistently shows the best performance (Fig. 6). As expected, the worst performance with this moveset is exhibited by EA as it is unable to utilize more and more function evaluations to find states with better fitness - in fact, EA often uses fewer function evaluations than was allocated to it, terminating instead when the maximum number of steps is exceeded.

**SmartRunner tests on SK spin glass and Kauffman's NK models: quenched disorder.** Next, we have turned to two challenging discrete-state systems with complex fitness landscapes. One is the Sherrington-Kirkpatrick (SK) spin glass model[12], with \(N\) spins \(s_{i}=\pm 1\) coupled by random interactions that are independently sampled from the standard Gaussian distribution (SI Methods). The other is Kauffman's NK model used in evolutionary theory[39, 40], in which each of the \(N\) binary \((0,1)\) sites interacts with \(0\leq K\leq N-1\) other sites chosen by random sampling. The fitness function for a given binary sequence is a sum over \(N\) single-site contributions; each single-site contribution is obtained by sampling from the standard uniform distribution (SI Methods). The model parameter \(K\) serves to tune the degree of landscape ruggedness: the number of local maxima increases rapidly as \(K\) goes up. In both systems, the moveset consists of changing the binary state at a single site. Thus, each of the \(2^{N}\) states has \(N\) nearest neighbors.

First, we focus on systems with quenched disorder, where random parameters of the system are generated only once and subsequently used in all comparisons of global optimization algorithms. We have carried out a SmartRunner hyperparameter search for the SK model with \(N=200\) spins (Fig. S16). We find that among all of the values tried, \(\alpha=0.1\) is clearly preferable (Fig. S16A,C), with a statistically significant improvement in performance (Fig. S16E). On the other hand, the dependence on \(\overline{R}_{\rm init}\) is very weak. As expected, the number of function evaluations increases with \(\alpha\) as novel states are explored more frequently (Fig. S16B,D). The same conclusions are reached with the NK model with \(N=200\) sites and \(K=8\) couplings per site, with \(\alpha=0.1\) identified again as the optimal value (Fig. S17). We have also explored the \(\alpha\) settings in models with 200, 500, and 1000 spins/sites (Fig. S18). While \(\alpha=0.1\) is confirmed as the optimal choice for \(N=200\) models, \(\alpha=0.01\) is preferable for \(N=500,1000\).

Finally, we carry out a side-by-side comparison of the performance of all 5 algorithms: SR, TS, SA, SHC, and EA on the SK models (Table 1) and the NK models (Table 2). To mimic a realistic situation in which computer resources are a limiting factor and the fitness landscapes are exceedingly large, we have chosen a single set of hyperparameter settings for each algorithm. Thus, SR was run with \(\alpha=0.01\), even though the above analysis shows that \(\alpha=0.1\) is in fact a better choice for \(N=200\). The only exception to this rule is SHC, where we carried out a mini-scan over the values of \(T\) to optimize performance.
All algorithms except for SmartRunner were run on the original landscapes without occupancy penalties. For SA, \(T_{f}\simeq 0\) should be reasonable, while \(T_{i}=1.0\) is dictated by the overall scale of the landscape. For EA, a 3D scan over \(\mu\), \(r_{x}\), \(N_{\rm pop}\) is not feasible, so that we had to settle for 'typical' values. Thus, more complex algorithms with several hyperparameters are implicitly penalized, as they are likely to be in a realistic research setting. We find that SmartRunner ranks the highest overall in this competition. For the SK models, it is in the second place for \(N=200\) and the first place for \(N=500,1000\) if judged by the average of all solutions (Table 1). If judged by the globally best solution, the SmartRunner shares the first place with SHC for \(N=200\) and again takes the first place for \(N=500,1000\). Similar results are seen with the NK model (Table 2): by both the average and the globally best measures, SmartRunner is second for the \(N=200\) model and first for the larger models with \(500\) and \(1000\) sites. The somewhat weaker performance of SmartRunner on the \(N=200\) systems could be improved by switching to \(\alpha=0.1\) (Figs. S16,S17). However, this would give SmartRunner an unfair advantage in the context of this competition, in which every algorithm was run with a single reasonable set of hyperparameters.

Figure 6: **Comparison of the algorithms conditioned on the number of unique fitness function calls: _nnb_ moveset.** The maximum number of allowed function calls was set to \(\{6250,12500,25000,37500,50000\}\) for all algorithms. In panels A-C we show the best fitness values found in each run averaged over 100 independent runs with randomly chosen starting states. (A) Rastrigin function. (B) Ackley function. (C) Griewank function. (D) Same as C but for the globally best fitness values obtained over all 100 runs instead of the averages. EA – Evolutionary Algorithm (\(\mu=0.1\), \(r_{x}=0.1\), \(N_{\rm pop}=50\)), SHC – Stochastic Hill Climbing (\(T=0.5\)), SR – SmartRunner (\(l_{\rm max}=2\), \(\overline{R}^{\rm init}=0.1\), \(\alpha=1.0\) for Rastrigin and Ackley, \(\alpha=10.0\) for Griewank), TS – Taboo Search (\(L_{\rm tabu}=500\)).

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \multicolumn{6}{c}{\(\langle\mathcal{F}_{best}\rangle\pm\sigma_{\mathcal{F}_{best}}\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.715\pm 0.012\) & \(0.711\pm 0.009\) & \(\mathbf{0.723\pm 0.003}\) & \(0.719\pm 0.009\) & \(0.688\pm 0.020\) \\ \hline **500** & \(\mathbf{0.746\pm 0.008}\) & \(0.722\pm 0.009\) & \(0.710\pm 0.010\) & \(0.735\pm 0.008\) & \(0.668\pm 0.010\) \\ \hline **1000** & \(\mathbf{0.733\pm 0.005}\) & \(0.684\pm 0.010\) & \(0.492\pm 0.015\) & \(0.557\pm 0.006\) & \(0.598\pm 0.009\) \\ \hline \multicolumn{6}{c}{\(\max(\mathcal{F}_{best})\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.727\) & \(0.725\) & \(\mathbf{0.727}\) & \(\mathbf{0.727}\) & \(0.717\) \\ \hline **500** & \(\mathbf{0.757}\) & \(0.734\) & \(0.729\) & \(0.744\) & \(0.684\) \\ \hline **1000** & \(\mathbf{0.740}\) & \(0.704\) & \(0.517\) & \(0.566\) & \(0.614\) \\ \hline \end{tabular}
\end{table}
Table 1: **Comparison of the algorithm performance on the SK model.** \(\langle\mathcal{F}_{best}\rangle\) is the average of the best-fitness values found in each run, averaged over 10 independent runs with randomly chosen starting states; \(\sigma_{\mathcal{F}_{best}}\) is the corresponding standard deviation; \(\max(\mathcal{F}_{best})\) is the largest of the best-fitness values; \(\mathrm{N_{s}}\) is the number of spins. SR – SmartRunner (\(l_{\max}=2\), \(\overline{R}^{\mathrm{init}}=0.01\), \(\alpha=0.01\)), TS – Taboo Search (\(L_{\mathrm{tabu}}=5000\)), SA – Simulated Annealing (\(T_{i}=0.01\), \(T_{f}=0.001\), linear cooling schedule), SHC – Stochastic Hill Climbing (\(T=10^{-3}\)), EA – Evolutionary Algorithm (\(\mu=0.2\), \(r_{x}=0.5\), \(N_{\mathrm{pop}}=100\)). In SR, SA and SHC the total number of steps \(l_{\mathrm{tot}}=1.5\times 10^{6},10^{6},5\times 10^{5}\) for the models with 200, 500 and 1000 spins, respectively. In TS, the total number of steps is rescaled by the number of nearest neighbors (\(l_{\mathrm{tot}}=7.5\times 10^{3},2\times 10^{3},5\times 10^{2}\)); in EA, the total number of steps is rescaled by the population size (\(l_{\mathrm{tot}}=1.5\times 10^{4},10^{4},5\times 10^{3}\)). The best result in each row is highlighted in boldface. For consistency, all runs employed a single random realization of the SK model (quenched disorder).

\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \multicolumn{6}{c}{\(\langle\mathcal{F}_{best}\rangle\pm\sigma_{\mathcal{F}_{best}}\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.778\pm 0.005\) & \(0.772\pm 0.005\) & \(0.771\pm 0.005\) & \(\mathbf{0.785\pm 0.005}\) & \(0.751\pm 0.008\) \\ \hline **500** & \(\mathbf{0.776\pm 0.003}\) & \(0.748\pm 0.007\) & \(0.675\pm 0.005\) & \(0.702\pm 0.003\) & \(0.727\pm 0.007\) \\ \hline **1000** & \(\mathbf{0.770\pm 0.001}\) & \(0.732\pm 0.004\) & \(0.601\pm 0.004\) & \(0.616\pm 0.002\) & \(0.700\pm 0.005\) \\ \hline \end{tabular}
\begin{tabular}{|l|l|l|l|l|l|} \multicolumn{6}{c}{\(\max(\mathcal{F}_{best})\)} \\ \hline \(\mathbf{N_{s}}\) & **SR** & **TS** & **SA** & **SHC** & **EA** \\ \hline **200** & \(0.789\) & \(0.781\) & \(0.782\) & \(\mathbf{0.795}\) & \(0.761\) \\ \hline **500** & \(\mathbf{0.784}\) & \(0.758\) & \(0.683\) & \(0.706\) & \(0.739\) \\ \hline **1000** & \(\mathbf{0.772}\) & \(0.738\) & \(0.606\) & \(0.620\) & \(0.708\) \\ \hline \end{tabular}
\end{table}
Table 2: **Comparison of the algorithm performance on the NK model.** \(\mathrm{N_{s}}\) is the number of sites (each site has 8 randomly chosen intra-sequence couplings per site); all other quantities and parameter settings are as in Table 1. For consistency, all runs employed a single random realization of the NK model (quenched disorder).

**Prediction of the SK ground state energies averaged over disorder.** Next, we have investigated the ability of SmartRunner to reproduce finite-size corrections to average ground state energies of the SK model (Fig. 7). The ground state energy per spin averaged over random spin couplings is known theoretically to be \(-0.7633\) in the \(N\rightarrow\infty\) limit of the SK model,[42] with the \(2/3\) scaling exponent for finite-size corrections (i.e., \(\langle E_{best}(N)\rangle\sim N^{-2/3}\)) available from both theoretical[43] and numerical[41] investigations. This provides a baseline against which SmartRunner's ability to find the global minima of the SK energy can be judged.

Figure 7: **Finite-size corrections to the ground state energy per spin in the SK model.** Cyan dots: average of the best-energy values found by SmartRunner on SK landscapes with \(N=50,100,150,200,250,300,350,400\) spins. For each value of \(N\), \(18\) to \(23\) independent runs with \(l_{\rm max}=2,\overline{R}^{\rm init}=0.01\), \(\alpha=0.1\) were carried out, each one with randomly generated spin couplings and starting from a random spin configuration. The total number of steps \(l_{\rm tot}\) ranged from \(4.5\times 10^{6}\) to \(3.0\times 10^{7}\) depending on \(N\). Error bars represent the errors of the mean. Blue dots: numerical results for the finite-size corrections to the SK ground state energy reported by S. Boettcher using the Extremal Optimization algorithm (Table 1 in Ref.[41]). Dashed green line: a linear fit to Boettcher’s ground state energies yielding \(\langle E_{best}(N)\rangle=\langle E_{best}(\infty)\rangle+mN^{-2/3}\), where \(m=0.7047\) is the slope and \(\langle E_{best}(\infty)\rangle=-0.7633\) is the asymptotic Parisi energy for the infinite system.[42] All energies are divided by the number of spins \(N\) to produce intensive quantities.

We find that the average ground-state energy per spin predicted by SmartRunner is reasonably close to the expected straight line in Fig. 7, although there are statistically significant deviations for the three largest systems (\(N=300,350,400\)), indicating that SmartRunner does not quite reach
We find that the average ground-state energy per spin predicted by SmartRunner is reasonably close to the expected straight line in Fig. 7, although there are statistically significant deviations for the three largest systems (\(N=300,350,400\)), indicating that SmartRunner does not quite reach Figure 7: **Finite-size corrections to the ground state energy per spin in the SK model.** Cyan dots: average of the best-energy values found by SmartRunner on SK landscapes with \(N=50,100,150,200,250,300,350,400\) spins. For each value of \(N\), \(18\) to \(23\) independent runs with \(l_{\rm max}=2,\overline{R}^{\rm init}=0.01\), \(\alpha=0.1\) were carried out, each one with randomly generated spin couplings and starting from a random spin configuration. The total number of steps \(l_{\rm tot}\) ranged from \(4.5\times 10^{6}\) to \(3.0\times 10^{7}\) depending on \(N\). Error bars represent the errors of the mean. Blue dots: numerical results for the finite-size corrections to the SK ground state energy reported by S. Boettcher using the Extremal Optimization algorithm (Table 1 in Ref.[41]). Dashed green line: a linear fit to Boettcher’s ground state energies yielding \(\langle E_{best}(N)\rangle=\langle E_{best}(\infty)\rangle+mN^{-2/3}\), where \(m=0.7047\) is the slope and \(\langle E_{best}(\infty)\rangle=-0.7633\) is the asymptotic Parisi energy for the infinite system.[42] All energies are divided by the number of spins \(N\) to produce intensive quantities. the true ground states in these cases. Overall, SmartRunner's performance on these systems is less reliable than that of Extremal Optimization, a heuristic algorithm specifically adapted to the SK model and requiring a simplified probabilistic model for spin couplings [44, 41]. ## Discussion and Conclusion In this work, we have developed a novel approach to global optimization called SmartRunner. Instead of relying on qualitative similarities with physical, chemical or biological systems, SmartRunner employs an explicit probabilistic model for accepting or rejecting a move on the basis of the immediate previous history of the optimization process. The key quantity guiding SmartRunner decisions is \(p_{f}\), the probability of finding a higher-fitness target in the next random trial. This probability has nearly universal asymptotics and can be effectively represented by a function that depends only on \(n\), the number of previously rejected attempts to change the current state of the system. In other words, the dependence of SmartRunner's behavior on such details of the systems as the number of nearest neighbors and the transition rates is fairly weak, making our approach applicable to a wide range of objective functions and movesets. Overall, SmartRunner can be viewed as an adaptive search policy designed to maximize fitness gain per step. Interestingly, SmartRunner's global optimization policy amounts to hill ascent on a fitness landscape modified with an easily computed adaptive occupancy penalty. The occupancy penalty makes rejecting moves less and less favorable as the number of unsuccessful attempts to change the current state grows. Ultimately, one of the nearest neighbors is accepted even if the step is deleterious on the original fitness landscape. This behavior allows SmartRunner to climb out of local basins of attraction (Fig. 2). In principle, the adaptive fitness landscape given by Eq. (28) can be explored using any global optimization algorithm. 
We have tested SmartRunner's performance on a standard set of functions routinely used to evaluate the performance of global optimization algorithms [29]. These 4D functions are characterized by numerous local maxima that make it challenging to find a single global maximum. We find that SmartRunner exhibits the highest fitness gain per novel fitness function evaluation compared to three other state-of-the-art gradient-free algorithms (Fig. 6). This is especially important in situations where fitness function calls are computationally expensive. Interestingly, when adaptive fitness landscapes were given as input to other global optimization algorithms, the best results were obtained when the other algorithms' policy for accepting and rejecting moves closely resembled the SmartRunner policy of hill climbing on the modified fitness landscape (Fig. 5). For example, with simulated annealing the globally best strategy was to set the initial temperature to a very low value, essentially reducing simulated annealing to hill ascent. Finally, we observe that the SmartRunner approach is flexible enough to adapt to substantial changes in the moveset, from \(\mathcal{O}(10^{0})\) local moves to \(\mathcal{O}(10^{2}-10^{3})\) random updates of a single randomly chosen coordinate (Figs. 3,4). We have also tested SmartRunner on two challenging models with long-range couplings and multiple local minima or maxima: the Sherrington-Kirkpatrick spin glass model [12] and the Kauffman's NK model of fitness [39, 40]. In systems with quenched disorder, SmartRunner performs very well compared with four other general-purpose global optimization algorithms (Tables 1,2). It is also fairly reliable in locating ground-state energies averaged over disorder in the SK model, although the results are inferior to those obtained by Extremal Optimization, a heuristic algorithm specifically adapted to finding the ground states in the SK model [41, 44] (Fig. 7). In summary, SmartRunner implements a novel global optimization paradigm which offers a viable alternative to current algorithms. The SmartRunner approach described here works on discrete or discretized fitness landscapes and does not make use of the gradient of the objective function in implementing its stochastic policy. In the future, we intend to adapt SmartRunner to carry out global optimization on continuous landscapes where the gradient of the objective function can be computed efficiently. Such optimization will be of great interest in modern machine learning. For example, training artificial neural networks relies on the differentiability of objective functions and optimization methods based on stochastic gradient descent [6, 7], which may get trapped in local minima. ## Software Availability The Python3 code implementing SmartRunner and four other gradient-free global optimization algorithms discussed here is available at [https://github.com/morozov22/SmartRunner](https://github.com/morozov22/SmartRunner). ## Acknowledgements We gratefully acknowledge illuminating discussions with Stefan Boettcher. JY and AVM were supported by a grant from the National Science Foundation (NSF MCB1920914). ## References * [1] Onuchic, J. N. and Wolynes, P. G. (2004) _Curr. Op. Struct. Biol._**14**, 70-75. * [2] Dill, K. A., Ozkan, S. B., Shell, M. S., and Weikl, T. R. (2008) _Ann. Rev. Biophys._**37**, 289-316. * [3] Crow, J. F. and Kimura, M. (1970) An Introduction to Population Genetics Theory, Harper and Row, New York. * [4] Kimura, M. 
(1983) The Neutral Theory of Molecular Evolution, Cambridge University Press, Cambridge, UK. * [5] Gillespie, J. (2004) Population Genetics: A Concise Guide, The Johns Hopkins University Press, Baltimore, USA. * [6] Goodfellow, I., Bengio, Y., and Courville, A. (2016) Deep Learning, MIT Press, Cambridge, MA. * [7] Mehta, P., Bukov, M., Wang, C.-H., Day, A. G., Richardson, C., Fisher, C. K., and Schwab, D. J. (2019) _Physics Reports_**810**, 1-124. * [8] Zwanzig, R., Szabo, A., and Bagchi, B. (1992) _Proc. Natl. Acad. Sci. USA_**89**, 20-22. * [9] Bryngelson, J., Onuchic, J., Socci, N., and Wolynes, P. (1995) _Proteins: Struc. Func. Genet._**21**, 167-195. * [10] Dill, K. A. and Chan, H. (1997) _Nat. Struct. Mol. Biol._**4**, 10-19. * [11] Danilova, M., Dvurechensky, P., Gasnikov, A., Gorbunov, E., Guminov, S., Kamzolov, D., and Shibaev, I. Recent Theoretical Advances in Non-Convex Optimization pp. 79-163 Springer International Publishing Cham, Switzerland (2022). * [12] Sherrington, D. and Kirkpatrick, S. (1975) _Phys. Rev. Lett._**35**, 1792-1796. * [13] Kirkpatrick, S., Gelatt, Jr., C., and Vecchi, M. (1983) _Science_**220**, 671-680. * [14] Cohn, H. and Fielding, M. (1999) _SIAM J. Optim._**9**, 779-802. * [15] Hukushima, K. and Nemoto, K. (1996) _J. Phys. Soc. Jpn._**65**, 1604-1608. * [16] Swendsen, R. H. and Wang, J.-S. (1986) _Phys. Rev. Lett._**57**, 2607-2609. * [17] Wang, W., Machta, J., and Katzgraber, H. G. (2015) _Phys. Rev. E_**92**, 063307. * [18] Marinari, E. and Parisi, G. (1992) _Europhys. Lett._**19**, 451-458. * [19] Wang, W., Machta, J., and Katzgraber, H. G. (2015) _Phys. Rev. E_**92**, 013303. * [20] Goldberg, D. (1989) Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, MA. * [21] Vikhar, P. A. (2016) In 2016 International Conference on Global Trends in Signal Processing, Information Computing and Communication (ICGTSPICC) : pp. 261-265. * [22] Slowik, A. and Kwasnicka, H. (2020) _Neur. Comp. Appl._**32**, 12363-12379. * [23] Geem, Z. W., Kim, J. H., and Loganathan, G. V. (2001) _Simulation_**76(2)**, 60-68. * [24] Lee, K. S. and Geem, Z. W. (2005) _Comp. Meth. Appl. Mech. Eng._**194**, 3902-3933. * [25] Kennedy, J. and Eberhart, R. (1995) _Proc. IEEE Intern. Conf. Neur. Netw._**4**, 1942-1948. * [26] Eberhart, R. and Kennedy, J. (1995) In MHS'95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science : pp. 39-43. * [27] Cvijovic, D. and Klinowski, J. (1995) _Science_**267**, 664-666. * [28] Juels, A. and Wattenberg, M. (1995) In D. Touretzky, M.C. Mozer, and M. Hasselmo, (ed.), Advances in Neural Information Processing Systems, volume **8**, Cambridge, MA: MIT Press. pp. 430-436. * [29] Torn, A. and Zilinskas, A. (1989) Global Optimization, Springer-Verlag, Berlin, Germany. * [30] Berg, B. (1993) _Nature_**361**, 708-710. * [31] Hesselbo, B. and Stinchcombe, R. (1995) _Phys. Rev. Lett._**74**, 2151-2155. * [32] Dittes, F.-M. (1996) _Phys. Rev. Lett._**76**, 4651-4655. * [33] Barhen, J., Protopopescu, V., and Reister, D. (1997) _Science_**276**, 1094-1097. * [34] Wenzel, W. and Hamacher, K. (1999) _Phys. Rev. Lett._**82**, 3003-3007. * [35] Hamacher, K. (2006) _Europhys. Lett._**74**, 944-950. * [36] Bishop, C. M. (2006) Pattern Recognition and Machine Learning, Springer, New York, NY. * [37] Kion-Crosby, W. B. and Morozov, A. V. (2018) _Phys Rev Lett_**121**, 038301. * [38] Metropolis, N., Rosenbluth, A., Rosenbluth, M., Teller, A., and Teller, E. (1953) _J. Chem. Phys._**21**, 1087-1092. 
* [39] Kauffman, S. A. and Weinberger, E. D. (1989) _J. Theor. Biol._**141**, 211-245.
* [40] Kauffman, S. (1993) The Origins of Order: Self-Organization and Selection in Evolution, Oxford University Press, New York.
* [41] Boettcher, S. (2005) _Eur. Phys. J. B_**46**, 501-505.
* [42] Parisi, G. (1980) _J. Phys. A: Math. Gen._**13**, L115-L121.
* [43] Parisi, G., Ritort, F., and Slanina, F. (1993) _J. Phys. A: Math. Gen._**26**, 3775-3789.
* [44] Boettcher, S. (2010) _J. Stat. Mech._**2010**, P07002.

# Supplementary Materials: An adaptive Bayesian approach to gradient-free global optimization

Jianneng Yu\({}^{1,2}\) and Alexandre V. Morozov\({}^{1,2}\)

\({}^{1}\) Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, USA
\({}^{2}\) Center for Quantitative Biology, Rutgers University, Piscataway, NJ 08854, USA

Corresponding author: morozov@physics.rutgers.edu

## Supplementary Methods

### Standard test functions

We employ 3 standard test functions [1], including the 4D Rastrigin function: \[\mathcal{F}(\vec{\mathbf{x}})=-4-\sum_{i=1}^{4}\big(x_{i}^{2}-\cos(18x_{i})\big),\ x_{i}\in[-5,5],\forall i, \tag{1}\] the 4D Ackley function: \[\mathcal{F}(\vec{\mathbf{x}})=-20-e+20\exp\Big(-0.2\sqrt{\frac{1}{4}\sum_{i=1}^{4}x_{i}^{2}}\Big)+\exp\Big(\frac{1}{4}\sum_{i=1}^{4}\cos(2\pi x_{i})\Big),\ x_{i}\in[-32.8,32.8],\forall i, \tag{2}\] and the 4D Griewank function: \[\mathcal{F}(\vec{\mathbf{x}})=-1-\frac{1}{4000}\sum_{i=1}^{4}x_{i}^{2}+\prod_{i=1}^{4}\cos\big(\frac{x_{i}}{\sqrt{i}}\big),\ x_{i}\in[-600,600],\forall i \tag{3}\] All three functions have multiple local maxima and a unique global maximum located at \(\vec{\mathbf{x}}=\vec{\mathbf{0}}\) (\(\mathcal{F}(\vec{\mathbf{0}})=0\)). The fitness landscapes are discretized, with periodic boundary conditions and \(\Delta x=(0.05,0.2,1.0)\) steps in all four directions for Rastrigin, Ackley and Griewank functions, respectively.

### Sherrington-Kirkpatrick spin glass model

Consider a system of \(N\) spins which can point either up or down: \(s_{i}=\pm 1\), \(i=1\dots N\). The Sherrington-Kirkpatrick (SK) model is defined by the Hamiltonian [2]: \[\mathcal{H}_{\text{SK}}(s)=-\frac{1}{\sqrt{N}}\sum_{1\leq i<j\leq N}J_{ij}s_{i}s_{j}, \tag{4}\] where \(s=(s_{1},s_{2},\dots,s_{N})\) and \(J_{ij}\) are random spin couplings independently sampled from a standard Gaussian distribution: \[P(J_{ij})=\frac{1}{\sqrt{2\pi}}e^{-\frac{J_{ij}^{2}}{2}}. \tag{5}\] The SK model is characterized by a large number of local minima and therefore presents a challenging problem to global optimization algorithms. The ground state energy per spin of the SK model (the global minimum) asymptotically approaches \(\mathcal{H}_{\text{SK}}/N\to-0.7633\) in the \(N\to\infty\) limit [3]. All SK energies are divided by the number of spins \(N\) to produce intensive quantities.

### Kauffman's NK model

In Kauffman's NK model [4, 5], each of the \(N\) sites in the gene (or genes in the genome) interacts with \(K\) other sites chosen by random sampling. The fitness of the genotype \(s=(s_{1},s_{2},\dots,s_{N})\) (\(s_{i}=\{0,1\}\), \(i=1\dots N\)) is given by \[\mathcal{F}_{\text{NK}}(s)=\sum_{\mu=1}^{N}\mathcal{F}_{\mu}(s_{\mu},s_{n_{1}(\mu)},\dots,s_{n_{K}(\mu)}), \tag{6}\] where \(n_{1}(\mu),\ldots,n_{K}(\mu)\) are \(K\) randomly-chosen interaction partners of the site \(s_{\mu}\).
The single-site fitnesses \({\cal F}_{\mu}\) are obtained by sampling from a standard uniform distribution (\(U(0,1)\)); each combination of the \(2^{K+1}\) possible states of the argument corresponds to an independent sample from this distribution. When \(K=0\), the NK landscape becomes fully additive. Because in this limit the landscape is smooth and has a single maximum, it is sometimes called the "Mount Fuji" model.[6] The amount of landscape ruggedness can be tuned by increasing \(K\) to the maximum value of \(N-1\). With \(K=N-1\), the fitnesses of different sequences are uncorrelated; this model is called the "House of Cards"[7] due to the unpredictable fitness effects of mutations. Numerous results are available for the statistical mechanics of NK landscapes.[8, 9, 10, 11, 4] In particular, in the \(K=N-1\) limit the average number of local maxima is \(2^{L}/(L+1)\) for the binary alphabet considered here.[12] As with the SK model, we use NK models with quenched disorder - for given values of \(N\) and \(K\), all interaction partners and single-site fitnesses are generated once and then used in all subsequent comparisons of global optimization algorithms. All fitnesses are divided by the number of sites \(N\) to produce intensive quantities. ### Alternative global optimization algorithms We have implemented four alternative global optimization algorithms: Simulated Annealing (SA), Stochastic Hill Climbing (SHC), Evolutionary Algorithm (EA), and Taboo Search (TS) using Solid, a gradient-free Python optimization package ([https://github.com/100/Solid](https://github.com/100/Solid)). The modified Solid code for these four algorithms, implementing fitness functions augmented with the occupancy penalties, is available as part of the SmartRunner GitHub distribution package ([https://github.com/morozov22/SmartRunner](https://github.com/morozov22/SmartRunner)). SA, SHC and TS search strategies were left exactly as implemented in Solid, while in EA fitnesses of all population members were made positive by subtracting the minimum fitness in the current population; these fitnesses were then used to compute selection probabilities. Apart from this change, Solid's EA crossover and mutation strategies remained the same as in the original package. ## Supplementary Figures Figure S2: **SmartRunner exploration of the Ackley and Griewank test functions: _nnb_ moveset.** (A) A scan over SmartRunner hyperparameters (\(l_{\text{max}}=2\), \(nnb\) moveset) for the Ackley function: the initial value of the expected rate of fitness gain per step \(\overline{R}_{\text{init}}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) Same as A but for the Griewank function. (D) Same as B but for the Griewank function. (E) Same as C but with the globally best fitness value obtained over all 50 runs shown instead of the average. (F) Same as D but with the number of fitness function evaluations corresponding to the globally best run from E shown instead of the average over 50 runs. Figure S3: **SmartRunner exploration of the Ackley and Griewank test functions: _spmut_ moveset.** Same as Fig. S2A-D (including SmartRunner settings) but with the _spmut_ moveset. 
Figure S4: **Kolmogorov-Smirnov (KS) p-value analysis of the SmartRunner hyperparameter settings: Rastrigin test function.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent SmartRunner runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. SmartRunner was used with the \(nnb\) moveset. (B) Same as A but with the \(spmut\) moveset. Figure S5: **Kolmogorov-Smirnov (KS) p-value analysis of the SmartRunner \(l_{\max}\) hyperparameter setting.** (A) Rastrigin test function. One-sided KS tests were used to investigate the significance of the differences between distributions of best-fitness values yielded by SmartRunner runs with \(l_{\max}=3\) and \(l_{\max}=2\). Each KS test compared two best-fitness distributions with the same values of \(\alpha\) and \(\bar{R}_{\mathrm{init}}\). Low p-values (\(<0.05\)) indicate that \(l_{\max}=3\) runs have yielded a significantly better distribution of best-fitness values than the runs with \(l_{\max}=2\). High p-values (\(\geq 0.05\)) indicate that \(l_{\max}=3\) runs have yielded a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution with \(l_{\max}=2\). (B) Same as A but for the Ackley test function. (C) Same as A but for the Griewank test function. All runs employed the \(nnb\) moveset. Figure S6: **Enhanced simulated annealing (ESA) hyperparameter scan:**_nnb_ **moveset.** (A) A scan over ESA hyperparameters (\(nnb\) moveset, linear cooling schedule) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the initial temperature \(T_{i}\) (the final temperature is set to \(T_{f}=0.001\) in all plots). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each ESA run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S7: **Kolmogorov-Smirnov (KS) p-value analysis of the ESA hyperparameter settings: _nnb_ moveset.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent ESA runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. ESA was run on the Rastrigin function with the \(nnb\) moveset and a linear cooling schedule (Fig. S6A). (B) Same as A but for the Ackley function (Fig. S6C). 
(C) Same as A but for the Griewank function (Fig. S6E). Figure S8: **Enhanced simulated annealing (ESA) hyperparameter scan: low \(T_{i}\), _nnb_, **moveset.** Same as Fig. S6 but with suboptimally low initial temperatures \(T_{i}\) employed with a linear cooling schedule. Figure S9: **Enhanced stochastic hill climbing (ESHC) hyperparameter scan: _nnb_** **moveset.** (A) A scan over ESHC hyperparameters (\(nnb\) moveset) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the temperature \(T\). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\text{tot}}=10^{5}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each ESHC run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S10: **Kolmogorov-Smirnov (KS) p-value analysis of the ESHC hyperparameter settings:**_nnb_ **moveset.** (A) One-sided KS tests were used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values (over 50 independent ESHC runs) vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell. ESHC was run on the Rastrigin function with the \(nnb\) moveset (Fig. S9A). (B) Same as A but for the Ackley function (Fig. S9C). (C) Same as A but for the Griewank function (Fig. S9E). Figure S11: **Enhanced stochastic hill climbing (ESHC) hyperparameter scan: low \(T\), \(nnb\) moveset.** Same as Fig. S9 but with suboptimally low temperatures \(T\). Figure S12: **Enhanced evolutionary algorithm (EEA) hyperparameter scan: _nnb_moveset.** (A) A scan over EEA hyperparameters (\(nnb\) moveset) for the Rastrigin function: the expected rate of fitness gain per step \(\overline{R}\) and the mutation rate \(\mu\) (the crossover rate is set to \(r_{x}=0.2\) and the population size to \(N_{\rm pop}=50\) in all plots). Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=2000\) steps each and randomly chosen starting states (each EEA step involves evaluating fitness functions for all 50 population members). (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each EEA run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function. Figure S13: **Taboo search (TS) hyperparameter scan: _nnb_ moveset. (A) A scan over the length of the taboo list \(L_{\rm tabu}\) (\(nnb\) moveset) for the Rastrigin function. Each cell in the heatmap represents best fitness values found in each run, averaged over 50 independent runs with \(l_{\rm tot}=12500\) steps each and randomly chosen starting states (each TS step involves evaluating fitness functions for all 8 nearest neighbors of the current node). 
(B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each TS run. (C) Same as A but for the Ackley function. (D) Same as B but for the Ackley function. (E) Same as A but for the Griewank function. (F) Same as B but for the Griewank function.

Figure S15: **Representative TS fitness trajectories: _spmut_ moveset.** Shown is the \(L_{2}\) distance \(D\) between the current 4D state and the global maximum as a function of the number of steps \(\ell\) for 3 randomly chosen TS trajectories. TS was run with \(L_{\rm tabu}=100\). (A) Rastrigin function. (B) Ackley function. (C) Griewank function.

Figure S16: **SmartRunner (SR) hyperparameter scan: SK model with \(\mathbf{N=200}\) spins.** (A) A scan over SmartRunner hyperparameters (\(l_{\max}=2\)): the initial value of the expected rate of fitness gain per step \(\overline{R}_{\text{init}}\) and the level of optimism \(\alpha\). Each cell in the heatmap represents the best fitness values found in each run, averaged over 100 independent runs with \(l_{\text{tot}}=1.5\times 10^{6}\) steps each and randomly chosen starting states. (B) Same as A but with the average taken over the number of fitness function evaluations (unique fitness function calls) in each run. (C) Same as A but with the globally best fitness value obtained over all 100 runs shown instead of the average. (D) Same as B but with the number of fitness function evaluations corresponding to the globally best run from C shown instead of the average over 100 runs. (E) One-sided KS tests used to investigate the significance of the differences between two distributions of best-fitness values: the heatmap cell with the highest average of best-fitness values in panel A vs. every other cell. Low p-values (\(<0.05\)) indicate that the cell with the highest average has a significantly better distribution of best-fitness values than the other cell. High p-values (\(\geq 0.05\)) indicate that the cell with the highest average has a distribution of best-fitness values that is either indistinguishable from, or worse than the distribution in the other cell.

Figure S17: **SmartRunner (SR) hyperparameter scan: NK model with \(\mathbf{N=200}\) sites and \(\mathbf{K=8}\) nearest neighbors.** Same as Fig. S16 but for the NK model.

Figure S18: **SmartRunner (SR) hyperparameter scan: SK and NK models.** The SK models have \(N=200\), \(500\) and \(1000\) spins, while the NK models have \(N=200\), \(500\) and \(1000\) sites, with \(K=8\) randomly chosen intra-sequence couplings per site. All SR runs were carried out with \(l_{\text{max}}=2\) and \(\overline{R}_{\text{init}}=0.01\), and with the total number of steps \(l_{\text{tot}}=1.5\times 10^{6},10^{6},5\times 10^{5}\) for the models with \(200\), \(500\) and \(1000\) spins/sites, respectively. (A) SK model: shown are the best fitness values found in each run, averaged over \(10\) independent runs with randomly chosen starting states. Error bars show standard deviations. (B) SK model: shown are the globally best fitness values obtained over all \(10\) runs. (C) Same as A but for the NK model. (D) Same as B but for the NK model. The dashed horizontal line is drawn at \(0.7\) to guide the eye.
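The one-sided Kolmogorov-Smirnov comparisons described in the captions above (e.g., Figs. S4, S7, S10, S16E) can be reproduced with scipy. In the sketch below the two arrays are synthetic stand-ins for the collected best-fitness values; the numbers and the random seed are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

# Stand-ins for two sets of best-fitness values (50 independent runs each);
# in practice these arrays are collected from the optimizer runs themselves.
rng = np.random.default_rng(0)
best_ref = rng.normal(-0.05, 0.02, 50)     # cell with the highest average
best_other = rng.normal(-0.10, 0.03, 50)   # any other heatmap cell

# One-sided two-sample KS test. With alternative='less', the alternative
# hypothesis is CDF(best_ref) < CDF(best_other), i.e. best_ref is shifted
# toward larger (better) fitness values.
stat, pval = ks_2samp(best_ref, best_other, alternative="less")
print(f"KS statistic = {stat:.3f}, one-sided p-value = {pval:.3g}")
```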
Many problems in science and engineering require finding the global minimum or maximum of an objective function, often defined in a high-dimensional setting, where each function evaluation may carry a substantial computational cost. The importance of global optimization has motivated the development of numerous heuristic algorithms based on analogies with physical, chemical, and biological systems. Here we propose a new algorithm, SmartRunner, which uses a Bayesian probabilistic model based on the history of accepted moves to decide on the next random trial. In this way, SmartRunner performs learned optimization over the objective function and the set of available actions, aiming to maximize the fitness gain (or energy loss) per function evaluation. Our approach amounts to adding a simple adaptive penalty to the original objective function, with which SmartRunner
2310.20544
Information-theoretic causality and applications to turbulence: energy cascade and inner/outer layer interactions
We introduce an information-theoretic method for quantifying causality in chaotic systems. The approach, referred to as IT-causality, quantifies causality by measuring the information gained about future events conditioned on the knowledge of past events. The causal interactions are classified into redundant, unique, and synergistic contributions depending on their nature. The formulation is non-intrusive, is invariant under invertible transformations of the variables, and provides the missing causality due to unobserved variables. The method only requires pairs of past-future events of the quantities of interest, making it convenient for both computational simulations and experimental investigations. IT-causality is validated in four scenarios representing basic causal interactions among variables: mediator, confounder, redundant collider, and synergistic collider. The approach is leveraged to address two questions relevant to turbulence research: i) the scale locality of the energy cascade in isotropic turbulence, and ii) the interactions between inner and outer layer flow motions in wall-bounded turbulence. In the former case, we demonstrate that causality in the energy cascade flows sequentially from larger to smaller scales without requiring intermediate scales. Conversely, the flow of information from small to large scales is shown to be redundant. In the second problem, we observe a unidirectional causality flow, with causality predominantly originating from the outer layer and propagating towards the inner layer, but not vice versa. The decomposition of IT-causality into intensities also reveals that the causality is primarily associated with high-velocity streaks.
Adrián Lozano-Durán, Gonzalo Arranz, Yuenong Ling
2023-10-31T15:26:18
http://arxiv.org/abs/2310.20544v1
Information-theoretic causality and applications to turbulence: energy cascade and inner/outer layer interactions ###### Abstract We introduce an information-theoretic method for quantifying causality in chaotic systems. The approach, referred to as IT-causality, quantifies causality by measuring the information gained about future events conditioned on the knowledge of past events. The causal interactions are classified into redundant, unique, and synergistic contributions depending on their nature. The formulation is non-intrusive, invariance under invertible transformations of the variables, and provides the missing causality due to unobserved variables. The method only requires pairs of past-future events of the quantities of interest, making it convenient for both computational simulations and experimental investigations. IT-causality is validated in four scenarios representing basic causal interactions among variables: mediator, confounder, redundant collider, and synergistic collider. The approach is leveraged to address two questions relevant to turbulence research: i) the scale locality of the energy cascade in isotropic turbulence, and ii) the interactions between inner and outer layer flow motions in wall-bounded turbulence. In the former case, we demonstrate that causality in the energy cascade flows sequentially from larger to smaller scales without requiring intermediate scales. Conversely, the flow of information from small to large scales is shown to be redundant. In the second problem, we observe a unidirectional causality flow, with causality predominantly originating from the outer layer and propagating towards the inner layer, but not vice versa. The decomposition of IT-causality into intensities also reveals that the causality is primarily associated with high-velocity streaks. The Python scripts to compute IT-causality can be found here. Information theory, causality, turbulence, wall-bounded turbulence, energy cascade, inner/outer motions ## 1 Introduction Causality is the mechanism through which one event contributes to the genesis of another (Pearl, 2009). Causal inference stands as a cornerstone in the pursuit of scientific knowledge (Bunge, 2017): it is via the exploration of cause-and-effect relationships that we are able to gain understanding of a given phenomenon and to shape the course of events by deliberate actions. Despite its ubiquity, the adoption of specialized tools tailored for unraveling causal relationships remains limited within the turbulence community. In this work, we propose a non-intrusive method for quantification of causality formulated within the framework of information theory. Causality in turbulence research is commonly inferred from a combination of numerical and experimental data interrogation. The tools encompass diverse techniques such as statistical analysis, energy budgets, linear and nonlinear stability theory, coherent structure analysis, and modal decomposition, to name a few (e.g. Robinson 1991; Reed _et al._ 1996; Cambon & Scott 1999; Schmid 2007; Smits _et al._ 2011; Jimenez 2012; Kawahara _et al._ 2012; Mezic 2013; Haller 2015; Wallace 2016; Jimenez 2018; Marusic & Monty 2019). Additional research strategies involve the use of time cross-correlation between pairs of time signals representing events of interest as a surrogate for causal inference. 
For instance, investigations into turbulent kinetic energy (Jimenez 2018; Cardesa _et al._ 2015) and the spatiotemporal aspects of spectral quantities (e.g., Choi & Moin 1990; Wallace 2014; Wilczek _et al._ 2015; de Kat & Ganapathisubramani 2015; He _et al._ 2017; Wang _et al._ 2020) exemplify some of these efforts. However, it is known that correlation, while informative, does not inherently imply causation as it lacks the directionality and asymmetry required to quantify causal interactions (Beebee _et al._ 2012). While the aforementioned tools have significantly advanced our physical understanding of turbulence, extracting rigorous causal relationships from the current methodologies remains a challenging task. An alternative and intuitive definition of causality is rooted in the concept of interventions: the manipulation of the causing variable results in changes in the effect (Pearl 2009; Eichler 2013). Interventions provide a pathway for evaluating the causal impact that one process \(A\) exerts on another process \(B\). This is achieved by adjusting \(A\) to a modified value \(\widetilde{A}\) and subsequently observing the post-intervention consequences on \(B\). Within the turbulence literature, there are numerous examples where the equations of motion are modified to infer causal interactions within the system (e.g. Jimenez & Moin 1991; Jimenez & Pinelli 1999; Hwang & Cossu 2010; Farrell _et al._ 2017; Lozano-Duran _et al._ 2021, to name a few). Despite the intuitiveness of interventions as a measure of causality, this approach is not without limitations (Eberhardt & Scheines 2007). Causality with interventions is intrusive (i.e., it requires modifying the system) and costly (simulations need to be recomputed for numerical experiments). When data is gathered from physical experiments, establishing causality through interventions can be even more challenging or impractical (for example, in a wind tunnel setup). Additionally, the concept of causality with interventions prompts questions about the type of intervention that must be introduced and whether this intervention could impact the outcome of the exercise as a consequence of forcing the system out of its natural attractor. The framework of information theory, the science of message communication (Shannon 1948), provides an alternative, non-intrusive definition of causality as the information transferred from the variable \(A\) to the variable \(B\). The origin of this idea can be traced back to the work of Wiener (1956), and its initial quantification was presented by Granger (1969) through signal forecasting using linear autoregressive models. In the domain of information theory, this definition was formally established by Massey (1990) and Kramer (1998) through the utilization of conditional entropies, employing what is known as directed information. Building upon the direction of information flow in Markov chains, Schreiber (2000) introduced a heuristic definition of causality termed transfer entropy. In a similar vein, Liang & Kleeman (2006) and subsequently Sinha & Vaidya (2016) suggested inferring causality by measuring the information flow within a dynamical system when one variable is momentarily held constant for infinitesimally small times. More recently, Lozano-Duran & Arranz (2022) proposed a new information-theoretic quantification of causality for multivariate systems that generalizes the definition of transfer entropy. 
Other methods have been proposed for causal discovery beyond the framework of information theory. These include nonlinear state-space reconstruction based on Takens' theorem (Arnhold _et al._, 1999; Sugihara _et al._, 2012), conditional independence-based methods (Runge _et al._, 2018; Runge, 2018), the restricted Perron-Frobenius operator (Jimenez, 2023), and the reconstruction of causal graphs and Bayesian networks (Rubin, 1974; Spirtes _et al._, 2000; Pearl, 2000; Koller & Friedman, 2009; Imbens & Rubin, 2015). Runge _et al._ (2019) offers an overview of the methods for causal inference in the context of Earth system sciences. The reader is also referred to Camps-Valls _et al._ (2023) for a comprehensive summary of causality tools across disciplines. The development and application of causality tools in the realm of turbulence research remain scarce. Among the current studies, we can cite the work by Tissot _et al._ (2014), who used Granger causality to investigate the dynamics of self-sustaining wall-bounded turbulence. Materassi _et al._ (2014) used normalized transfer entropy to study the cascading process in synthetic turbulence generated via the shell model. Liang & Lozano-Duran (2016) and Lozano-Duran _et al._ (2019) applied information-theoretic definitions of causality to unveil the dynamics of energy-containing eddies in wall-bounded turbulence. A similar approach was followed by Wang _et al._ (2021) and Wang _et al._ (2022) to study cause-and-effect interactions in turbulent flows over porous media. Lozano-Duran & Arranz (2022) used information flux among variables to study the energy cascade in isotropic turbulence. Recently, Martinez-Sanchez _et al._ (2023) used transfer entropy to analyze the formation mechanisms of large-scale coherent structures in the flow around a wall-mounted square cylinder. The new method proposed here to quantify causality addresses important deficiencies compared to existing tools in the literature. This work is organized as follows. First, we discuss key aspects of causal inference in §1.1. The method is introduced in §2: the fundamentals of information theory required to formulate the problem of IT-causality are presented in §2.1, and the method itself is described in §2.2 along with a simple example and a discussion of its properties. The approach is validated in simple stochastic systems in §3. IT-causality is used to investigate two important problems in turbulence: the locality in scale of the energy cascade (§4) and the interaction between flow motions in the inner layer and outer layer of wall-bounded turbulence (§5). Finally, limitations of the method are outlined in §6 and conclusions are offered in §7.

### Key concepts in causal inference

We discuss the distinction between causality, association, and correlation, as well as descriptions of the three basic categories of interactions between variables: mediator, confounder, and collider. These are well-established concepts in the field of causal inference (see, for example, Pearl, 2009), yet they remain relatively underutilized within the turbulence research community. Causality refers to the process by which one variable directly influences or determines changes in another variable. In causal relationships, a modification in the cause leads to a corresponding alteration in the effect, and there is a logical and temporal connection between them. For example, the rainy season in a particular region is a cause of increased umbrella sales.
On the other hand, association signifies a statistical connection between two variables where they tend to occur together more frequently than expected by chance. Association does not necessarily imply causation; it could result from common causes, statistical coincidences, or confounding factors. One example of association is the increase in umbrella sales and raincoat sales, which tends to happen concurrently due to the confounding factor of the rainy season. Correlation is a specific form of association that quantifies the strength and direction of a linear (Pearson 1895) or monotonic (Spearman 1987) relationship between two variables. Correlation does not imply causation and can even fail to identify associations. In general, correlation implies association but not causation. Causation implies association but not correlation (Altman & Krzywinski 2015). Another crucial factor to consider is the manner variables interact. Let us consider three variables: \(A\), \(B\), and \(C\). We can identify three fundamental types of interactions as summarized in figure 1. \(\bullet\)_Mediator variables_ emerge in the causal chain from \(A\) to \(C\), with the variable \(B\) acting as a bridge: \(A\to B\to C\). In this scenario, \(B\) is often viewed as the mechanism or mediator responsible for transmitting the influence of \(A\) to \(C\). Mediator variables help explain the underlying mechanisms by which an independent variable influences a dependent variable. A simple example is \(\uparrow\) education level \(\rightarrow\)\(\uparrow\) job skills \(\rightarrow\)\(\uparrow\) income. \(\bullet\)_Confounder variables_ serve as a common cause for two variables: \(B\to A\) and \(B\to C\). Confounder variables have the potential to create a statistical correlation between \(A\) and \(C\), even if there is no direct causal link between them. Consequently, confounding variables are factors that can obscure or distort the genuine relationship between variables. Following the example above, rainy season \(\rightarrow\)\(\uparrow\) umbrella sales and rainy season \(\rightarrow\)\(\uparrow\) raincoat sales. \(\bullet\)_Collider variables_ are caused by the effect of multiple variables: \(A\to B\) and \(C\to B\). This scenario is particularly relevant in chaotic dynamical systems, where most variables are affected by multiple causes due to non-linear coupling. \(\circ\) A collider variable exhibits _redundant_ causes when both \(A\) and \(C\) contribute to the same effect or outcome of \(B\), creating overlapping or duplicative influences on the outcome. Consequently, redundant causes result in multiple pathways to the same effect. For instance, both hard work and high intelligence can independently contribute to the good grades of a student. Note that \(A\) and \(C\) may not necessarily be independent. \(\circ\) A collider variable is caused from _synergistic_ variables if the combined effect of \(A\) and \(C\) on \(B\) surpasses their individual effects on \(B\) when considered separately. As an example, two drugs may be required in tandem to effectively treat a condition; when each drug alone is insufficient. The interactions described above can combine and occur simultaneously, giving rise to more intricate causal networks. The method proposed below aims to distinguish between these various interactions, provided that we have sufficient knowledge of the system. 
Figure 1: Schematic representation of mediator, confounder, and collider variables.

## 2 Method formulation

### Fundamentals of information theory

Let us introduce the concepts of information theory required to formulate the problem of IT-causality. The first question that needs to be addressed is the meaning of _information_, as it is not frequently utilized within the fluid dynamics community. Consider \(N\) quantities of interest at time \(t\) represented by the vector \(\mathbf{Q}=[Q_{1}(t),Q_{2}(t),\ldots,Q_{N}(t)]\). For example, \(Q_{i}(t)\) may be the velocity or pressure of the flow at a given point in space, temporal coefficients obtained from proper orthogonal decomposition, or a spatially-averaged quantity, etc. We treat \(\mathbf{Q}\) as a random variable and consider a finite partition of the observable phase space \(D=\{D_{1},D_{2},\ldots,D_{N_{Q}}\}\), where \(N_{Q}\) is the number of partitions, such that \(D=\cup_{i=1}^{N_{Q}}D_{i}\) and \(D_{i}\cap D_{j}=\emptyset\) for all \(i\neq j\). We use upper case \(Q\) to denote the random variable itself, and lower case \(q\) to denote a particular state or value of \(Q\). The probability of finding the system at state \(D_{i}\) at time \(t\) is \(p(\mathbf{Q}(t)\in D_{i})\), which in general depends on the partition \(D\). For simplicity, we refer to the latter probability as \(p(\mathbf{q})\). The information contained in the variable \(\mathbf{Q}\) is given by (Shannon 1948): \[H(\mathbf{Q})=\sum_{\mathbf{q}}-p(\mathbf{q})\log_{2}[p(\mathbf{q})]\geqslant 0, \tag{1}\] where the summation is over all the states (i.e., values) of \(\mathbf{Q}\). The quantity \(H\) is referred to as the Shannon information or entropy (Shannon 1948). The units of \(H\) are set by the base chosen, in this case 'bits' for base 2. For example, consider a fair coin with \(Q\in\{\text{heads},\text{tails}\}\) such that \(p(\text{heads})=p(\text{tails})=0.5\). The information of the system "tossing a fair coin \(n\) times" is \(H=-\sum 0.5^{n}\log_{2}(0.5^{n})=n\) bits, where the summation is carried out across all possible outcomes (namely, \(2^{n}\)). If the coin is completely biased towards heads, \(p(\text{heads})=1\), then \(H=0\) bits (taking \(0\log 0=0\)), i.e., no information is gained as the outcome was already known before tossing the coin. The Shannon information can also be interpreted in terms of uncertainty: \(H(\mathbf{Q})\) is the average number of bits required to unambiguously determine the state \(\mathbf{Q}\). \(H\) is maximum when all the possible outcomes are equiprobable (indicating a high level of uncertainty in the system's state) and zero when the process is completely deterministic (indicating no uncertainty in the outcome). The Shannon information of \(\mathbf{Q}\) conditioned on another variable \(\mathbf{Q}^{\prime}\) is defined as (Stone 2013): \[H(\mathbf{Q}|\mathbf{Q}^{\prime})=\sum_{\mathbf{q},\mathbf{q}^{\prime}}-p(\mathbf{q},\mathbf{q}^{\prime})\log_{2}[p(\mathbf{q}|\mathbf{q}^{\prime})], \tag{2}\] where \(p(\mathbf{q}|\mathbf{q}^{\prime})=p(\mathbf{q},\mathbf{q}^{\prime})/p(\mathbf{q}^{\prime})\) is the conditional probability distribution, and \(p(\mathbf{q}^{\prime})=\sum_{\mathbf{q}}p(\mathbf{q},\mathbf{q}^{\prime})\) is the marginal probability distribution of \(\mathbf{q}^{\prime}\). It is useful to interpret \(H(\mathbf{Q}|\mathbf{Q}^{\prime})\) as the uncertainty in the state \(\mathbf{Q}\) after conducting the 'measurement' of the state \(\mathbf{Q}^{\prime}\).
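In practice, the discrete quantities of Eqs. (1) and (2) are estimated from finite samples on a chosen partition. The sketch below does this with NumPy histograms for two scalar signals and reproduces the fair-coin example; the variable names and the number of bins are assumptions of the illustration rather than the authors' released scripts.

```python
import numpy as np

def shannon_entropy(q, bins=50):
    """H(Q) in bits from a 1D sample, Eq. (1), using a uniform partition."""
    p, _ = np.histogram(q, bins=bins)
    p = p / p.sum()
    p = p[p > 0]                          # 0*log(0) = 0 by convention
    return -np.sum(p * np.log2(p))

def conditional_entropy(q, qp, bins=50):
    """H(Q|Q') in bits from paired samples, Eq. (2)."""
    pjoint, _, _ = np.histogram2d(q, qp, bins=bins)
    pjoint = pjoint / pjoint.sum()
    pmarg = pjoint.sum(axis=0)            # p(q') marginal
    h = 0.0
    for i in range(pjoint.shape[0]):
        for j in range(pjoint.shape[1]):
            if pjoint[i, j] > 0:
                h -= pjoint[i, j] * np.log2(pjoint[i, j] / pmarg[j])
    return h

# Sanity check with the fair-coin example: H = 1 bit per toss.
rng = np.random.default_rng(0)
coin = rng.integers(0, 2, 100000)
print(shannon_entropy(coin, bins=2))      # ~1.0 bit
```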
If \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\) are independent random variables, then \(H(\mathbf{Q}|\mathbf{Q}^{\prime})=H(\mathbf{Q})\), i.e., knowing the state \(\mathbf{Q}^{\prime}\) does not reduce the uncertainty in \(\mathbf{Q}\). Conversely, \(H(\mathbf{Q}|\mathbf{Q}^{\prime})=0\) if knowing \(\mathbf{Q}^{\prime}\) implies that \(\mathbf{Q}\) is completed determined. Finally, the mutual information between the random variables \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\) is \[I(\mathbf{Q};\mathbf{Q}^{\prime})=H(\mathbf{Q})-H(\mathbf{Q}|\mathbf{Q}^{\prime})=H(\mathbf{Q}^{\prime })-H(\mathbf{Q}^{\prime}|\mathbf{Q}), \tag{3}\] which is a symmetric measure \(I(\mathbf{Q};\mathbf{Q}^{\prime})=I(\mathbf{Q}^{\prime};\mathbf{Q})\) representing the information shared among the variables \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\). The mutual information between variables is central to the formalism presented below. Figure 2 depicts the relationship between the Shannon information, conditional Shannon information, and mutual information. The definitions above can be extended to continuous random variables by replacing summation by integration and the probability mass functions by probability density functions: \[H_{c}(\mathbf{Q}) =\int_{\mathbf{q}}-\rho(\mathbf{q})\log_{2}[\rho(\mathbf{q})]\,\mathrm{d}\mathbf{q}, \tag{4a}\] \[H_{c}(\mathbf{Q}|\mathbf{Q}^{\prime}) =\int_{\mathbf{q},\mathbf{q}^{\prime}}-\rho(\mathbf{q},\mathbf{q}^{\prime})\log_{ 2}[\rho(\mathbf{q}|\mathbf{q}^{\prime})]\,\mathrm{d}\mathbf{q}\,\mathrm{d}\mathbf{q}^{\prime},\] (4b) \[I_{c}(\mathbf{Q};\mathbf{Q}^{\prime}) =H_{c}(\mathbf{Q})-H_{c}(\mathbf{Q}|\mathbf{Q}^{\prime})=H_{c}(\mathbf{Q}^{\prime })-H_{c}(\mathbf{Q}^{\prime}|\mathbf{Q}), \tag{4c}\] where \(H_{c}\) is referred to as the differential entropy, \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\) are now continuous random variables, \(\rho\) denotes probability density function, and the integrals are performed over the support set of \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\). The differential entropy shares many of the properties of the discrete entropy. However, it can be infinitely large, negative, or positive. The method presented here relies on the use of mutual information, which is non-negative in the continuous case. Additionally, it can be shown that if \(\rho(\mathbf{q},\mathbf{q}^{\prime})\log_{2}[\rho(\mathbf{q},\mathbf{q}^{\prime})]\) is Riemann integrable, then \(I(\mathbf{Q}^{\Delta};\mathbf{Q}^{\prime\Delta})\to I_{c}(\mathbf{Q};\mathbf{Q}^{\prime})\) for \(\Delta\to 0\), where \(\mathbf{Q}^{\Delta}\) and \(\mathbf{Q}^{\prime\Delta}\) are the quantized versions of \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\), respectively, defined over a finite partition with a characteristic size of \(\Delta\)(Cover & Thomas, 2006). In the following section, we present our approach using discrete mutual information; nevertheless, a similar formulation is applicable to the continuous case. ### Information-theoretic causality (IT-causality) Our objective is to quantify the causality from the components of \(\mathbf{Q}\left(t\right)\) to the future of the variable \(Q_{j}^{+}=Q_{j}(t+\Delta T)\), where \(Q_{j}\) is one of the components of \(\mathbf{Q}\) and \(\Delta T>0\) represents an arbitrary time lag. Moreover, for each component of \(\mathbf{Q}\), the causality will be decomposed into _redundant_, _unique_, and _synergistic_ contributions to \(Q_{j}^{+}\). The theoretical foundation of the method is rooted in the forward propagation of information in dynamical systems. 
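As with the entropies above, the mutual information of Eq. (3) can be estimated directly from the joint histogram of two binned signals. The sketch below (variable names, bin counts, and the toy signals are assumptions of the example) returns values in bits and approaches zero, up to finite-sampling bias, for independent signals.

```python
import numpy as np

def mutual_information(q, qp, bins=50):
    """I(Q;Q') in bits from paired samples of two scalar signals, Eq. (3)."""
    pjoint, _, _ = np.histogram2d(q, qp, bins=bins)
    pjoint = pjoint / pjoint.sum()
    pq = pjoint.sum(axis=1)               # p(q)
    pqp = pjoint.sum(axis=0)              # p(q')
    mask = pjoint > 0
    outer = np.outer(pq, pqp)
    return np.sum(pjoint[mask] * np.log2(pjoint[mask] / outer[mask]))

# Toy check with two correlated Gaussian signals (noise level is arbitrary).
rng = np.random.default_rng(1)
q = rng.standard_normal(200000)
qp = q + 0.5 * rng.standard_normal(200000)
print(mutual_information(q, qp))                            # shared information, > 0
print(mutual_information(q, rng.standard_normal(200000)))   # ~0, finite-sampling bias aside
```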
Figure 2: Venn diagram of the Shannon information, conditional Shannon information and mutual information between two random variables \(\mathbf{Q}\) and \(\mathbf{Q}^{\prime}\).

Let us consider the information in the variable \(Q_{j}^{+}\), given by \(H(Q_{j}^{+})\). Assuming that all the information in \(Q_{j}^{+}\) is determined by the past states of the system, we can write the equation for the forward propagation of information (Lozano-Duran & Arranz, 2022) \[H(Q_{j}^{+})=\Delta I(Q_{j}^{+};\mathbf{Q})+\Delta I_{\text{leak}\to j}, \tag{5}\] where \(\Delta I(Q_{j}^{+};\mathbf{Q})\) is the information flow from \(\mathbf{Q}\) to \(Q_{j}^{+}\), and \(\Delta I_{\text{leak}\to j}\) is the information _leak_, representing the causality from unobserved variables that influence the dynamics of \(Q_{j}^{+}\) but are not part of \(\mathbf{Q}\). The information leak can be expressed in closed form as a function of the observed variables as \[\Delta I_{\text{leak}\to j}=H(Q_{j}^{+}|\mathbf{Q}), \tag{6}\] that is, the uncertainty in \(Q_{j}^{+}\) given the information in \(\mathbf{Q}\). The amount of available information about \(Q_{j}^{+}\) given \(\mathbf{Q}\) is \[H(Q_{j}^{+})-\Delta I_{\text{leak}\to j}=\Delta I(Q_{j}^{+};\mathbf{Q})=H(Q_{j}^{+})-H(Q_{j}^{+}|\mathbf{Q})=I(Q_{j}^{+};\mathbf{Q}), \tag{7}\] which is the mutual information between \(Q_{j}^{+}\) and \(\mathbf{Q}\), \[I(Q_{j}^{+};\mathbf{Q})=\sum_{q_{j}^{+},\mathbf{q}}p(q_{j}^{+},\mathbf{q})\log_{2}\left(\frac{p(q_{j}^{+}|\mathbf{q})}{p(q_{j}^{+})}\right)=\sum_{q_{j}^{+},\mathbf{q}}p(q_{j}^{+},\mathbf{q})\log_{2}\left(\frac{p(q_{j}^{+},\mathbf{q})}{p(q_{j}^{+})p(\mathbf{q})}\right). \tag{8}\] Equation (8) quantifies the average dissimilarity between \(p(q_{j}^{+})\) and \(p(q_{j}^{+}|\mathbf{q})\). In terms of
(9) can be expanded as \[I(Q^{+}_{j};\mathbf{Q}) =\Delta I^{U}_{1\to j}+\Delta I^{U}_{2\to j}+\Delta I^{U}_{3 \to j}+\Delta I^{U}_{4\to j} \tag{10a}\] \[+\Delta I^{R}_{12\to j}+\Delta I^{R}_{13\to j}+\Delta I^{R}_{14 \to j}+\Delta I^{R}_{23\to j}+\Delta I^{R}_{24\to j}+\Delta I^{R}_{34\to j}+\] (10b) \[+\Delta I^{S}_{12\to j}+\Delta I^{S}_{13\to j}+\Delta I^{S}_{14 \to j}+\Delta I^{S}_{23\to j}+\Delta I^{S}_{24\to j}+\Delta I^{S}_{34\to j}+\] (10c) \[+\Delta I^{R}_{123\to j}+\Delta I^{R}_{124\to j}+\Delta I^{R}_{134 \to j}+\Delta I^{R}_{234\to j}+\] (10d) \[+\Delta I^{S}_{123\to j}+\Delta I^{S}_{124\to j}+\Delta I^{S}_{134 \to j}+\Delta I^{S}_{234\to j}\] (10e) \[+\Delta I^{R}_{1234\to j}+\Delta I^{S}_{1234\to j}. \tag{10f}\] The source of causality might change depending on the value of \(Q^{+}_{j}\). For example, \(Q_{1}\) can be only causal to positive values of \(Q^{+}_{j}\), whereas \(Q_{2}\) can be only causal to negative values of \(Q^{+}_{j}\). For that reason, we define the specific mutual information (DeWeese & Meister 1999) from \(\mathbf{Q}_{\mathbf{t}}\) to a particular event \(Q^{+}_{j}=q^{+}_{j}\) as \[\bar{\imath}(Q^{+}_{j}=q^{+}_{j};\mathbf{Q}_{\mathbf{t}})=\sum_{\mathbf{q}_{\mathbf{t}}}p(\mathbf{ q}_{\mathbf{t}}|q^{+}_{j})\log_{2}\left(\frac{p(q^{+}_{j}|\mathbf{q}_{\mathbf{t}})}{p(q^{+}_{j})} \right)\geqslant 0. \tag{11}\] Note that the specific mutual information is a function of the random variable \(\mathbf{Q}_{\mathbf{l}}\) (which encompasses all its states) but only a function of one particular state of the target variable (namely, \(q_{j}^{+}\)). For the sake of simplicity, we will use \(\tilde{\mathbf{i}}_{\mathbf{l}}(q_{j}^{+})=\tilde{\mathbf{i}}(Q_{j}^{+}=q_{j}^{+};\mathbf{Q}_{ \mathbf{l}})\). Similarly to Eq. (8), the specific mutual information quantifies the dissimilarity between \(p(q_{j}^{+})\) and \(p(q_{j}^{+}|\mathbf{q})\) but in this case for the particular state \(Q_{j}^{+}=q_{j}^{+}\). The mutual information between \(Q_{j}^{+}\)and \(\mathbf{Q}_{\mathbf{l}}\) is recovered by \(I(Q_{j}^{+};\mathbf{Q}_{\mathbf{l}})=\sum_{q_{j}^{+}}p(q_{j}^{+})\tilde{\mathbf{i}}_{\mathbf{l }}(q_{j}^{+})\). We are now in the position of outlining the steps involved in the calculation of redundant, unique, and synergistic causalities (figure 4). The particular choices of the method were made to comply with the intuition behind mediator, confounder, and collider interactions (SS1.1) along with the ease of interpretability of the results. The reader is referred to Appendix A for the formal definitions. For a given value \(Q_{j}^{+}=q_{j}^{+}\), the _specific_ redundant, unique, and synergistic causalities are calculated as follows: (i) The specific mutual information are computed for all possible combinations of variables in \(\mathbf{Q}\). This includes specific mutual information of order one (\(\tilde{t}_{1},\tilde{t}_{2},\ldots\)), order two (\(\tilde{t}_{12},\tilde{t}_{13},\ldots\)), order three (\(\tilde{t}_{123},\tilde{t}_{124},\ldots\)), and so forth. One example is shown in figure 4(a). (ii) The tuples containing the specific mutual information of order \(M\), denoted by \(\tilde{\mathcal{G}}^{M}\), are constructed for \(M=1,\ldots,N\). The components of each \(\tilde{\mathcal{G}}^{M}\) are organized in ascending order as shown in figure 4(b). 
(iii) The specific redundant causalities, \(\Delta\tilde{\mathbf{i}}^{R}_{\mathbf{i}}(q_{j}^{+})\), are defined as the increments of information in \(\tilde{\mathcal{G}}^{1}\) common to all the components of \(\mathbf{Q}_{\mathbf{i}}\) (blue contributions in figure 4(c)). (iv) The specific unique causality, \(\Delta\tilde{t}_{i}^{U}(q_{j}^{+})\), is defined as the increment of information from \(Q_{i}\) that cannot be obtained from any other individual variable \(Q_{k}\) with \(k\neq i\) (red contribution in figure 4(c)). (v) The specific synergistic causalities, \(\Delta\tilde{t}_{\mathbf{i}}^{S}(q_{j}^{+})\), are defined as the increments of information due to the joint effect of the components of \(\mathbf{Q}_{\mathbf{i}}\) in \(\tilde{\mathcal{G}}^{M}\) with \(M>1\) (yellow contributions in figure 4(c)). The first increment is computed using as reference the largest specific mutual information from the previous tuple (dotted line in figure 4(c)). (vi) The specific redundant, unique and synergistic causalities that do not appear in the steps above are set to zero. (vii) The steps (i) to (vi) are repeated for all the states of \(Q_{j}^{+}\) (figure 4(d)). (viii) The IT-causalities (redundant, unique, and synergistic) are obtained as the expectation of their corresponding specific values with respect to \(Q_{j}^{+}\), \[\Delta I_{\mathbf{i}\to j}^{R}=\sum_{q_{j}^{+}}p(q_{j}^{+})\Delta\tilde{t}_{\mathbf{i}}^{R}(q_{j}^{+}), \tag{12a}\] \[\Delta I_{\mathbf{i}\to j}^{U}=\sum_{q_{j}^{+}}p(q_{j}^{+})\Delta\tilde{t}_{\mathbf{i}}^{U}(q_{j}^{+}), \tag{12b}\] \[\Delta I_{\mathbf{i}\to j}^{S}=\sum_{q_{j}^{+}}p(q_{j}^{+})\Delta\tilde{t}_{\mathbf{i}}^{S}(q_{j}^{+}). \tag{12c}\] (ix) Finally, we define the average order of the specific IT-causalities with respect to \(Q_{j}^{+}\) as \[N_{\mathbf{i}\to j}^{\alpha}=\sum_{q_{j}^{+}}p(q_{j}^{+})n_{\mathbf{i}\to j}^{\alpha}(q_{j}^{+}), \tag{13}\] where \(\alpha\) denotes R, U, or S, and \(n_{\mathbf{i}\to j}^{\alpha}(q_{j}^{+})\) is the order of appearance of \(\Delta\tilde{t}_{\mathbf{i}}^{\alpha}(q_{j}^{+})\) from left to right, as in the example shown in figure 4. The values of \(N^{\alpha}_{\mathbf{i}\to j}\) will be used to plot \(\Delta I^{\alpha}_{\mathbf{i}\to j}\) following the expected order of appearance of \(\Delta\tilde{t}^{\alpha}_{\mathbf{i}\to j}\).

### Simple example of IT-causality

We illustrate the concept of redundant, unique, and synergistic causality in three simple examples. The examples represent a system with two inputs \(Q_{1}\) and \(Q_{2}\) and one output \(Q_{3}^{+}=f(Q_{1},Q_{2})\). The inputs can take two values, \(\{0,1\}\), randomly and independently distributed, each with a probability of \(0.5\). The causal description of the system is characterized by the four components: \[H(Q_{3}^{+})=\Delta I^{U}_{1\to 3}+\Delta I^{U}_{2\to 3}+\Delta I^{R}_{12\to 3}+\Delta I^{S}_{12\to 3}, \tag{14}\] where \(\Delta I_{\text{leak}\to 3}=0\) as \(H(Q_{3}^{+}|Q_{1},Q_{2})=0\). The results for the three cases are summarized in figure 5. The first example represents a system in which \(Q_{2}=Q_{1}\) (duplicated input) and the output is given by \(Q_{3}^{+}=Q_{1}\). In this case, both \(Q_{1}\) and \(Q_{2}\) provide the same information about the output and the only non-zero term in Eq. (14) is the redundant causality \(\Delta I^{R}_{12\to 3}=1\) bit. In the second example, the output is given by \(Q_{3}^{+}=Q_{1}\) with no dependence on \(Q_{2}\), which only results in the unique causality \(\Delta I^{U}_{1\to 3}=1\) bit.
In the last example, the output is given by the exclusive-OR operator: \(Q_{3}^{+}=Q_{1}\oplus Q_{2}\) such that \(Q_{3}^{+}=1\) if \(Q_{1}\neq Q_{2}\) and \(Q_{3}^{+}=0\) otherwise. In this case, the output behaves randomly when observing \(Q_{1}\) or \(Q_{2}\) independently. However, the outcome is completely determined when the joint variable \([Q_{1},Q_{2}]\) is considered. Hence, \([Q_{1},Q_{2}]\) contains more information than their individual components and all the causality comes from the synergistic causality \(\Delta I^{S}_{12\to 3}=1\) bit. ### Contribution of different intensities to IT-causality The IT-causalities can be decomposed in terms of different contributions from \(\mathbf{Q}_{t}\) and \(Q_{j}^{+}\). For the case of the unique causality, the IT-causality as function of \(q_{i}\) and \(q_{j}^{+}\) is denoted by \(\Delta\zeta^{U}_{i\to j}\) and it is such that \[\Delta I^{U}_{i\to j}=\sum_{q_{i}}\sum_{q_{j}^{+}}\Delta\zeta^{U}_{i\to j}(q_{ i},q_{j}^{+}). \tag{15}\] The expression for \(\Delta\zeta^{U}_{i\to j}\) is obtained inverting Eq. (15) \[\Delta\zeta^{U}_{i\to j}=\begin{cases}p(q_{j}^{+})p(q_{i}|q_{j}^{+})\log_{2} \left(\frac{p(q_{j}^{+}|q_{i})}{p(q_{j}^{+})}\right)-p(q_{j}^{+})\frac{\tilde{ q}_{l}}{N_{Q}},&\text{for}\;\tilde{t}_{i}\geqslant\tilde{t}_{l},\\ 0,&\text{otherwise},\end{cases} \tag{16}\] where \(\tilde{t}_{l}\) is the second largest member in \(\tilde{\mathcal{G}}^{1}\) and \(N_{Q}\) is the total number of states of \(q_{i}\). Analogous definitions can be written for the redundant and synergistic causalities. Note that \(\Delta\zeta^{U}_{i\to j}\) may be negative for a given \(q_{i}\)-\(q_{j}^{+}\) pair; although the sum of all the components is non-negative. Positive values of \(\Delta\zeta^{U}_{i\to j}\) are _informative_ (the pair \(q_{i}\)-\(q_{j}^{+}\) occurs more frequently than would be expected) and negative values are _misinformative_ (the pair \(q_{i}\)-\(q_{j}^{+}\) occurs less frequently than would be expected). ### Properties of IT-causality We discuss some properties of the IT-causality. \(\bullet\)_Non-negativity_. All the terms in Eq. (9) are non-negative by the definition of the redundant, unique and synergistic causalities, and the non-negativity of the specific mutual information (DeWeese & Meister 1999). Figure 4: Schematic of the steps involved in the calculation of specific causalities. For a given state \(Q^{+}_{j}=q^{+}_{j}\), the panels illustrate: (a) all possible specific mutual information values for a collection of four variables; (b) tuples of specific mutual information with the components organized in ascending order; (c) the increments corresponding to specific redundant (blue), unique (red), and synergistic (yellow) causalities; and (d) examples of specific causalities for different states of \(Q^{+}_{j}\). * _Reconstruction of individual mutual information_. The mutual information between \(Q_{i}\) and \(Q_{j}^{+}\) is equal to the unique and redundant causalities containing \(Q_{i}\) \[I(Q_{i};Q_{j}^{+})=\Delta I_{i\to j}^{U}+\sum_{\mathbf{i}\in\mathcal{C}_{i}} \Delta I_{\mathbf{i}\to j}^{R},\] (2.17) where \(\mathcal{C}_{i}\) is the set of the combinations in \(\mathcal{C}\) containing the variable \(Q_{i}\). This condition aligns with the notion that the information shared between \(Q_{i}\) and \(Q_{j}^{+}\) comprises contributions from unique and redundant information. However, there is no contribution from synergistic information, as the latter only arises through the combined effects of variables. 
This property, along with the non-negativity and forward propagation of information, enables the construction of the causality diagrams as depicted in figure 6 for two and three variables. * _Zero-causality property_. If \(Q_{j}^{+}\) is independent of \(Q_{i}\), then \(\Delta I_{\mathbf{i}\to j}^{R}=0\) for \(\mathbf{i}\in\mathcal{C}_{i}\) and \(\Delta I_{i\to j}^{U}=0\) as long as \(Q_{i}\) is observable. * _Invariance under invertible transformations_. The redundant, unique, and synergistic causalities are invariant under invertible transformations of \(\mathbf{Q}\). This property follows from the invariance of the mutual information (Cover & Thomas 2006). ## 3 Validation in stochastic systems We discuss four illustrative examples highlighting key distinctions between IT-causality and time cross-correlations. These examples are not representative of any specific dynamical system. However, the phenomena they portray are expected to emerge to varying degrees in more complex systems. For comparisons, the "causality" based on the time cross-correlation from \(Q_{i}\) to \(Q_{j}\) is defined as \[C_{i\to j}=\frac{|\sum_{n=1}^{N_{t}}Q_{i}^{\prime}(t_{n})Q_{j}^{\prime}(t_{n}+ \Delta T)|}{\left(\sum_{n=1}^{N_{t}}Q_{i}^{\prime 2}(t_{n})\right)^{1/2}\left( \sum_{n=1}^{N_{t}}Q_{j}^{\prime 2}(t_{n})\right)^{1/2}}\geqslant 0, \tag{3.1}\] Figure 5: Schematic of simple examples (top panels) and associated specific mutual information (bottom panels) for (a) duplicated input (pure redundant causality), (b) output equal to first input (pure unique causality), and (c) exclusive-OR output (pure synergistic causality). The schematics of the specific mutual information apply to both states \(Q_{3}^{+}=0\) and \(Q_{3}^{+}=1\). where \(Q_{i}^{\prime}(t_{n})\) signifies the fluctuating component of \(Q_{i}(t_{n})\) at time \(t_{n}\) with respect to its mean value, and \(N_{t}\) is the total number of time steps considered for the analysis. The values of Eq. (3.1) are bounded between 0 and 1. In all the scenarios discussed below, we consider a system with three variables \(Q_{1}(t_{n})\), \(Q_{2}(t_{n})\), and \(Q_{3}(t_{n})\) with discrete times \(t_{n}=n\). The systems is initialized at the first step with \(Q_{1}(1)=Q_{2}(1)=Q_{3}(1)=0\). A time-varying stochastic forcing, denoted by \(W_{i}(t_{n})\), acts on \(Q_{i}(t_{n})\) following a Gaussian distribution with a mean of zero and standard deviation of one. IT-causality is computed for the time lag \(\Delta T=1\) using a partition of 50 uniform bins per variable. The integration of the systems is carried out over \(10^{8}\) time steps and the first 10,000 steps are excluded from the analysis to avoid transient effects. * Mediator variable (\(Q_{3}\to Q_{2}\to Q_{1}\)). The first example corresponds to the system: \[Q_{1}(n+1) =\sin[Q_{2}(n)]+0.01W_{1}(n),\] (3.2a) \[Q_{2}(n+1) =\cos[Q_{3}(n)]+0.01W_{2}(n),\] (3.2b) \[Q_{3}(n+1) =0.9Q_{3}(n)+0.1W_{3}(n),\] (3.2c) where \(Q_{2}\) is the mediator variable between \(Q_{1}\) and \(Q_{3}\). The results are shown in figure 7, which includes an schematic of the functional dependence among variables, the IT-causality, and time cross-correlations. IT-causality reveals the prevalence of the unique contributions \(\Delta I_{3\to 3}^{U}\), \(\Delta I_{3\to 2}^{U}\), and \(\Delta I_{2\to 1}^{U}\), in addition to some redundant contributions consistent with figure 7(a). The unique causalities are compatible with the functional dependency \(Q_{3}\to Q_{2}\to Q_{1}\). 
This is less evident when using correlations, as \(C_{1\to 1}\), \(C_{2\to 1}\), \(C_{1\to 2}\), and \(C_{2\to 2}\) are large with values between 0.5 and 0.95. IT-causality also provides information about the amount of information leak (i.e. missing causality), which is above 50% for all three variables due to the effect of the stochastic forcing terms (which assumed to be unknown). The latter cannot be quantified using correlations. Figure 6: Diagram of the decomposition into redundant, unique, and synergistic causalities and contributions to total and individual mutual information for (a) two variables and (b) three variables. * Confounder variable (\(Q_{3}\to Q_{1}\) and \(Q_{3}\to Q_{2}\)). The second example considered is: \[Q_{1}(n+1) =\sin[Q_{1}(n)+Q_{3}(n)]+0.01W_{1}(n),\] (3.3a) \[Q_{2}(n+1) =\cos[Q_{2}(n)-Q_{3}(n)]+0.01W_{2}(n),\] (3.3b) \[Q_{3}(n+1) =0.9Q_{3}(n)+0.1W_{3}(n),\] (3.3c) where \(Q_{3}\) is a confounder variable to \(Q_{1}\) and \(Q_{2}\). The results are depicted in figure 8. The influence of the confounder variable becomes apparent through the synergistic causalities, namely, \(\Delta I_{13\to 1}^{S}\) and \(\Delta I_{23\to 2}^{S}\), as \(Q_{3}\) co-occurs with \(Q_{1}\) and \(Q_{2}\) in Eq. (3.3a) and Eq. (3.3b), respectively. The unique causality \(\Delta I_{3\to 3}^{U}\) dominates \(Q_{3}\). The non-zero redundant causality \(\Delta I_{123\to 3}^{R}\) implies that all variables contain information about the future of \(Q_{3}\), but that information is already contained in the past of \(Q_{3}\). When considering correlations, drawing robust conclusions regarding the interplay among variables presents a greater challenge due to the strong correlations observed across all possible pairs of variables, with values ranging between 0.6 and 1. * Collider with synergistic variables (\([Q_{2},Q_{3}]\to Q_{1}\)). The third example correspond to the system: \[Q_{1}(n+1) =\sin[Q_{2}(n)Q_{3}(n)]+0.001W_{1}(n),\] (3.4a) \[Q_{2}(n+1) =0.9Q_{2}(n)+0.1W_{2}(n),\] (3.4b) \[Q_{3}(n+1) =0.9Q_{3}(n)+0.1W_{3}(n),\] (3.4c) where \(Q_{2}\) and \(Q_{3}\) work synergistically to influence \(Q_{1}\). Essentially, \(Q_{2}Q_{3}\) behaves as a single random variable acting on \(Q_{1}\). The results, depicted in figure 9, clearly reveal the synergistic effect of \(Q_{2}\) and \(Q_{3}\) on \(Q_{1}\), as evidenced by \(\Delta I_{23\to 1}^{S}\). It is worth noting that, in this case, correlations do not hint at any influence of \(Q_{2}\) and \(Q_{3}\) on \(Q_{1}\). * Collider with redundant variables (\(Q_{2}\equiv Q_{3}\to Q_{1}\)). The last example corresponds to Figure 7: System with mediator variable from Eq. (3.2). (a) Schematic of the functional dependence among variables. \(W_{i}\) represents stochastic forcing to the variable. (b) Redundant, unique and synergistic causalities. The grey bar is the information leak. The IT-causalities are ordered from left to right according to \(N_{I\to j}^{\alpha}\). (c) Time cross-correlation between variables. the system: \[Q_{1}(n+1) =0.3Q_{1}(n)+0.7\{\sin[Q_{2}(n)Q_{3}(n)]+0.1W_{1}(n)\} \tag{3.5a}\] \[Q_{2}(n+1) =0.9Q_{2}(n)+0.1W_{2}(n),\] (3.5b) \[Q_{3}(n+1) \equiv Q_{2}(n+1), \tag{3.5c}\] Figure 8: System with a confounder variable from Eq. (3.3). (a) Schematic of the functional dependence among variables. \(W_{i}\) represents stochastic forcing to the variable. (b) Redundant, unique and synergistic causalities. The grey bar is the information leak. The IT-causalities are ordered from left to right according to \(N^{\alpha}_{\mathbf{i}\to j}\). 
(c) Time cross-correlation between variables. Figure 9: Collider with synergistic variables from Eq. (3.4). (a) Schematic of the functional dependence among variables. \(W_{i}\) represents stochastic forcing to the variable. (b) Redundant, unique and synergistic causalities. The grey bar is the information leak. The IT-causalities are ordered from left to right according to \(N^{\alpha}_{\mathbf{i}\to j}\). (c) Time cross-correlation between variables. where \(Q_{3}\) is identical to \(Q_{2}\). In this scenario, both \(Q_{2}\) and \(Q_{3}\) convey the same information regarding each other influence on the future of \(Q_{1}\). Consistently, the non-zero IT-causalities for \(Q_{2}\) and \(Q_{3}\) are \(\Delta I^{R}_{23\to 2}=\Delta I^{R}_{23\to 3}\neq 0\). The active IT-causalities in \(Q_{1}\) is the redundant contribution \(\Delta I^{R}_{123\to 1}\) and \(\Delta I^{R}_{23\to 1}\). The variables \(Q_{2}\) and \(Q_{3}\) are also highly correlated, but no assessment can be made about their redundancy from the correlation viewpoint. Furthermore, the correlation analysis do not show any influence from \(Q_{2}\) and \(Q_{3}\) to \(Q_{1}\). Additional validation cases are offered in the Appendix B. These include coupled logistic maps with synchronization and coupled Rossler-Lorenz system. ## 4 Scale locality of the energy cascade in isotropic turbulence The cascade of energy in turbulent flows, namely, the transfer of kinetic energy from large to small flow scales or vice versa (backward cascade), has been the cornerstone of most theories and models of turbulence since the 1940s (e.g., Richardson 1922; Obukhov 1941; Kolmogorov 1941, 1962; Aoyama _et al._ 2005; Falkovich 2009; Cardesa _et al._ 2017). However, understanding the dynamics of kinetic energy transfer across scales remains an outstanding challenge. Given the ubiquity of turbulence, a deeper understanding of the energy transfer among the flow scales could enable significant progress across various fields, ranging from combustion (Veynante & Vervisch 2002), meteorology (Bodenschatz 2015), and astrophysics (Young & Read 2017) to engineering applications of aero/hydrodynamics (Sirovich & Karlsson 1997; Hof _et al._ 2010; Marusic _et al._ 2010; Kuhnen _et al._ 2018; Ballouz & Ouellette 2018). Despite the progress made in recent decades, the causal interactions of energy among scales in the turbulent cascade have received less attention. Here, we investigate the redundant, unique, and synergistic causality of turbulent kinetic energy transfer across different scales. The primary hypothesis under consideration here is the concept of scale locality within the cascade, where kinetic energy is transferred sequentially from one scale to the subsequent smaller scale. Figure 10: Collider with redundant variables from Eq. (3.5). (a) Schematic of the functional dependence among variables. \(W_{i}\) represents stochastic forcing to the variable. (b) Redundant, unique and synergistic causalities. The grey bar is the information leak. The IT-causalities are ordered from left to right according to \(N^{\alpha}_{I\to j}\). (c) Time cross-correlation between variables. ### Numerical database The case chosen to study the energy cascade is forced isotropic turbulence in a triply periodic box with side \(L\). The data were obtained from the DNS of Cardesa _et al._ (2015), which is publicly available in Torroja (2021). 
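As a closing note on the validation cases, the stochastic benchmarks of §3 are inexpensive to reproduce. A minimal sketch of the mediator system of Eq. (3.2) together with the normalized cross-correlation of Eq. (3.1) is given below; the step count, random seed, and time lag are illustrative choices, and the IT-causality estimates themselves rely on the released Python scripts rather than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
nsteps, lag = 200_000, 1
Q = np.zeros((3, nsteps))                        # rows: Q1, Q2, Q3; initialized to zero

# Mediator chain Q3 -> Q2 -> Q1, Eq. (3.2), driven by Gaussian forcing W_i.
for n in range(nsteps - 1):
    W = rng.standard_normal(3)
    Q[0, n + 1] = np.sin(Q[1, n]) + 0.01 * W[0]
    Q[1, n + 1] = np.cos(Q[2, n]) + 0.01 * W[1]
    Q[2, n + 1] = 0.9 * Q[2, n] + 0.1 * W[2]
Q = Q[:, 10_000:]                                # discard the transient, as in the text

def cross_corr(qi, qj, dt):
    """Normalized |cross-correlation| from Q_i to Q_j at lag dt >= 1, Eq. (3.1)."""
    qi = qi - qi.mean()
    qj = qj - qj.mean()
    num = np.abs(np.sum(qi[:-dt] * qj[dt:]))
    return num / np.sqrt(np.sum(qi**2) * np.sum(qj**2))   # full-record denominators

for i in range(3):
    row = "  ".join(f"C_{i+1}->{j+1} = {cross_corr(Q[i], Q[j], lag):.2f}" for j in range(3))
    print(row)
```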
The conservation of mass and momentum equations of an incompressible flow are given by \[\frac{\partial u_{i}}{\partial t}+\frac{\partial u_{i}u_{j}}{\partial x_{j}}=- \frac{\partial\Pi}{\partial x_{i}}+\nu\frac{\partial^{2}u_{i}}{\partial x_{j} \partial x_{j}}+f_{i},\quad\frac{\partial u_{i}}{\partial x_{i}}=0, \tag{4.1}\] where repeated indices imply summation, \(\mathbf{x}=[x_{1},x_{2},x_{3}]\) are the spatial coordinates, \(u_{i}\) for \(i=1,2,3\) are the velocities components, \(\Pi\) is the pressure, \(\nu\) is the kinematic viscosity, and \(f_{i}\) is a linear forcing sustaining the turbulent flow (Rosales & Meneveau, 2005). The flow setup is characterized by Reynolds number based on the Taylor microscale (Pope, 2000), \(Re_{\lambda}\approx 380\). The simulation was conducted by direct numerical simulation of Eq. (4.1) with \(1024^{3}\) spatial Fourier modes, which is enough to accurately resolve all the relevant length-scales of the flow. In the following, we provide a summary of the main parameters of the simulation. For more detailed information about the flow setup, the readers are referred to Cardesa _et al._ (2015). The spatial and time-averaged values of turbulent kinetic energy (\(K=u_{i}u_{i}/2\)) and dissipation (\(\varepsilon=2\nu S_{ij}S_{ij}\)) are indicated as \(K_{\rm avg}\) and \(\varepsilon_{\rm avg}\), respectively. Here, \(S_{ij}=(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x_{i})/2\) represents the rate-of-strain tensor. The ratio between the largest and smallest length scales in the problem can be quantified as \(L_{\varepsilon}/\eta=1800\), where \(L_{\varepsilon}=K_{\rm avg}^{3/2}/\varepsilon_{\rm avg}\) denotes the integral length scale, and \(\eta=(\nu^{3}/\varepsilon_{\rm avg})^{1/4}\) signifies the Kolmogorov length scale. The generated data is also time-resolved, with flow fields being stored at intervals of \(\Delta t=0.0076T_{\varepsilon}\), where \(T_{\varepsilon}=K_{\rm avg}/\varepsilon_{\rm avg}\). The simulation was intentionally run for an extended period to ensure the accurate computation of specific mutual information. The total simulated time after transient effects was equal to \(165T_{\varepsilon}\). ### Characterization of the kinetic energy transfer The next stage involves quantifying the transfer of kinetic energy among eddies at different length scales over time. To accomplish this, the \(i\)-th component of the instantaneous flow velocity, denoted as \(u_{i}(\mathbf{x},t)\), is decomposed into contributions from large and small scales according to \(u_{i}(\mathbf{x},t)=\tilde{u}_{i}(\mathbf{x},t)+u_{i}^{\prime}(\mathbf{x},t)\). The operator \((\cdot)\) signifies the low-pass Gaussian filter given by \[\tilde{u}_{i}(\mathbf{x},t)=\iiint_{V}\frac{\sqrt{\pi}}{\bar{\Delta}}\exp\left[- \pi^{2}(\mathbf{x}-\mathbf{x}^{\prime})^{2}/\bar{\Delta}^{2}\right]u_{i}(\mathbf{x}^{ \prime},t)\mathrm{d}\mathbf{x}^{\prime}, \tag{4.2}\] where \(\bar{\Delta}\) is the filter width and \(V\) denotes integration over the whole flow domain. Examples of the unfiltered and filtered velocities are included in figure 11. 
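To make the scale decomposition concrete, the following is a minimal sketch of the Gaussian low-pass filter in Eq. (4.2) applied in Fourier space on a triply periodic box. It assumes NumPy is available, treats the kernel as a product of unit-integral one-dimensional Gaussians, and uses an illustrative grid size and filter widths rather than those of the actual DNS; the function name is ours, not from the original study.

```python
import numpy as np

def gaussian_lowpass(u, L, delta):
    """Low-pass Gaussian filter of width `delta` for a scalar field `u` on a
    triply periodic box of side `L` (sketch of Eq. 4.2).

    In Fourier space the kernel of Eq. (4.2), taken as a product of
    unit-integral 1-D Gaussians, reduces to exp(-delta^2 |k|^2 / (4 pi^2))."""
    n = u.shape[0]                                    # cubic grid assumed, n^3 points
    k1 = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers along one axis
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    G = np.exp(-delta**2 * (kx**2 + ky**2 + kz**2) / (4.0 * np.pi**2))
    return np.real(np.fft.ifftn(G * np.fft.fftn(u)))

# Illustrative usage on a synthetic field: the filtered field retains the
# large scales while its variance decreases as the filter width grows.
rng = np.random.default_rng(0)
n, L = 64, 2.0 * np.pi
u = rng.standard_normal((n, n, n))
for delta in (0.05 * L, 0.10 * L):
    print(delta, u.std(), gaussian_lowpass(u, L, delta).std())
```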
The kinetic energy of the large-scale field evolves as \[\left(\frac{\partial}{\partial t}+\tilde{u}_{j}\frac{\partial}{\partial x_{j} }\right)\frac{1}{2}\tilde{u}_{i}\tilde{u}_{i}=-\frac{\partial}{\partial x_{j} }\left(\tilde{u}_{j}\bar{\Pi}+\tilde{u}_{i}\tau_{ij}-2\nu\tilde{u}_{i}\tilde{ S}_{ij}\right)+\Sigma-2\nu\tilde{S}_{ij}\tilde{S}_{ij}+\tilde{u}_{i}\tilde{f}_{i}, \tag{4.3}\] where \(\tau_{ij}=(\overline{u_{i}u_{j}}-\tilde{u}_{i}\tilde{u}_{j})\) is the subgrid-scale stress tensor, which represents the effect of the (filtered) small-scale eddies on the (resolved) large-scale eddies. The interscale energy transfer between the filtered and unfiltered scales is given by \[\Sigma(\mathbf{x},t;\bar{\Delta})=\tau_{ij}(\mathbf{x},t;\bar{\Delta}_{i})\tilde{S}_{ij }(\mathbf{x},t;\bar{\Delta}_{i}), \tag{4.4}\] which is the quantity of interest. The velocity field is low-pass filtered at four filter widths: \(\bar{\Delta}_{1}=163\eta\), \(\bar{\Delta}_{2}=81\eta\), \(\bar{\Delta}_{3}=42\eta\), and \(\bar{\Delta}_{4}=21\eta\). The filter widths are situated within the inertial range of the simulation: \(L_{\varepsilon}>\bar{\Delta}_{i}>\eta\), for \(i=1,2,3\) and \(4\). The resulting velocity fields are used to compute the interscale energy transfer at scale \(\bar{\Delta}_{i}\), which is denoted by \(\Sigma_{i}(\mathbf{x},t;\bar{\Delta}_{i})\). We use the volume-averaged value of \(\Sigma_{i}\) computed over the entire domain, denoted by \(\langle\Sigma_{i}\rangle\), as a marker for the time-evolution of the interscale energy transfer: \[\langle\Sigma_{i}\rangle(t)=\iiint_{\mathcal{V}}\Sigma(\mathbf{x},t;\bar{\Delta}_{ i})\mathrm{d}\mathbf{x}, \tag{4.5}\] which is only a function of time for a given \(\bar{\Delta}_{i}\). Figure 12(a) contains a fragment of the time history of \(\langle\Sigma_{i}\rangle\) for \(i=1,2,3\) and \(4\). Figure 11: Instantaneous velocity field for (a) \(u_{1}\) and (b) \(\bar{u}_{1}\). The velocities are normalized by their respective standard deviations. Figure 12: (a) Extract of the time-history of fluctuating component of \(\langle\Sigma_{1}\rangle\), \(\langle\Sigma_{2}\rangle\), \(\langle\Sigma_{3}\rangle\), and \(\langle\Sigma_{4}\rangle\) (from black to red). Although not shown, the whole time-span of the signals is \(165T_{\varepsilon}\). The primes denote fluctuating component above the mean value and \(\sigma_{i}\) is the standard deviation of \(\langle\Sigma_{i}\rangle\). (b) Time horizon of causal influence for maximum unique causuality from \(\langle\Sigma_{i}\rangle\) to \(\langle\Sigma_{j}\rangle^{+}\). \(\Delta t^{U}_{i\to j}\) with \(j\neq i\) as a function of the filter width. The dashed line is \(\Delta T_{j}\sim\bar{\Delta}^{2/3}\). \(T_{\eta}\) and \(\eta\) are the Kolmorogov time-scale and length-scale, respectively. ### IT-causality analysis of the energy cascade We examine the causal interactions among the variables representing the interscale turbulent kinetic energy transfer: \(\langle\Sigma\rangle=[\langle\Sigma_{1}\rangle,\langle\Sigma_{2}\rangle,\langle \Sigma_{3}\rangle,\langle\Sigma_{4}\rangle]\). For a given target variable, \(\langle\Sigma_{j}\rangle\), the time delay \(\Delta T_{j}\) used to evaluate causality is determined as the time required for maximum \(\Delta I_{i\to j}^{U}\) with \(j\neq i\), where \(\langle\Sigma_{j}\rangle^{+}\) is evaluated at \(t+\Delta T_{j}\). Figure 12(b) shows that the time lags for causal influence increase with the filter width. 
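As an illustration of Eqs. (4.4) and (4.5), the sketch below assembles the subgrid-scale stress, the filtered rate-of-strain tensor, and the volume-averaged interscale transfer at a single filter width. It assumes NumPy, spectral derivatives on the periodic box, and a low-pass filter callable such as the `gaussian_lowpass` helper sketched above; function names are ours, and the sketch is illustrative rather than the pipeline used to generate the data.

```python
import numpy as np

def ddx(f, axis, L):
    """Spectral derivative of a periodic field `f` along `axis` (box side `L`)."""
    n = f.shape[axis]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    shape = [1] * f.ndim
    shape[axis] = n
    return np.real(np.fft.ifft(1j * k.reshape(shape) * np.fft.fft(f, axis=axis), axis=axis))

def interscale_transfer(u, L, delta, filt):
    """Volume-averaged interscale energy transfer <Sigma> at filter width `delta`.

    `u` is a list/tuple of the three velocity components and `filt(field, L, delta)`
    is a low-pass filter, e.g. the Gaussian filter of Eq. (4.2)."""
    ub = [filt(ui, L, delta) for ui in u]                         # filtered velocities
    sigma = np.zeros_like(u[0])
    for i in range(3):
        for j in range(3):
            tau_ij = filt(u[i] * u[j], L, delta) - ub[i] * ub[j]  # subgrid-scale stress
            S_ij = 0.5 * (ddx(ub[i], j, L) + ddx(ub[j], i, L))    # filtered strain rate
            sigma += tau_ij * S_ij                                 # Eq. (4.4)
    return sigma.mean()                                            # volume average, Eq. (4.5)
```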
According to the Kolmogorov theory (Kolmogorov, 1941), the characteristic lifetime of an eddy in the inertial range scales as \(\sim\tilde{\Delta}^{2/3}\). Assuming that the time required for interscale energy transfer is proportional to the eddy lifetime, it is expected that \(\Delta T_{j}\sim\tilde{\Delta}^{2/3}\). The values of \(\Delta T_{j}\) are consistent with the scaling provided by \(\tilde{\Delta}^{2/3}\) (also included in the figure), albeit the observation is limited to only three scales due to low Reynolds number effects. It was tested that the conclusions drawn below are not affected when the value of \(\Delta T_{j}\) was halved and doubled. The redundant, unique, and synergistic causalities are shown in figure 13(a) for each \(\langle\Sigma_{j}\rangle\), \(j=1,\ldots,4\). The most important contributions come from redundant and unique causalities, whereas synergistic causalities play a minor role. The top panel in figure 13(b) shows the causal map for \(\Delta I_{i\to j}^{U}\). It is interesting that the causal map for unique causalities vividly captures the forward energy cascade of causality toward smaller scales, which is inferred from the non-zero terms \(\Delta I_{1\to 2}^{U}\), \(\Delta I_{2\to 3}^{U}\), and \(\Delta I_{3\to 4}^{U}\). Curiously, there is no unique causality observed from smaller to larger scales, and any backward causality solely arises through redundant causal relationships. In the context of IT-causality, this implies that no new information is conveyed from the smaller scales to the larger ones. It is also revealing to compare the results in the top panel of figure 13(b) with the time cross-correlation, as the latter is routinely employed for causal inference by the fluid mechanics community. The time-cross-correlation "causality" from \(\langle\Sigma_{i}\rangle\) to \(\langle\Sigma_{j}\rangle\) is defined using Eq. (11) with \(Q_{i}=\langle\Sigma_{i}\rangle\) and \(Q_{j}=\langle\Sigma_{j}\rangle\). The time lag \(\Delta T_{ij}\) depends on the pair \(\langle\Sigma_{i}\rangle\)-\(\langle\Sigma_{j}\rangle\) and is obtained as the time for maximum correlation. The correlation map, \(C_{i\to j}\), is shown in the bottom panel of figure 13(b). The process portrayed by \(C_{i\to j}\) is far more intertwined than its IT-causality counterpart offered in the top panel of figure 13(b). Similarly to \(\Delta I_{i\to j}^{U}\), the correlation map also reveals the prevailing nature of the forward energy cascade (\(C_{i\to j}\) larger for \(j>i\)). However, note that \(C_{i\to j}\) is always above 0.7, implying that all the interscale energy transfers are tightly coupled. This is due to the inability of \(C_{i\to j}\) to compensate for mediator variables (e.g., a cascading process of the form \(\langle\Sigma_{1}\rangle\rightarrow\langle\Sigma_{2}\rangle\rightarrow\langle \Sigma_{3}\rangle\) would result in a non-zero correlation between \(\langle\Sigma_{1}\rangle\) and \(\langle\Sigma_{3}\rangle\) via the mediator variable \(\langle\Sigma_{2}\rangle\)). As a consequence, \(C_{i\to j}\) also fails to shed light on whether the energy is cascading locally from the large scales to the small scales (i.e., \(\langle\Sigma_{1}\rangle\rightarrow\langle\Sigma_{2}\rangle\rightarrow\langle \Sigma_{3}\rangle\rightarrow\langle\Sigma_{4}\rangle\)), or on the other hand, the energy is transferred between non-contiguous scales (e.g., \(\langle\Sigma_{1}\rangle\rightarrow\langle\Sigma_{3}\rangle\) without passing through \(\langle\Sigma_{2}\rangle\)). 
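For reference, the correlation map \(C_{i\to j}\) used as a baseline above reduces to a scan of the normalized lagged correlation between two volume-averaged transfer signals, with the lag of maximum correlation taken as \(\Delta T_{ij}\). A minimal sketch follows; it assumes NumPy, and the function and argument names are placeholders of our own.

```python
import numpy as np

def lagged_correlation(qi, qj, max_lag):
    """Normalized correlation between qi(t) and qj(t + lag) for lag = 0..max_lag.

    Returns the correlation at each lag and the lag of maximum correlation,
    used here as the time horizon for the correlation-based "causality"."""
    qi = (qi - qi.mean()) / qi.std()
    qj = (qj - qj.mean()) / qj.std()
    corr = np.array([np.mean(qi[: len(qi) - lag] * qj[lag:])
                     for lag in range(max_lag + 1)])
    return corr, int(np.argmax(corr))
```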
We have seen that IT-causality supports the former: the energy is transferred sequentially. Overall, the inference of causality based on the time cross-correlation is obscured by the often milder asymmetries in \(C_{i\to j}\) and the failure of \(C_{i\to j}\) to account for the effects of intermediate variables. In contrast, the causal map of unique causalities \(\Delta I_{i\to j}^{U}\) conveys a more intelligible picture of the locality of energy transfers among different scales. Our conclusions are consistent with evidence from diverse approaches, such as scaling analysis (Zhou, 1993, 2005, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016), triadic interactions in Fourier space (Domaradzki & Rogallo, 1990, 2009), time cross-correlations (Cardesa _et al._, 2015, 2017), and transfer entropy in the Gledzer-Ohkitana-Yamada shell model (Materassi _et al._, 2014). The IT-causality can be decomposed into contributions from different intensities of the target variable (\(\langle\Sigma_{j}\rangle^{+}\)). This information is provided by the weighted specific causalities \(p(\langle\Sigma_{j}\rangle^{+})\Delta\vec{i}_{\alpha\to j}^{\alpha}\), which are shown in figure 14(a) for \(\langle\Sigma_{2}\rangle^{+}\). The unique causalities can also be decomposed as a function of the source variable. This is done for the unique causality from \(\langle\Sigma_{1}\rangle\) to \(\langle\Sigma_{2}\rangle^{+}\), denoted by \(\Delta\vec{i}_{1\to 2}^{U}\) (as seen in figure 14b). The results show that the unique causality follows the linear relationship \(\langle\Sigma_{1}\rangle^{\prime}\sim\langle\Sigma_{2}\rangle^{\prime+}\). In general, events located around the mean value contribute the most to the causality. However, there is a small bias towards values above the mean \(\langle\Sigma_{1}\rangle^{\prime}>0\). Similar conclusions can be drawn for the other unique causalities (not shown). Finally, we calculate the information leak (\(\Delta I_{\mathrm{leak}\to j}\)) to quantify the amount of causality unaccounted for by the observable variables. The ratios \(\Delta I_{\mathrm{leak}\to j}/H(\langle\Sigma_{j}\rangle)\) are found to be 0.14, 0.47, 0.31, and 0.20 for \(j=1,2,3\), and 4, respectively, and for the time lags considered. The largest leak occurs for \(\langle\Sigma_{2}\rangle\), where approximately 47% of the IT-causality is carried by variables not included within the set \(\lfloor\langle\Sigma_{1}\rangle,\langle\Sigma_{2}\rangle,\langle\Sigma_{3} \rangle,\langle\Sigma_{4}\rangle\rfloor\). This implies that there are other factors affecting \(\langle\Sigma_{2}\rangle\) that have not been accounted for and that explain the remaining 53% causality of the variable. Conversely, the largest scale \(\langle\Sigma_{1}\rangle\) bears the smallest leak of 14%, which is due to the high value of the unique causality \(\Delta I_{1\to 1}^{U}\). The latter implies that the future of \(\langle\Sigma_{1}\rangle\) is mostly determined by its own past. Figure 13: (a) Redundant (R), unique (U), and synergistic (S) causalities among interscale energy-transfer signals at different scales. The information leak for each variable is also shown in the right-hand side bar. The causalities are ordered from left to right according to \(N_{I\to j}^{\alpha}\). (b) Top panel: causality maps for unique causalities \(\Delta I_{i\to j}^{U}\). Bottom panel: Correlation map \(C_{i\to j}\) between interscale energy-transfer signals as defined by Eq. (1). 
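The information-leak ratios quoted above can be estimated directly from binned data. The sketch below assumes the leak is the conditional entropy of the future target given all observed variables, \(H(Q_j^+|\mathbf{Q})\), normalized by \(H(Q_j^+)\), i.e., the share of the target entropy not explained by the observed set; the precise definition is given earlier in the formulation, and the function name and bin count here are illustrative.

```python
import numpy as np

def information_leak(qj_future, q_past, n_bins=15):
    """Assumed leak estimate: H(Qj+ | Q) / H(Qj+) from a joint histogram.

    `qj_future` holds samples of the target at t + dT and `q_past` is an
    (n_samples, n_vars) array of the observed variables at time t. Coarse
    bins are used because high-dimensional histograms are data-hungry."""
    data = np.column_stack([qj_future, q_past])
    p, _ = np.histogramdd(data, bins=n_bins)
    p = p / p.sum()

    def entropy(pk):
        pk = pk[pk > 0]
        return -np.sum(pk * np.log2(pk))

    H_joint = entropy(p.ravel())                                      # H(Qj+, Q)
    H_past = entropy(p.sum(axis=0).ravel())                           # H(Q)
    H_future = entropy(p.sum(axis=tuple(range(1, p.ndim))).ravel())   # H(Qj+)
    return (H_joint - H_past) / H_future                              # H(Qj+|Q) / H(Qj+)
```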
## 5 Interaction between inner and outer layer motions in wall-bounded turbulence The behavior of turbulent motion within the thin fluid layers immediately adjacent to solid boundaries poses a significant challenge for understanding and prediction. These layers are responsible for nearly 50% of the aerodynamic drag on modern airliners and play a crucial role in the first 100 meters of the atmosphere, influencing broader meteorological phenomena (Marusic _et al._, 2010). The physics in these layers involve critical processes occurring very close to the solid boundary, making accurate measurements and simulations exceptionally difficult. Previous investigations utilizing experimental data from high Reynolds number turbulent boundary layers have revealed the impact of large-scale boundary-layer motions on the smaller-scale near-wall cycle (e.g. Hutchins & Marusic, 2007; Mathis _et al._, 2009). To date, the consensus holds that the dynamics of the near-wall layer operate autonomously (Jimenez & Pinelli, 1999), implying that outer-scale motions are not necessary to sustain a realistic turbulence in the buffer layer. Similarly, the large-scale outer-layer flow motions are known to be insensitive to perturbations of the near-wall cycle (Townsend, 1976). Nevertheless, it is widely accepted that the influence of large-scale motions in the inner layer is manifested through amplitude modulation of near-wall events. In the initial studies aimed at understanding outer-inner layer interactions (Mathis _et al._, 2009; Marusic _et al._, 2010; Mathis _et al._, 2011; Agostini & Leschziner, 2014), temporal or spatial signals of large-scale streamwise velocity motions were extracted using pre-defined filter cut-offs. The superposition of the large- and small-scale fluctuations in the near-wall region was then parameterized assuming universal small-scale fluctuations in the absence of inner-outer interactions. Subsequent refinements of the method include the work by Agostini & Leschziner (2016), who separated the large-scale and small-scale motions by means of the Empirical Mode Decomposition (Huang _et al._, 1998; Cheng _et al._, 2019) without explicit wavelength cutoffs. Later, Agostini & Leschziner (2022) used an auto-encoder algorithm to separate three-dimensional flow fields into large-scale and small-scale motions. Recently, Towne _et al._ (2023) used conditional transfer entropy to study inner/outer layer interactions in a turbulent boundary layer. Howland & Yang (2018) also investigated the small-scale response to large-scale fluctuations in turbulent flows in a more general setting along with its Figure 14: Specific (a) Redundant (R), unique (U), and synergistic (S) causalities from \(\langle\Sigma_{i}\rangle\) to \(\langle\Sigma_{2}\rangle^{+}\) as a function of the intensity of \(\langle\Sigma_{2}\rangle^{+}\). (b) Decomposition of unique causality as a function of \(\langle\Sigma_{1}\rangle\) and \(\langle\Sigma_{2}\rangle^{+}\). The primes denote fluctuating component above the mean value and \(\sigma_{\rm f}\) is the standard deviation of \(\langle\Sigma_{i}\rangle\). implications on wall modeling. A discussion of the amplitude modulation process in other turbulent flows can be found in Fiscaletti _et al._ (2015). Here, we leverage IT-causality to investigate the interaction between motions in the outer and inner layers of wall-bounded turbulence. Figure 15 illustrates the configuration used to examine the causal interactions between velocity motions in the outer layer and the inner layer. 
Two hypotheses are considered: (i) the footprint of outer flow large-scale motions on the near-wall motions and (ii) Townsend's outer-layer similarity hypothesis (Townsend, 1976). In hypothesis (i), the dynamics of the near-wall motions are presumed to be influenced by outer flow large-scale motions penetration close to the wall (i.e., predominance of top-down causality). In hypothesis (ii), the outer region of a turbulent boundary layer is expected to exhibit universal behavior that is independent of the specific characteristics of the underlying wall surface (i.e., lack of bottom-up causality). ### Numerical database The database utilized is a turbulent channel flow at friction Reynolds number \(\mathrm{Re}_{\tau}=u_{\tau}h/\nu\approx 1,000\) from Lozano-Duran & Jimenez (2014) based on the channel half-height (\(h\)), the kinematic viscosity (\(\nu\)), and the average friction velocity at the wall (\(u_{\tau}\)). The simulation was generated by solving the incompressible Navier-Stokes equations between two parallel walls via direct numerical simulation. The size of the domain in the streamwise, wall-normal, and spanwise direction is \(2\pi h\times 2h\times\pi h\), respectively. Velocity fields were stored sequentially in time every 0.8 plus units to resolve a large amount of the frequency content of the turbulent flow. Here, plus units are defined in terms of the friction velocity and the kinematic viscosity. The streamwise, wall-normal, and spanwise directions of the flow are denoted by \(x\), \(y\), and \(z\), respectively. More details about the simulation set-up can be found in Lozano-Duran & Jimenez (2014). The time-resolved signals considered are those of the streamwise velocity at the wall-normal heights \(y_{I}^{*}=15\) (for the inner layer) and \(y_{O}/h=0.3\) (for the outer layer), where superscript \(*\) denotes plus units. The inner and outer layer streamwise velocity signals are denoted by \(u_{I}(t)=u(x,y_{I},z,t)\) and \(u_{O}(t)=u(x,y_{O},z,t)\), respectively. Figure 16 shows an excerpt of \(u_{I}\) and \(u_{O}\) for a fixed \((x,z)\) location. The joint probability distribution to evaluate IT-causality was calculated using multiple streamwise and spanwise locations \((x,z)\). Some studies have considered a space-shift in the streamwise direction between both signals to increase their correlation (Howland & Yang, 2018). Here, we do not apply a streamwise Figure 15: Schematic of outer-layer and inner-layer streamwise velocity motions in a turbulent boundary and their interactions via unique causality. space-shift between the two signals but instead we do account for the relative displacement between signals using the time lag \(\Delta T\) to evaluate the IT-causality. ### IT-causality analysis of inner/outer layer interactions The goal is to evaluate the cause-and-effect interactions between \(u_{I}\) and \(u_{O}\). Four causal interactions are investigated: the self-induced unique causality of the signals, \(\Delta I_{O\to O}^{U}\) (outer\(\rightarrow\)outer) and \(\Delta I_{I\to I}^{U}\) (inner\(\rightarrow\)inner), and the cross-induced unique causality between signals, \(\Delta I_{I\to O}^{U}\) (inner\(\rightarrow\)outer) and \(\Delta I_{Q\to I}^{U}\) (outer\(\rightarrow\)inner). The latter represents the interaction between outer and inner layer motions. The self- and cross- unique causalities are compiled in figure 17 as a function of the time lag \(\Delta T\). 
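The lag scan behind these results can be reproduced, in simplified form, with a binned estimate of the information shared between the outer signal and the future inner signal. The sketch below (assuming NumPy) uses plain mutual information as a stand-in for the full redundant/unique/synergistic decomposition of Appendix A, so it only indicates where the inner/outer coupling peaks in time; the function names, bin count, and lags are illustrative.

```python
import numpy as np

def binned_mutual_information(x, y, n_bins=50):
    """Mutual information I(x; y) in bits, estimated from a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))

def lag_scan(u_source, u_target, lags):
    """I(u_source(t); u_target(t + lag)) for each lag (in samples).

    With u_source = u_O and u_target = u_I, the peak of this scan plays the
    role of the time horizon of maximum inner/outer coupling."""
    return np.array([binned_mutual_information(u_source[: len(u_source) - lag],
                                               u_target[lag:]) for lag in lags])
```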
The self-induced unique causalities \(\Delta I_{O\to O}^{U}\) and \(\Delta I_{I\to I}^{U}\) dominate for short time-scales. This is expected, as variables are mostly causal to themselves in the immediate future. Regarding inner/outer interactions, there is a clear peak at \(\Delta T^{*}\approx 30\) from the outer motions to the inner motions as seen in \(\Delta I_{O\to I}^{U}\). This peak is absent in the reverse direction \(\Delta I_{I\to O}^{U}\), which is essentially zero at all time scales. The result distinctly supports the prevalence of top-down interactions: causality flows predominantly from the outer-layer large-scale motions to inner-layer small-scale motions. The outcome is consistent with the modulation of the near-wall scales by large-scale motions reported in previous investigations (e.g. Hutchins & Marusic 2007; Mathis _et al._ 2009). The lack of bottom-up causality from the inner layer to the outer layer is also consistent with the Townsend's outer-layer similarity hypothesis (Townsend 1976) and previous observations in the literature (e.g. Flack _et al._ 2005; Flores & Jimenez 2006; Busse & Sandham 2012; Mizuno & Jimenez 2013; Chung _et al._ 2014; Lozano-Duran & Bae 2019). Figure 18(a) shows the redundant, unique, and synergistic causalities among velocity signals in the inner and outer layer at \(\Delta T^{*}=30\). This value corresponds to the time lag of maximum causal inference for \(\Delta I_{O\to I}^{U}\). The inner layer motions are dominated by the unique causality from the outer layer, \(\Delta I_{O\to I}^{U}\). The redundant and synergistic causalities are lower but still significant. Curiously, the unique causality \(\Delta I_{I\to I}^{U}\) is zero at the time scale considered. For the outer-layer motions, most of the causality is self-induced \(\Delta I_{O\to O}^{U}\) with no apparent influence from the inner layer. The information leak, as indicated in the right-hand sidebar, is 99% for both \(u_{I}\) and \(u_{O}\). Such a high value implies that most of the causality determining the future states of \(u_{I}\) and \(u_{O}\) is contained in other variables not considered in the analysis. This high information leak value is unsurprising, considering that the analysis has neglected most of the turbulent flow Figure 16: Example of time signals of the streamwise velocity in the inner layer \(u_{I}(x,y_{I},z,t)\) at \(y_{I}^{*}=15\) (dashed) and the outer layer \(u_{O}(x,y_{O},z,t)\) at \(y_{O}/h=0.3\) (solid) for a fixed \((x,z)\). The signals are extracted from a turbulent channel flow at \(\mathrm{Re}_{\tau}\approx 1,000\). The asterisk denotes plus units. field to evaluate the causality of \(u_{I}\) and \(u_{O}\), retaining only the degrees of freedom represented by two pointwise signals. The IT-causality is contrasted with the time cross-correlations in figure 18(b) using the same time lag of \(\Delta T^{*}=30\). All correlations are below 30%. Nonetheless, the fact that \(C_{O\to I}>C_{I\to O}\) also hints at the higher influence of the outer layer flow on the inner layer motions. However, this asymmetry in the directionality of the interactions is milder and less assertive than the IT-causality results. Furthermore, correlations do not offer a detailed decomposition into redundant, unique, and synergistic causality, nor do they account for the effect of unobserved variables as quantified by the information leak. 
Finally, we investigate the contribution of different intensities of \(u_{O}\) and \(u_{I}\) to the IT-causalities, focusing on top-down interactions. The redundant, unique, and synergistic specific causalities for \(u_{I}^{*}\) are shown in figure 19(a). The unique causality as a function of \(u_{O}\) and \(u_{I}\) (namely, \(\Delta\zeta_{O\to I}^{U}\)) is presented in figure 19(b). The contributions are clearly divided into four quadrants. Most of the causality is located in the first quadrant (\(u_{O}^{\prime}>0\) and \(u_{I}^{\prime}>0\)), followed by the third quadrant (\(u_{O}^{\prime}<0\) and \(u_{I}^{\prime}<0\)), as both contain informative events (\(\Delta\zeta_{O\to I}^{U}>0\)). The second and fourth quadrants contain misinformative events (\(\Delta\zeta_{O\to I}^{U}<0\)) that do not increase the value of the unique causality. The results seem to indicate that causality from the outer to the inner layer primarily occurs within high-velocity streaks in the outer layer that propagate towards the wall. A significant, albeit weaker, causality is also observed for low-velocity streaks. It has been well-documented that high- and low-velocity streaks are statistically accompanied by downward and upward flow motions referred to as sweeps and ejections, respectively (Wallace 2016), which explains the high values of \(\Delta\zeta_{O\to I}^{U}\) for \(u_{O}^{\prime}>0\)-\(u_{I}^{\prime}>0\) and \(u_{O}^{\prime}<0\)-\(u_{I}^{\prime}<0\). The absence of causality for \(u_{O}^{\prime}\) and \(u_{I}^{\prime}\), with opposite signs, can be attributed to the fact that these situations are improbable, as they imply a change of sign in the velocity streak along the wall-normal direction. In summary, in comparison to previous investigations, IT-causality is purposely devised to account for the time dynamics of the signals and provides the time scale for maximum causal inference. Moreover, the present approach only requires the inner/outer time signals without any further manipulation. IT-causality is based on probability distributions and, as such, is invariant under shifting, rescaling, and, in general, invertible transformations of the signals (see SS2.5). Hence, our approach unveils the interactions between inner and outer layer velocity motions in a simple manner while minimizing the number of arbitrary parameters Figure 17: Unique causality of the inner and outer layer streamwise velocity motions. (a) Self- and (b) cross-induced unique causality as a function of the time lag. The vertical dashed line is \(\Delta T^{*}=30\). Figure 19: (a) Redundant (R), unique (U), and synergistic (S) specific causalities from \(u_{O}\) to \(u_{I}^{+}\) as a function of the intensity of \(u_{I}^{+}\). (b) Decomposition of unique causality \(\Delta\zeta_{O\to I}^{U}\) as a function of \(u_{O}\) to \(u_{I}^{+}\). The primes denote fluctuating component above the mean value and \(\sigma_{u_{I}}\) and \(\sigma_{u_{O}}\) are the standard deviations of \(u_{O}\) to \(u_{I}\), respectively. Figure 18: (a) Redundant (R), unique (U), and synergistic (S) causalities among velocity signals in the inner (I) and outer (O) layer of wall-bounded turbulence. The information leak for each variable is also shown in the right-hand side bar. (b) Time cross-correlation between variables. The results are for a time-lag \(\Delta T^{*}\approx 30\). 
typically used in previous studies, such as the type of signal transformation, filter width, reference convection velocity, and selection of parameters for the extraction of the universal signal. ## 6 Limitations We conclude this work by discussing several limitations of IT-causality for causal inference in chaotic dynamical systems: * The first consideration is the acknowledgment that IT-causality is merely a tool. As such, it can offer valuable physical insights into the problem at hand when used effectively. The formulation of meaningful questions and the selection of appropriate variables to answer those questions will always take precedence over the choice of the tool. * IT-causality operates within a probabilistic framework. The concept of information-theoretic causality used here is based on transitional probabilities between states. Therefore, it should be perceived as a probabilistic measure of causality rather than a means of quantifying causality for individual events or causality inferred from interventions into the system. * IT-causality is a data-intensive tool due to the necessity of estimating high-dimensional probability distributions. Consequently, the applicability of the method is limited when dealing with a large number of variables. Typically, it can be applied to analyze up to five to ten variables simultaneously, depending on the available data. Attempting to consider a greater number of variables can introduce significant statistical errors. However, this limitation is expected to become less restrictive with the ongoing proliferation of data and advancements in probability estimation methods. Appendix C contains an analysis of the impact of the number of samples in IT-causality. * IT-causality relies on mutual information, which is a specific instance of the Kullback-Leibler divergence. Other dissimilarity methods exist to quantify differences between probability distributions, all of which are equally valid and could be explored in the future to devise alternative quantification of causality. * The results obtained through IT-causality analysis are contingent on the observable phase-space. High-dimensional chaotic systems, such as turbulence, involve nonlinear interactions among a large amount of variables. However, IT-causality is often limited to a finite subset of variables. When variables with a strong impact on the dynamics are unobserved or intentionally omitted due to practical constraints, the inferred causal structure among observed variables can be incomplete or misleading. This situation is referred to as lack of causal sufficiency. Appendix SSB contains an example of the impact of unobserved variables in the coupled Rossler-Lorenz system. * The concept of causality, as interpreted in IT-causality, is inherently linked to changes in the variables. This is reflected in the partitioning of the variables into states (\(D_{i}\)), which are regions where variable values fall within a specific range. The partitioning scheme shapes our definition of change; a variable is considered to have changed when it transitions between states. Hence, different partitions may lead to different IT-causalities. For continuous variables, IT-causality will become eventually insensitive to further refinements of the partition provided the smoothness in the probability distributions, although exceptions may arise. Appendix C also provides an analysis of the sensitivity of IT-causality to partition refinement. 
* IT-causality is specifically designed for dynamic variables and, as such, cannot be employed to analyze parameters that remain constant over time. In summary, IT-causality may offer valuable insights into causality within complex systems, especially when compared to correlation-based methods. However, like any tool, it comes with constraints that researchers should be aware of when applying the approach to draw meaningful conclusions. ## 7 Conclusions Causality lies at the heart of scientific inquiry, serving as the cornerstone for understanding how variables relate to one another. In the case of turbulence research, the chaotic and multiscale nature of the flow makes the quantification of causality particularly challenging. For that reason, traditional approaches for assessing relationships between variables often fall short in measuring causal links, highlighting the need for more in-depth approaches. We have introduced an information-theoretic method to quantify causality among variables by measuring the dissimilarity between transitional probability distributions. The approach, referred to as IT-causality, is rooted in the forward propagation of information in chaotic dynamical systems and quantifies causality by measuring the information gained about future events. One distinctive feature of IT-causality compared to correlation-based methods is its suitability for analyzing causal networks that involve mediator, confounder, and collider variables. In the latter, our method allows us to distinguish between redundant, unique, and synergistic causality among variables. IT-causality can also be decomposed as a function of variable intensities, which facilitates the evaluation of how different states contribute to the overall causality. Another essential aspect of IT-causality is its foundation on probability distributions, rendering it invariant under shifting, rescaling, and general invertible transformations of the variables. Finally, we have introduced the concept of information leak, quantifying the extent of causality that remains unaccounted for due to unobserved variables. IT-causality has been applied to investigate two problems in turbulence research. In the first problem, we tested the hypothesis of scale locality in the energy cascade of isotropic turbulence. Time-resolved data from direct numerical simulations of isotropic turbulence were used for the analysis. First, the velocity field was low-pass filtered to obtain the interscale energy transfer among four scales (\(\tilde{\Delta}\)) within the inertial range. The interscale energy transfer was volume-averaged to extract time-resolved signals, which served as markers for the dynamics of the energy cascade. IT-causality was applied to these signals to uncover the causal relationships involved in the energy transfer at different scales. It was found that the time scale for maximum causal inference for the unique causalities follows \(\tilde{\Delta}^{2/3}\), consistent with the Kolmogorov theory. Most of the causality among energy transfer signals is either redundant or unique, with barely any synergistic causality. This suggests that much of the information contained in the signals is either duplicated or originates from a unique source. In particular, the most pronounced unique causalities occurred between consecutive scales, progressing from larger to smaller scales. 
This finding supports the hypothesis of scale locality in the interscale energy transfer in isotropic turbulence, where the energy propagates sequentially from one scale to the next smaller scale. Finally, the analysis of the contribution of different intensities to the total unique causality revealed a linear relationship between the magnitudes of the causal variable and its effect: large-scale events of a given intensity contribute the most to smaller-scale events of similar relative intensity. In the second problem, we explored the interaction between streamwise velocity motions within the inner and outer layers of wall-bounded turbulence. To accomplish this, we utilized time-resolved data from a direct numerical simulation of turbulent channel flow. IT-causality was applied to two pointwise signals of the streamwise velocity. The first signal was extracted from the outer layer, positioned at 30% of the channel's half-height. The second signal was located within the inner layer, situated at a distance of 15 plus units from the wall. The analysis revealed a unidirectional flow of causality, with causality predominantly originating from the outer layer and propagating towards the inner layer. This unidirectional nature suggests a clear influence from the outer layer dynamics on those in the inner layer, as previously observed, but not vice versa. The time horizon for maximum causal inference from the outer to the inner layer spanned 30 plus units. The decomposition of causality contributions as a function of velocity intensity revealed that the causality from the outer layer to the inner layer is primarily associated with high-velocity streaks. These streaks extend from the outer layer down to the wall and are consistent with the well-known association of sweeps and high-speed streaks in wall-bounded turbulence. Lastly, it was observed that the information leak amounted to approximately 99% for both velocity signals. This substantial value indicates that a significant portion of the causality governing the velocity signals resides within variables that were not taken into account during the analysis. This is expected, as most of the degrees of freedom in the system were neglected in the analysis. We have shown that IT-causality offers a natural approach for examining the relationships among variables in chaotic dynamical systems, such as turbulence. By focusing on the transitional probability distributions of states, IT-causality provides a framework that aligns seamlessly with the inherent unpredictability and complexity of chaotic systems, opening up a new avenue for advancing our understanding of these phenomena. ## 8 Acknowledgements This work was supported by the National Science Foundation under Grant No. 2140775 and MISTI Global Seed Funds and UPM. G. A. was partially supported by the Predictive Science Academic Alliance Program (PSAAP; grant DE-NA0003993) managed by the NNSA (National Nuclear Security Administration) Office of Advanced Simulation and Computing and the STTR N68335-21-C-0270 with Cascade Technologies, Inc. and the Naval Air Systems Command. The authors acknowledge the MIT SuperCloud and Lincoln Laboratory Supercomputing Center for providing HPC resources that have contributed to the research results reported within this paper. The authors would like to thank Adam A. Sliwiak, Alvaro Martinez-Sanchez, Rong Ma, Sam Costa, Julian Powers, and Yuan Yuan for their constructive comments. 
## Appendix A Formal definition of redundant, unique, and synergistic causalities The problem of defining redundant, unique, and synergistic causalities can be generally framed as the task of decomposing the mutual information \(I(Q_{j}^{+};\mathbf{Q})\). The definitions proposed here are motivated by their consistency with the properties presented in this section along with the ease of interpretability. Alternative definitions are possible and other decompositions have been suggested in the literature (e.g. Williams & Beer, 2010; Griffith & Koch, 2014; Griffith & Ho, 2015; Ince, 2017; Gutknecht _et al._, 2021; Lozano-Duran & Arranz, 2022; Kolchinsky, 2022). However, none of the previously existing decompositions are compatible with the properties outlined in SS 2.5. Our definitions of redundant, unique, and synergistic causalities are motivated by the following intuition: * Redundant causality from \(\mathbf{Q_{i}}=[Q_{i_{1}},Q_{i_{2}},\ldots]\) to \(Q_{j}^{+}\) is the common causality shared among all the components of \(\mathbf{Q_{i}}\), where \(\mathbf{Q_{i}}\) is a subset of \(\mathbf{Q}\). * Unique causality from \(Q_{i}\) to \(Q_{j}^{+}\) is the causality from \(Q_{i}\) that cannot be obtained from any other individual variable \(Q_{k}\) with \(k\neq i\). Redundant and unique causalities must depend only on probability distributions based on \(Q_{i}\) and \(Q_{j}^{+}\), i.e., \(p(q_{i},q_{j}^{+})\). * Synergistic causality from \(\mathbf{Q_{i}}=[Q_{i_{1}},Q_{i_{2}},\ldots]\) to \(Q_{j}^{+}\) is the causality arising from the joint effect of the variables in \(\mathbf{Q}_{t}\). Synergistic causality must depend on the joint probability distribution of \(\mathbf{Q}_{t}\) and \(Q_{j}^{+}\), i.e., \(p(\mathbf{q}_{t},\mathbf{q}_{j}^{+})\). For a given state \(Q_{j}^{+}=q_{j}^{+}\), the redundant, unique, and synergistic specific information are formally defined as follows: \(\bullet\) The specific redundant causality is the _increment_ in information gained about \(q_{j}^{+}\) that is common to all the components of \(\mathbf{Q}_{\mathbf{j}_{k}}\): \[\Delta\tilde{t}_{\mathbf{j}_{k}}^{R}=\begin{cases}\tilde{t}_{i_{k}}-\tilde{t}_{i_{ k-1}},&\text{for }\tilde{t}_{i_{k}},\tilde{t}_{i_{k-1}}\in\tilde{\mathcal{G}}^{1}\text{ and }k\neq n_{1}\\ 0,&\text{otherwise},\end{cases}\] (A.1) where we take \(\tilde{t}_{i_{0}}=0\), \(\mathbf{j}_{k}=[j_{k1},j_{k2},\ldots]\) is the vector of indices satisfying \(\tilde{t}_{j_{k1}}\geqslant\tilde{t}_{i_{k}}\) for \(\tilde{t}_{j_{k1}},\tilde{t}_{i_{k}}\in\tilde{\mathcal{G}}^{1}\), and \(n_{1}\) is the number of elements in \(\tilde{\mathcal{G}}^{1}\). 
\(\bullet\) The specific unique causality is the _increment_ in information gained by \(Q_{i_{k}}\) about \(q_{j}^{+}\) that cannot be obtained by any other individual variable: \[\Delta\tilde{t}_{i_{k}}^{U}=\begin{cases}\tilde{t}_{i_{k}}-\tilde{t}_{i_{k-1}},&\text{for }i_{k}=n_{1},\ \tilde{t}_{i_{k}},\tilde{t}_{i_{k-1}}\in\tilde{ \mathcal{G}}^{1}\\ 0,&\text{otherwise}.\end{cases}\] (A.2) \(\bullet\) The specific synergistic causality is the _increment_ in information gained by the combined effect of all the variables in \(\mathbf{Q}_{\tilde{t}_{k}}\) that cannot be gained by other combination of variables \(\mathbf{Q}_{\mathbf{j}_{k}}\) such that \(\tilde{t}_{j_{k}}\leqslant\tilde{t}_{i_{k}}\) for \(\tilde{t}_{i_{k}}\in\tilde{\mathcal{G}}^{M}\) and \(\tilde{t}_{j_{k}}\in\{\tilde{\mathcal{G}}^{1},\ldots,\tilde{\mathcal{G}}^{M}\}\) with \(M>1\): \[\Delta\tilde{t}_{i_{k}}^{S}=\begin{cases}\tilde{t}_{i_{k}}-\tilde{t}_{i_{k-1}},&\text{for }\tilde{t}_{i_{k-1}}\geqslant\max\{\tilde{\mathcal{G}}^{M-1}\},\text{ and }\tilde{t}_{i_{k}},\tilde{t}_{i_{k-1}}\in\tilde{\mathcal{G}}^{M}\\ \tilde{t}_{i_{k}}-\max\{\tilde{\mathcal{G}}^{M-1}\},&\text{for }\tilde{t}_{i_{k}}>\max\{ \tilde{\mathcal{G}}^{M-1}\}>\tilde{t}_{i_{k-1}},\text{ and }\tilde{t}_{i_{k}},\tilde{t}_{i_{k-1}}\in\tilde{ \mathcal{G}}^{M}\\ 0,&\text{otherwise}.\end{cases}\] (A.3) ## Appendix B Additional validation cases ### Synchronization in logistic maps The one-dimensional logistic map is a recurrence given by relationship, \[Q_{1}(n+1)=\alpha_{1}Q_{1}(n)[1-Q_{1}(n)],\] (B.1) where \(n\) is the time step and \(\alpha_{1}\) is a constant. Equation (B.1) exhibits a chaotic behavior for \(\alpha_{1}\approx 3.57-4\)(May, 1976). We consider the three logistic maps: \[Q_{1}(n+1)=\alpha_{1}Q_{1}(n)[1-Q_{1}(n)],\] (B.2a) \[Q_{2}(n+1)=\alpha_{2}f_{12}[1-f_{12}],\] (B.2b) \[Q_{3}(n+1)=\alpha_{3}f_{123}[1-f_{123}],\] (B.2c) which are coupled by \[f_{12}=\frac{Q_{2}(n)+c_{1\to 2}Q_{1}(n)}{1+c_{1\to 2}},\] (B.3a) \[f_{123}=\frac{Q_{3}(n)+c_{12\to 3}Q_{1}(n)+c_{12\to 3}Q_{2}(n)}{1+2c_{12\to 3}},\] (B.3b) where \(\alpha_{1}=3.68\), \(\alpha_{2}=3.67\), and \(\alpha_{3}=3.78\) are constants, \(c_{1\to 2}\) is the parameter coupling \(Q_{2}\) with \(Q_{1}\), and \(c_{12\to 3}\) is the parameter coupling \(Q_{3}\) with \(Q_{2}\) and \(Q_{1}\). The clear directionality of the variables in this system for different values of \(c_{12\to 3}\) and \(c_{12\to 3}\) offers a simple testbed to illustrate the behavior of the IT-causality. The causal analysis is performed for one time-step lag after integrating the system for \(10^{8}\) steps. The phase-space was partitioned using 100 bins for each variables. First, we consider three cases with different degrees of coupling between \(Q_{1}\) and \(Q_{2}\) while maintaining \(Q_{3}\) uncoupled. The results are shown in figure 20. * Uncoupled systems (\(c_{12\to 3}=c_{12\to 3}=0\)). In this case, \(Q_{1}\), \(Q_{2}\), and \(Q_{3}\) are completely uncoupled and the only non-zero causalities are the self-induced unique components \(\Delta I^{U}_{1\to 1}\), \(\Delta I^{U}_{2\to 2}\), and \(\Delta I^{U}_{3\to 3}\), as shown by the left panels in figure 20 (red bars). * Intermediate coupling \(Q_{1}\to Q_{2}\) (\(c_{12\to 3}=0.1\) and \(c_{12\to 3}=0\)). In this case, the dynamics of \(Q_{2}\) are affected by \(Q_{1}\). This is shown in the center panels of figure 20 (blue bars) by the non-zero terms \(\Delta I^{R}_{12\to 2}\neq 0\), \(\Delta I^{U}_{1\to 1}\neq 0\) and \(\Delta I^{S}_{12\to 1}\neq 0\). 
The latter is the synergistic causality due to the combined effect of \(Q_{1}\) and \(Q_{2}\), which is a manifestation of the coupling term \(f_{1\to 2}\). We can also observed that \(\Delta I^{R}_{12\to 1}\neq 0\). Note that this is a redundant causality and does not necessarily imply that \(Q_{1}\) is affected by \(Q_{2}\). Instead, it should be interpreted as \(Q_{2}\) being able to inform about the future of \(Q_{1}\), which is expected as \(Q_{1}\) is contained in the right-hand side of the equation for \(Q_{2}\). As expected, the only non-zero causality for \(Q_{3}\) is again \(\Delta I^{U}_{3\to 3}\), as it is uncoupled from \(Q_{1}\) and \(Q_{2}\). * Strong coupling \(Q_{1}\to Q_{2}\) (\(c_{12\to 3}=1\) and \(c_{12\to 3}=0\)). Taking the limit \(c_{12\to 3}\to\infty\), it can be seen that \(Q_{2}\equiv Q_{1}\). It is also known that even for lower values of \(c_{12\to 3}\sim 1\), \(Q_{1}\) and \(Q_{2}\) synchronize and both variables exhibit identical dynamics (Diego _et al._, 2019). This is revealed by the right panels of figure 20 (yellow bars), where the only non-zero causalities are \(\Delta I^{R}_{12\to 1}\) and \(\Delta I^{R}_{12\to 2}\). As in the two previous cases, \(Q_{3}\) remains unaffected (\(\Delta I^{U}_{3\to 3}\neq 0\)). Next, we consider three additional cases in which \(Q_{3}\) is coupled with \(Q_{1}\) and \(Q_{2}\). The results are shown in figure 21. Figure 20: Redundant (R), unique (U) and synergistic (S) causalities for \(Q_{1}\), \(Q_{2}\), \(Q_{3}\) in coupled logistic maps for (left panels, in red) uncoupled variables, \(c_{12}=0\) and \(c_{123}=0\), (center panels, in blue) intermediate coupling \(Q_{1}\to Q_{2}\), \(c_{12}=0.1\) and \(c_{123}=0\) and (right panels, in yellow) strong coupling \(Q_{1}\to Q_{2}\)\(c_{12\to 3}=1\) and \(c_{12\to 3}=0\). * Strong coupling \(Q_{2},Q_{1}\to Q_{3}\) and no coupling \(Q_{1}\to Q_{2}\) (\(c_{12\to 3}=0\) and \(c_{123\to 3}=1\)). The results, included in the left panels of figure 21 (red bars), show that most of the causality to \(Q_{1}\) and \(Q_{2}\) is self-induced and unique (\(\Delta I_{1\to 1}^{U}\neq 0\) and \(\Delta I_{2\to 2}^{U}\neq 0\), respectively), with a small redundant contribution from \(Q_{3}\). This is consistent the fact that the dynamics of \(Q_{1}\) and \(Q_{2}\) do not depend on any other variable than themselves, but \(Q_{3}\) is coupled with \(Q_{1}\) and \(Q_{2}\), which results in small amount of redundant causality. There is a strong causality from \(Q_{1}\) and \(Q_{2}\) to \(Q_{3}\) in the form of synergistic causality, being \(\Delta I_{123\to 3}^{S}\) the dominant component consistent with the coupling term \(f_{123}\). * Intermediate coupling \(Q_{2},Q_{1}\to Q_{3}\) and \(Q_{1}\to Q_{2}\) (\(c_{12\to 3}=c_{123\to 3}=0.1\)). This is the most complex scenario since the variables do not synchronize yet they affect each other notably. The results are shown in the center panels of figure 21 (blue bars). The causalities to \(Q_{1}\) remain mostly independent from \(Q_{2}\) and \(Q_{3}\) except for the expected small redundant causalities. The causalities to \(Q_{2}\) and \(Q_{3}\) exhibit a much richer behavior, with multiple redundant and synergistic causalities. The fact that \(Q_{2}\) is not coupled to \(Q_{3}\) can be seen from the lack of unique causality \(\Delta I_{3\to 2}^{U}\). * Strong coupling \(Q_{2},Q_{1}\to Q_{3}\) and \(Q_{1}\to Q_{2}\) (\(c_{12\to 3}=1\) and \(c_{123\to 3}=1\)). 
In this case, the three variables synchronize such that \(\Delta I_{123\to 1}^{R}=\Delta I_{123\to 2}^{R}=\Delta I_{123\to 3}^{R}\neq 0\) (i.e., they can be interpreted as exact copies of each other). The results are shown in right panels of figure 21 (yellow bars). ### Example in coupled Rossler-Lorenz system We study a coupled version of the Lorenz system (Lorenz 1963) and the Rossler system (Rossler 1977). The former was developed by Lorenz as a simplified model of viscous Figure 21: Redundant (R), unique (U) and synergistic (S) causalities for \(Q_{1}\), \(Q_{2}\), \(Q_{3}\) in coupled logistic maps for (left panels, in red) uncoupled variables, \(c_{12}=0\) and \(c_{123}=1\), (center panels, in blue) intermediate coupling \(Q_{1}\to Q_{2}\), \(c_{12}=0.1\) and \(c_{123}=0.1\) and (right panels, in yellow) strong coupling \(Q_{1}\to Q_{2}\), \(c_{12\to 3}=1\) and \(c_{12\to 3}=1\). fluid flow. Rossler proposed a simpler version of the Lorenz's equations in order to facilitate the study its chaotic properties. The governing equations are \[\frac{\mathrm{d}Q_{1}}{\mathrm{d}t} =-6[Q_{2}+Q_{3}], \tag{14a}\] \[\frac{\mathrm{d}Q_{2}}{\mathrm{d}t} =6[Q_{1}+0.2Q_{2}],\] (14b) \[\frac{\mathrm{d}Q_{3}}{\mathrm{d}t} =6\left[0.2+Q_{3}[Q_{1}-5.7]\right],\] (14c) \[\frac{\mathrm{d}Q_{4}}{\mathrm{d}t} =10[Q_{5}-Q_{4}],\] (14d) \[\frac{\mathrm{d}Q_{5}}{\mathrm{d}t} =Q_{4}[28-Q_{6}]-Q_{5}+cQ_{2}^{2},\] (14e) \[\frac{\mathrm{d}Q_{6}}{\mathrm{d}t} =Q_{4}Q_{5}-\frac{8}{3}Q_{6}, \tag{14f}\] where \([Q_{1},Q_{2},Q_{3}]\) correspond to the Rossler system and \([Q_{4},Q_{5},Q_{6}]\) to the Lorenz system. The coupling is unidirectional from the Rossler system to the Lorenz system via \(Q_{2}\to Q_{5}\) and the parameter \(c\). This coupled system has previously been studied by Quiroga _et al._ (2000) and Krakovska _et al._ (2018). We use this case to study the behavior of IT-causality among four variables in a continuous dynamical system when some of the variables are hidden. The observable variables are \(\mathbf{Q}=[Q_{1},Q_{2},Q_{5},Q_{6}]\). The system was integrated for \(10^{6}t_{\mathrm{ref}}\) where \(t_{\mathrm{ref}}\) is the time-lag for which \(I(Q_{1}^{+};Q_{1})/I(Q_{1};Q_{1})=0.5\). The time-lag selected for causal inference is \(\Delta T\approx t_{\mathrm{ref}}\) and the 50 bins per variable were used to partition the phase space. The results for uncoupled systems (\(c=0\)) are shown in figure 22. The left panel portrays a typical trajectories of the systems. The causalities are shown in the right panel, where red and blue colors are used to represent causalities exclusive to the Rossler system (i.e., only involving \(Q_{1}\) and \(Q_{2}\)) and Lorenz system (i.e., only involving \(Q_{5}\) and \(Q_{6}\)), respectively. Unsurprisingly, IT-causality shows that both systems are uncoupled. Moreover, the unique, redundant and synergistic causal structure identified in the Rossler and Lorenz systems are consistent with structure of Eq. (14a). The information leak is roughly 25% due to the unobserved variables and the uncertainty introduced by partitioning the observable phase-space. The results for the coupled system (\(c=2\)) are shown in figure 23. The left panel shows how the trajectory of the Lorenz system is severely impacted by the coupling. The new causalities are shown in the right panel. 
As before, red and blue colors are used to represent causalities exclusive to the Rossler system (i.e., only involving \(Q_{1}\) and \(Q_{2}\)) and Lorenz system (i.e., only involving \(Q_{5}\) and \(Q_{6}\)), respectively, and yellow color is used for causalities involving variables from both systems. The causalities in the Rossler remain comparable to the uncoupled case besides some small redundancies and synergies due to the effect of unobserved variables. On the contrary, the causalities in the Lorenz system undergo deeper changes. This is evidenced by the emergence of multiple synergistic causalities involving \(Q_{1}\) and \(Q_{2}\). This effect is consistent with the coupling of both systems. The emergence of new redundant and synergistic causalities can be understood as a more complex manifestation of the effect seen in the toy problem from figure 5: the combination of variables yields the creation of redundancies synergies, where the latter dominate. ## Appendix C Sensitivity of IT-causality to sample size and partition refinement We investigate the sensitivity of IT-causality to the number of samples (\(N_{\text{samples}}\)) used to estimate the probability distributions and the number of bins employed to partition the range of values of the variables (\(N_{\text{bins}}\)). The Lorenz system is used as a testbed: \[\frac{\text{d}Q_{1}}{\text{d}t} =10[Q_{2}-Q_{1}],\] (C 1 \[a\] ) \[\frac{\text{d}Q_{2}}{\text{d}t} =Q_{1}[28-Q_{3}]-Q_{2}\] (C 1 \[b\] ) \[\frac{\text{d}Q_{3}}{\text{d}t} =Q_{1}Q_{2}-\frac{8}{3}Q_{3}.\] (C 1 \[c\] ) The system was integrated over time to collect \(N_{\text{samples}}=5\times 10^{3},5\times 10^{4},5\times 10^{5}\), and \(5\times 10^{8}\) events after transients. Probability distributions were calculated using uniform bins with \(N_{\text{bins}}=10,50,100,\) and \(200\) per variable. Our primary focus is on causalities to \(Q_{1}\), but the conclusions drawn also apply to \(Q_{2}\) and \(Q_{3}\). The sensitivity to \(N_{\text{samples}}\) is displayed in figure 24(a), where \(N_{\text{samples}}\) varies while maintaining \(N_{\text{bins}}=50\) constant. For \(N_{\text{samples}}>5\times 10^{3}\), the changes in IT-causality remain Figure 22: Uncoupled Rössler-Lorenz system (\(c=0\)). The left panels show excerpts of the trajectories pertaining to Rössler systems \([Q_{1},Q_{2},Q_{3}]\) (top) and Lorenz system \([Q_{4},Q_{5},Q_{6}]\) (bottom). The right panels show the redundant (R), unique (U), and synergistic (S) causalities among \([Q_{1},Q_{3},Q_{4},Q_{6}]\). The causalities are ordered from left to right according to \(N_{t\to j}^{\alpha}\). within a few percentage points of difference. The sensitivity to the size of the partition is assessed in figure 24(b), where \(N_{\text{bins}}\) varies while \(N_{\text{samples}}\) is held constant at \(N_{\text{samples}}=5\times 10^{5}\). The IT-causalities exhibit quantitative resemblance for all partitions, with the exception of \(N_{\text{bins}}=10\), which may be too coarse to capture the continuous dynamics of the variables.
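The convergence check of this appendix can be reproduced in miniature. The sketch below (assuming NumPy and SciPy) integrates the Lorenz system of Eq. (C.1) and then evaluates a binned information measure, here the self mutual information of \(Q_1\) at a fixed lag, for several partition sizes. The sample count, time step, and lag are illustrative and much smaller than those of the appendix, so the finest partitions are expected to show the upward bias typical of under-sampled histograms.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, q):
    """Right-hand side of the Lorenz system, Eq. (C.1)."""
    q1, q2, q3 = q
    return [10.0 * (q2 - q1), q1 * (28.0 - q3) - q2, q1 * q2 - (8.0 / 3.0) * q3]

# Integrate and discard the initial transient (illustrative sample count).
t_eval = np.arange(0.0, 500.0, 0.01)
sol = solve_ivp(lorenz, (0.0, t_eval[-1]), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
q1 = sol.y[0, 5000:]

# Sensitivity of a binned information measure to the partition refinement:
# self mutual information I(Q1(t + dT); Q1(t)) for several numbers of bins.
lag = 50
for n_bins in (10, 50, 100, 200):
    pxy, _, _ = np.histogram2d(q1[:-lag], q1[lag:], bins=n_bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    print(n_bins, np.sum(pxy[m] * np.log2(pxy[m] / (px @ py)[m])))
```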
We introduce an information-theoretic approach for quantifying causality, referred to as IT-causality, which quantifies causality as the information gained about future events from the knowledge of past events. Causal interactions are classified into redundant, unique, and synergistic contributions according to their nature. The formulation is non-intrusive, invariant under invertible transformations of the variables, and accounts for the causality missing due to unobserved variables. The method only requires pairs of past and future events of the quantities of interest, which makes it convenient for both computational simulations and experimental investigations. IT-causality is validated in four scenarios representing fundamental causal relationships among variables: a mediator, a confounder, a redundant collider, and a synergistic collider. The approach is then leveraged to address two questions relevant to turbulence research: i) the scale locality of the energy cascade in isotropic turbulence, and ii) the interaction between inner- and outer-layer motions in wall-bounded turbulence.
2301.13817
Patch Gradient Descent: Training Neural Networks on Very Large Images
Traditional CNN models are trained and tested on relatively low resolution images (<300 px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows to train the existing CNN architectures on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that instead of performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. PatchGD thus extensively enjoys better memory and compute efficiency when training models on large scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient-descent method in handling large images, and especially when the compute memory is limited.
Deepak K. Gupta, Gowreesh Mago, Arnav Chavan, Dilip K. Prasad
2023-01-31T18:04:35
http://arxiv.org/abs/2301.13817v1
# Patch Gradient Descent: Training Neural Networks ###### Abstract Traditional CNN models are trained and tested on relatively low resolution images (\(<300\) px), and cannot be directly operated on large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows to train the existing CNN architectures on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that instead of performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. PatchGD thus extensively enjoys better memory and compute efficiency when training models on large scale images. PatchGD is thoroughly evaluated on two datasets - PANDA and UltraMNIST with ResNet50 and MobileNetV2 models under different memory constraints. Our evaluation clearly shows that PatchGD is much more stable and efficient than the standard gradient-descent method in handling large images, and especially when the compute memory is limited. ## 1 Introduction Convolutional neural networks (CNNs) are considered among the most vital ingredients for the rapid developments in the field of computer vision. This can be attributed to their capability of extracting very complex information far beyond what can be obtained from the standard computer vision methods. For more information, we refer the reader to the recently published comprehensive reviews (Khan et al., 2020; Li et al., 2021; Alzubaidi et al., 2021). With the recent technological developments, very large images are obtained from data acquisition in the fields of microscopy (Khater et al., 2020; Schermelleh et al., 2019), medical imaging (Aggarwal et al., 2021), and earth sciences (Huang et al., 2018; Amani et al., 2020), among others. Recently, there has been a drive to use deep learning methods in these fields as well. In particular, several deep learning methods have been proposed to handle the images from the microscopy domain (Orth et al., 2017; Dankovich and Rizzoli, 2021; Sekh et al., 2020, 2021), however, the big data challenge of applying CNNs to analyze such images is immense, as we demonstrate in Figure 1. High content nanoscopy involves taking nanoscopy images of several adjacent fields-of-view and stitching them side-by-side to have a full perspective of the biological sample, such as a patient's tissue biopsy, put under the microscope. There is information at multiple scales embedded in these microscopy images (Villegas-Hernandez et al., 2022), with the smallest scale of features being only a few pixels in size. Indeed, such dimensions of images and levels of details are a challenge for the existing CNNs. Figure 1: Example nanoscopy image (left) of a mouse kidney cryosection approximately 1/12th of the area of a single field-of-view of the microscope, chosen to illustrate the level of details at different scales. The bottom right images show that the smallest features in the image of relevance can be as small as a few pixels (here 5-8 pixels for the holes)(Villegas-Hernández et al., 2022). Existing deep learning models using CNNs are predominantly trained and tested on relatively low resolution regime (less than \(300\times 300\) pixels). 
This is partly because the widely used image benchmarking datasets such as ILSVRC (ImageNet dataset) [10] for classification and PASCAL VOC [1] for object detection/segmentation consist of low-resolution images in a similar range, and most of the existing research has been towards achieving state-of-the-art (SOTA) results on these or similar datasets. Using these models on high-resolution images leads to quadratic growth of the associated activation size, and this in turn leads to a massive increase in the training compute as well as the memory footprint. Further, when the available GPU memory is limited, such large images cannot be processed by CNNs. There exist very limited works that address the issue of handling very large images using CNNs. The most common approach among these is to reduce the resolution of the images through downscaling. However, this can lead to a significant loss of information associated with the small-scale features, and it can adversely affect the semantic context associated with the image. An alternate strategy is to divide the image into overlapping or non-overlapping tiles and process the tiles in a sequential manner. However, this approach does not assure that the semantic link across the tiles will be preserved, and it can hinder the learning process. Several similar strategies exist that attempt to learn the information contained in the large images; however, their failure to capture the global context limits their use. In this paper, we present a novel CNN training pipeline that is capable of handling very large images. We point out here that 'large images' should not be plainly interpreted in terms of the number of pixels that they comprise; rather, an image should be considered too large to be trained with CNNs if the respective computational memory budget available for it is small. For example, while training a ResNet50 classification model with images of size \(10,000\times 10,000\) might be hardly possible on a GPU card of 48 GB memory, a GPU memory of 12 GB could be good enough to train the same model on \(512\times 512\) size images. Further, when the same \(512\times 512\) size images are trained under a GPU memory limit of 4 GB, these might be looked at as too large. Figure 2 presents a better understanding of the problem outlined above. We consider here the task of classification of the UltraMNIST digits [11] into one of the 10 predefined classes labelled from 0-9. UltraMNIST images used here comprise 3-5 MNIST digits of extremely varying scale, and the sum of the digits ranges between 0-9. The label class of each image corresponds to the sum of the contained digits. More details related to the UltraMNIST classification problem are presented in Appendix B.2. We consider here images of size \(512\times 512\) pixels and pose the problem to be solved at two different computational memory budgets. We consider the two cases of GPU memory limits of 4 GB and 16 GB. For the base CNN model, we use the ResNet50 [12] architecture and employ the standard training approach. We refer to this approach as Gradient Descent (GD). We further present results obtained using the proposed training pipeline, referred to as _PatchGD_. Short for Patch Gradient Descent, it is a scalable training method designed to build neural networks with either very large images, very low memory compute, or a combination of both. The efficacy of PatchGD is evident from the results in Figure 2, where PatchGD outperforms the conventional GD method for both the 16 GB and 4 GB memory limits.
While the difference in performance is 4% at 16 GB, it grows to a remarkable margin of 13% difference in the accuracy measure at 4 GB. The classification problem at 4 GB memory compute is intended to replicate the real-world challenges when dealing with large images. With only 4 GB in hand, the image size of \(512\times 512\) is already too large to be used for training a ResNet50 model, and this leads to the inferior performance shown in Figure 2. However, PatchGD is stable even at this low memory regime, and this can be attributed to its design that makes it invariant to image size to a large extent. We describe the details of the method later in the paper as well as demonstrate through experimental results on a variety of image sizes that PatchGD is capable of adapting the existing CNN models to work with very large images even if the available GPU memory is limited. **Contributions.** To summarize, the contributions of this paper can be listed as follows. * We present _Patch Gradient Descent (PatchGD)_, a novel strategy to train neural networks on very large images in an end-to-end manner. Figure 2: Performance comparison of standard CNN and PatchGD (ours) for the task of classification of UltraMNIST digits of size \(512\times 512\) pixels using ResNet50 model. Two different computational memory budgets of 16 GB and 4GB are used, and it is demonstrated that PatchGD is relatively stable for the chosen image size, even for very low memory compute. * Due to its inherent ability to work with small fractions of a given image, PatchGD is scalable on small GPUs, where training the original full-scale images may not even be possible. * PatchGD reinvents the existing CNN training pipeline in a very simplified manner and this makes it compatible with any existing CNN architecture. Moreover, its simple design allows it to benefit from the pre-training of the standard CNNs on the low-resolution data. ## 2 Related Work This paper aims at improving the capability of CNNs in handling large-scale images in general. To our knowledge there is only very limited research work in this direction and we discuss them in this section. Most works that exist focus on histopathological datasets since these are popular sources of large images. The majority of existing works employ pixel-level segmentation masks, which are not always available. For example, Iizuka et al. (2020); Liu et al. (2017) perform patch-level classification based on labels created from patchwise segmentation masks available for the whole slide images (WSI), and then feed it to a RNN to obtain the final WSI label. Braatz et al. (2022) use goblet cell segmentation masks to perform patch-level feature extraction. However, these approaches require labelled segmentation data, are computationally expensive, feature learning is very limited, and the error propagation is higher. Another set of methods focus on building a compressed latent representation of the large input images using existing pretrained models or unsupervised learning approaches. For example, Lai et al. (2022) use U-Net autoencoder and stack them into a cube, which is then fed to another module to obtain slide-level predictions. Tellez et al. (2018) explore the use of different encoding strategies including reconstruction error minimization, contrastive learning and adversarial feature learning to map high-resolution patches to a lower-dimensional vector. Tellez et al. 
(2020) extend this work and use multi-task learning to get better representations of patches than their unsupervised counterparts. One important limitation of this class of methods is that the encoding network created from unsupervised learning is not always a strong representative of the target task. There exist several methods that use pretrained models derived from other tasks as feature extractors, and the output is then fed to a classifier. Example methods include using Cancer-Texture Network (CAT-Net) and Google Brain (GB) models as feature extractors (Kosaraju et al., 2022), or additionally using similar datasets for fine-tuning (Brancati et al., 2021). Although these methods gain advantage from transfer learning, such two-stage decoupled pipelines propagate errors through under-represented features and the performance of the model on the target task is hampered. In this paper, we propose a single-step approach that can be trained in an end-to-end manner on the target task. Several research works have focused on identifying the right patches from the large images and using them in a compute-effective manner to classify the whole image. Naik et al. (2020) propose to construct the latent space using randomly selected tiles; however, this approach does not preserve the semantic coherence across the tiles and fails to extract features that are spread across multiple tiles. Campanella et al. (2019) consider this as a multi-instance learning approach, assigning labels to top-K probability patches for classification. Pinckaers et al. (2022); Huang et al. (2022) propose patch-based training, but make use of streaming convolution networks. Sharma et al. (2021) cluster similar patches and perform cluster-aware sampling for WSI and patch classification. Cordonnier et al. (2021) use a patch scoring mechanism and a patch aggregator network for the final prediction; however, they perform downsampling for patch scoring, which may cause loss of patch-specific features important for WSI. Papadopoulos et al. (2021) progressively increase the resolution and localize the regions of interest, dropping the rest, which is equivalent to performing hard adaptive attention. DiPalma et al. (2021) train a teacher model at high resolution and perform knowledge distillation for the same model at lower resolution. Katharopoulos and Fleuret (2019) perform attention sampling on a downsampled image and derive an unbiased estimator for the gradient update. However, their method involves downsampling for attention, which may lose some vital information. It is important to note that all such methods which employ patch selection and knowledge distillation are orthogonal to our work and can be easily combined with it. However, this is beyond the scope of this paper. With the recent popularity of Transformer-based methods for vision tasks, Chen et al. (2022) proposed a self-supervised learning objective for pre-training large-scale vision transformers at varying scales. Their method involves a hierarchical vision transformer which leverages the natural hierarchical structure inherent in WSI. However, their method requires a massive pre-training stage, which is not always feasible. Also, their method is specific to WSI rather than more general image classification and involves training multiple large-scale transformers. Our method, on the other hand, targets the more general image classification task and does not involve large-scale pre-training; rather, it works directly with any existing CNN model.
## 3 Approach ### General description _Patch Gradient Descent (PatchGD)_ is a novel CNN training strategy that can train networks with high-resolution images. It is based on the hypothesis that, rather than performing gradient-based updates on an entire image at once, it should be possible to achieve a good solution by performing model updates on only small parts of the image at a time, ensuring that the majority of it is covered over the course of iterations. However, even if only a portion of the image is used, the model is still trainable end-to-end with PatchGD. Figure 3 presents a schematic explanation of the PatchGD method. At the core of PatchGD lies the construction or filling of \(\mathbf{Z}\) block, a deep latent representation of the full input image. Irrespective of which parts of the input are used to perform model updates, \(\mathbf{Z}\) builds an encoding of the full image based on information acquired for different parts of it from the previous few update steps. We further explain the use of the \(\mathbf{Z}\) block using the diagram shown in Figure 2(a). As can be seen, \(\mathbf{Z}\) is primarily an encoding of an input image \(\mathbf{X}\) obtained using any given model parameterized with weights \(\mathbf{\theta}_{1}\). The input image is divided into \(m\times n\) patches and each patch is processed as an independent image using \(\mathbf{\theta}_{1}\). The size of \(\mathbf{Z}\) is always enforced to be \(m\times n\times s\), such that patch \(\mathbf{x}_{ij}\) in the input space corresponds to the respective \(1\times 1\times s\) segment in the \(\mathbf{Z}\) block. The process of \(\mathbf{Z}\)-filling spans over multiple steps, where every step involves sampling \(k\) patches and their respective positions from \(\mathbf{X}\) and passing them as a batch to the model for processing. The output of the model combined with the positions are then used to fill the respective parts of \(\mathbf{Z}\). Once all the \(m\times n\) patches of \(\mathbf{X}\) are sampled, the filled form of \(\mathbf{Z}\) is obtained. The concept of filling \(\mathbf{Z}\) is employed by PatchGD during model training as well as inference stages. To build an end-to-end CNN model, we add a small subnetwork comprising convolutional and fully-connected layers that processes the information contained in \(\mathbf{Z}\) and transforms it into a vector of \(c\) probabilities as desired for the task of classification. It is important to note that the cost of adding this small sub-network is generally negligible. The pipelines for model training and inference are shown in Figure 2(b). During training, model components \(\mathbf{\theta}_{1}\) as well as \(\mathbf{\theta}_{2}\) are updated. Based on a fraction of patches sampled from the input image, the respective encodings are computed using the latest state of \(\mathbf{\theta}_{1}\) and the output is used Figure 3: Schematic representations of the pipelines demonstrating working of different components of the PatchGD process. to update the corresponding entries in the already filled \(\mathbf{Z}\). The partially updated \(\mathbf{Z}\) is then used to further compute the loss function value and the model parameters are updated through back-propagation. ### Mathematical formulation In this section, we present a detailed mathematical formulation of the proposed PatchGD approach and describe its implementation for the model training and inference steps. 
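Before the formal treatment, the \(\mathbf{Z}\)-filling step described in Section 3.1 can be sketched in a few lines of PyTorch-style code. The toy encoder and the exhaustive loop over all \(m\times n\) patches (rather than sampling \(k\) patches per step) are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

def fill_z(image, f_theta1, patch_size, s):
    """image: (C, M, N) tensor -> Z: (s, m, n) latent grid (channel-first)."""
    C, M, N = image.shape
    m, n = M // patch_size, N // patch_size
    Z = torch.zeros(s, m, n)
    with torch.no_grad():                 # plain filling needs no gradients
        for i in range(m):
            for j in range(n):
                patch = image[:, i*patch_size:(i+1)*patch_size,
                                 j*patch_size:(j+1)*patch_size]
                Z[:, i, j] = f_theta1(patch.unsqueeze(0)).squeeze(0)
    return Z

# Toy encoder producing s = 8 features per 128 px patch (stand-in for f_theta1).
toy_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())
Z = fill_z(torch.rand(3, 512, 512), toy_encoder, patch_size=128, s=8)
print(Z.shape)  # torch.Size([8, 4, 4])
```

During training, the same mechanism is reused, except that only the freshly sampled patches are re-encoded with gradients enabled, as formalized next.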
For the sake of simplicity, we tailor the discussion towards training of a CNN model for the task of classification. Let \(f_{\mathbf{\theta}}:\mathbb{R}^{M\times N\times C}\to\mathbb{R}^{c}\) denote a CNN-based model parameterized by \(\mathbf{\theta}\) that takes an input image \(\mathbf{X}\) of spatial size \(M\times N\) and \(C\) channels, and computes the probability of it to belong to each of the \(c\) pre-defined classes. To train this model, the following optimization problem is solved. \[\underset{\mathbf{\theta}}{\text{min}}\ \ \mathcal{L}(f(\mathbf{\theta};\mathbf{X}), \mathbf{y}), \tag{1}\] where \(\{\mathbf{X},\mathbf{y}\}\in\mathcal{D}\) refers to the data samples used to train the network and \(\mathcal{L}(\cdot)\) denotes the loss function associated with the training. Traditionally, this problem is solved in deep learning using the popular mini-batch gradient descent approach where updates are performed at every step using only a fraction of the data samples. We present below the formulation of standard gradient descent followed by the formulation our PatchGD method. **Gradient Descent (GD).** Gradient descent in deep learning involves performing model updates using the gradients computed for the loss function over one or more image samples. With updates performed over one sample at a time, referred as stochastic gradient descent method, the model update at the \(i^{\text{th}}\)step can be mathematically stated as \[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\alpha\frac{\mathrm{d}\mathcal{L}}{ \mathrm{d}\mathbf{\theta}^{(i-1)}}, \tag{2}\] where \(\alpha\) denotes the learning rate. However, performing model updates over one sample at a time leads to very slow convergence, especially because of the noise induced by the continuously changing descent direction. This issue is alleviated in mini-batch gradient descent method where at every step, the model weights are updated using the average of gradients computed over a batch of samples, denoted here as \(\mathcal{S}\). Based on this, the update can be expressed as \[\mathbf{\theta}^{(i)}=\mathbf{\theta}^{(i-1)}-\frac{\alpha}{N(\mathcal{S})}\sum_{ \mathbf{X}\in\mathcal{S}}\frac{\mathrm{d}\mathcal{L}^{(\mathbf{X})}}{\mathrm{d }\mathbf{\theta}^{(i-1)}} \tag{3}\] and \(N(S)\) here denotes the size of the batch used. As can be seen in Eq. 3, if the size of image samples \(s\in\mathcal{S}\) is very large, it will lead to large memory requirements for the respective activations, and under limited compute availability, only small values of \(N(\mathcal{S})\), sometimes even just 1 fits into the GPU memory. This should clearly demonstrate the limitation of the gradient descent method, when handling large images. This issue is alleviated by our PatchGD approach and we describe it next. **PatchGD.** As described in Section 3.1, PatchGD avoids model updates on an entire image sample in one go, rather it computes gradients using only part of the image and updates the model parameters. In this regard, the model update step of PatchGD can be stated as \[\mathbf{\theta}^{(i,j)}=\mathbf{\theta}^{(i,j-1)}-\frac{\alpha}{k\cdot N(\mathcal{S}_{ i})}\sum_{\mathbf{X}\in\mathcal{S}_{i}}\sum_{p\in\mathcal{P}_{\mathbf{X},j}} \frac{\mathrm{d}\mathcal{L}^{(\mathbf{X},p)}}{\mathrm{d}\mathbf{\theta}^{(i,j-1)}}. \tag{4}\] In the context of deep learning, \(i\) here refers to the index of the mini-batch iteration within a certain epoch. 
Further, \(j\) denotes the inner iterations, where at every inner iteration, \(k\) patches are sampled from the input image \(\mathbf{X}\) (denoted as \(\mathcal{P}_{\mathbf{X},j}\)) and the gradient-based updates are performed as stated in Eq. 4. Note that for any iteration \(i\), multiple inner iterations are run, ensuring that the majority of samples from the full set of patches that are obtained from the tiling of \(\mathbf{X}\) are explored. In Eq. 4, \(\mathbf{\theta}^{(i,0)}\) denotes the initial model to be used to start running the inner iterations on \(\mathcal{S}_{i}\) and is equal to \(\mathbf{\theta}^{(i-1,\zeta)}\), the final model state after \(\zeta\) inner iterations of patch-level updates using \(\mathcal{S}_{i-1}\). For a more detailed understanding of the step-by-step model update process, please see Algorithm 1. As described earlier, PatchGD uses an additional sub-network that looks at the full latent encoding \(\mathbf{Z}\) for any input image \(\mathbf{X}\). Thus the parameter set \(\mathbf{\theta}\) is extended as \(\mathbf{\theta}=[\mathbf{\theta}_{1},\mathbf{\theta}_{2}]^{\intercal}\), where the base CNN model is \(f_{\mathbf{\theta}_{1}}\) and the additional sub-network is denoted as \(g_{\mathbf{\theta}_{2}}\). Since the frequency of parameter updates affects the convergence process, we have observed that performing a gradient update per inner-iteration sometimes leads to poor convergence. Thus, we introduce gradient accumulation over \(\epsilon\) steps and update the model accordingly. Note that gradients are allowed to backpropagate only through those parts of \(\mathbf{Z}\) that are active at the \(j^{\text{th}}\) inner-iteration. During the inference phase, \(\mathbf{Z}\) is filled using the optimized \(f_{\mathbf{\theta}_{1}^{*}}\) as stated in Algorithm 2, and then the filled version of \(\mathbf{Z}\) is used to compute the class probabilities for input \(\mathbf{X}\) using \(g_{\mathbf{\theta}_{2}^{*}}\). The full training procedure for one iteration is summarized below.

```
Input: batch of input images 𝒳 ∈ R^{B×M×N×C}, pre-trained feature extractor f_θ1,
       classifier head g_θ2, patch size p, inner iterations ζ, patches per inner
       iteration k, batch size B, learning rate α, gradient accumulation steps ε
Initialize: Z = 0^{B×m×n×s};  U1 = 0, U2 = 0
Z ← Z-filling(X, f_θ1, p)  for X ∈ 𝒳
f_θ1 ← start_gradient(f_θ1)
for j = 1 to ζ do
    for X in 𝒳 do
        {P_{X,j}, v} = patch_sampler(X, k),  P_{X,j} ∈ R^{p×p×C×k}
        z = f_θ1(P_{X,j})
        Z[v] = z                       // update the positional embeddings
        y_pred = g_θ2(Z)
        L = calculate_loss(y, y_pred)
        U1 = U1 + dL/dθ1,  U2 = U2 + dL/dθ2
    end for
    if j % ε == 0 then
        U1 = U1/ε,  U2 = U2/ε
        θ1 = θ1 − α·U1
        θ2 = θ2 − α·U2
        U1 = 0, U2 = 0
    end if
end for
```
**Algorithm 1** Model Training for 1 iteration
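To make the listing above concrete, the following is a minimal PyTorch-style sketch of one outer iteration. It assumes a backbone `f1` that maps a batch of patches to \(s\)-dimensional embeddings, a small head `g2` that maps the grid \(\mathbf{Z}\) to class logits, and an optimizer over both parameter sets; the helper names, the cross-entropy loss, and sampling the same patch positions for the whole batch are illustrative assumptions rather than the authors' released implementation.

```python
import torch

def patchgd_iteration(images, labels, f1, g2, opt, Z, patch, k, zeta, eps):
    """One outer PatchGD iteration. images: (B, C, M, N); Z: (B, s, m, n), pre-filled.
    opt must optimize the parameters of both f1 and g2."""
    B, C, M, N = images.shape
    m, n = M // patch, N // patch
    loss_fn = torch.nn.CrossEntropyLoss()
    opt.zero_grad()
    for j in range(1, zeta + 1):
        # sample k random patch positions (here shared across the batch)
        idx = torch.randperm(m * n)[:k]
        Z = Z.detach()                       # gradients flow only through active patches
        for r, c in zip((idx // n).tolist(), (idx % n).tolist()):
            patches = images[:, :, r*patch:(r+1)*patch, c*patch:(c+1)*patch]
            Z[:, :, r, c] = f1(patches)      # refresh the corresponding cell of Z
        loss = loss_fn(g2(Z), labels) / eps  # accumulate gradients over eps inner steps
        loss.backward()
        if j % eps == 0:
            opt.step()
            opt.zero_grad()
    return Z.detach()
```

Keeping the stale entries of \(\mathbf{Z}\) detached mirrors the remark above that only the patches active at the \(j^{\text{th}}\) inner-iteration receive gradients, while the division by \(\epsilon\) realizes the averaging of the accumulated updates.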
## 4 Experiments We demonstrate here the efficacy of PatchGD through multiple numerical experiments on two benchmark datasets comprising large images with features at multiple scales. ### Experimental setup **Datasets.** For the experiments presented in this paper, we consider two datasets: the UltraMNIST (Gupta et al., 2022) and Prostate cANcer graDe Assessment (PANDA) (Bulten et al., 2022) datasets. UltraMNIST is a classification dataset and each sample comprises 3-5 MNIST digits of varying scales placed at random locations in the image such that the sum of the digits lies between 0-9. The PANDA dataset comprises high-resolution histopathological images, and for this study, we consider a maximum image resolution of \(4096\times 4096\) pixels. Note that unlike the aforementioned approaches, we do not make use of any segmentation masks for PANDA. Therefore, the complete task boils down to taking an input high-resolution image and then classifying it into 6 categories based on the International Society of Urological Pathology (ISUP) grade groups. More details related to the datasets can be found in Appendix B. **CNN models.** We consider two popular CNN architectures: ResNet50 (He et al., 2016) and MobileNetV2 (Sandler et al., 2018). ResNet50 is a popular network from the residual networks family and forms the backbone of several models used in a variety of computer vision tasks (such as object detection and tracking). Thus, we demonstrate the working of PatchGD primarily on this model. MobileNetV2 is a light-weight architecture which is commonly employed for edge devices, and it would be of interest to see how it performs with large images under limited memory scenarios. **Implementation details.** We follow the same hyperparameters across our experiments for a fair comparison. Exact details are stated in Appendix C. We report classification accuracy and quadratic weighted kappa (QWK) for the PANDA dataset. PyTorch is the choice of framework to implement both the baselines and PatchGD. We follow 4 GB, 16 GB and 24 GB memory constraints to mimic the popular deep learning GPU memory limits. Latency is calculated on a 40 GB A100 GPU, completely filling the GPU memory.
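Since QWK is less familiar than plain accuracy, a short example of how it can be computed with scikit-learn is given below; the grade values are made-up placeholders, not results from the paper.

```python
from sklearn.metrics import cohen_kappa_score

# Quadratic weighted kappa: agreement between predicted and true ISUP grade groups,
# penalizing disagreements by the squared distance between the ordinal grades.
y_true = [0, 2, 5, 3, 1, 4]   # placeholder ground-truth grades (0-5)
y_pred = [0, 2, 4, 3, 1, 5]   # placeholder predictions
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```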
### Results **UltraMNIST classification.** The performance of PatchGD for UltraMNIST has already been shown in Figure 2. More detailed results are presented in Tables 1 and 2. For both the architectures, we see that PatchGD outperforms the standard gradient descent method (abbreviated as GD) by large margins. Our approach employs an additional sub-network \(g_{\mathbf{\theta}_{2}}\), and it can be argued that the gains reported in the paper are due to it. For this purpose, we extend the base CNN architectures used in GD and report the respective performance scores in Tables 1 and 2 as GD-extended. For both the architectures, we see that PatchGD outperforms GD as well as GD-extended by large margins. For ResNet50, the performance difference is even higher when we have a low memory constraint. At 4 GB, while GD seems unstable with a performance dip of more than 11% compared to the 16 GB case, our PatchGD approach seems to be significantly more stable. For MobileNetV2, the difference between PatchGD and GD is even higher at 16GB case, thereby clearly showing that PatchGD blends well with even light-weight models such as MobileNetV2. For MobileNetV2, we see that going from 16 GB to 4 GB, there is no drop in model performance, which demonstrates that MobileNetV2 can work well with GD even at low memory conditions. Nevertheless, PatchGD still performs significantly better. The underlying reason for this gain can partly be attributed to the fact that since PatchGD facilitates operating with partial images, the activations are small and more images per batch are permitted. We also observe that the performance scores of GD-extended are inferior compared to even GD. ResNet50 and MobilenetV2 are optimized architectures and we speculate that addition of plain convolutional layers in the head of the network is not suited due to which the overall performance is adversely affected. **Prostate Cancer Classification (PANDA).** Table 3 presents the results obtained on PANDA dataset for three different image resolutions. For all experiments, we maximize the number of images used per batch while also ensuring that the memory constraint is not violated. For images of \(512\times 512\), we see that GD as well as PatchGD deliver approximately similar performance scores (for both accuracy as well as QWK) at 16 GB memory limit. However, for the similar memory constraint, when images of size \(2048\times 2048\) (2K) pixels are used, the performance of GD drops by approximately 10% while our PatchGD shows a boost of 9% in accuracy. There are two factors that play role in creating such a big gap in the performance of GD and PatchGD. First, due to significantly increased activation size for higher-resolution images, GD faces the bottleneck of batch size and only 1 image per batch is permitted. Note that to stabilize it, we also experimented with gradient-accumulation across batches, however, it did not help. Alternatively, we performed hierarchical training, where the model trained on the lower resolution case was used as the initial model for the higher-resolution. To alleviate the issue of using only 1 image per batch, we considered a higher memory limit. Another reason for the low performance is that for higher-resolution images, the optimized receptive field of ResNet50 is not suited which leads to non-optimal performance. For increased batch size at 2K resolution, we also considered running quantized networks at half-precision and increased memory (see Table 3). 
At half-precision, the performance of GD improves, however, it is still significantly lower than PatchGD. Similar observation is made for 4K images that PatchGD performs better. The performance improves further when a patch size of 256 is used. Clearly, from the results reported on PANDA dataset, it is evident that PatchGD is significantly better than GD in terms of accuracy as well as QWK when it comes to handle large images in an end-to-end manner. We also report the latency of both the methods during inference time, and it can be seen that PatchGD performs almost at par with GD. The reason is that unlike GD, the activations produced by PatchGD are smaller and the gain in terms of speed from this aspect balance the slowness induced by patchwise processing of the images. Clearly for applications demanding to handle large images but also aiming to achieve real-time inference, PatchGD could be an interesting direction to explore further. **Additional study.** We demonstrated in the earlier experiments that PatchGD performs significantly better than its counterpart. We present here a brief study related to some of the hyperparameters involved in PatchGD. Table 4 presents the influence of patch sampling on the overall performance of PatchGD. We vary the sampling fraction per inner-iteration as well as the fraction of samples considered in total for an image in a certain iteration. We observe that keeping the sampling fraction per inner-iteration small helps to achieve better accuracy. This is counter-intuitive since smaller fractions provide a lesser context of the image in one go. We speculate that similar to mini-batch gradient \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \\ \hline GD & - & 16 & 65.2 \\ GD-extended & - & 16 & 50.5 \\ PatchGD & 256 & 16 & **69.2** \\ GD & - & 4 & 53.6 \\ GD-extended & - & 4 & 52.5 \\ PatchGD & 256 & 4 & **63.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance scores for standard Gradient Descent and our PatchGD method obtained using ResNet50 architectures on the task of UltraMNIST classification with images of size \(512\times 512\). \begin{table} \begin{tabular}{l c c c} \hline \hline Method & Patch size & Memory (in GB) & Accuracy \% \\ \hline GD & - & 16 & 67.3 \\ GD-extended & - & 16 & 64.3 \\ PatchGD & 256 & 16 & **83.7** \\ GD & - & 4 & 67.7 \\ GD-extended & - & 4 & 60.0 \\ PatchGD & 256 & 4 & **74.8** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance scores for standard Gradient Descent and our PatchGD method on the task of UltraMNIST classification with images of size \(512\times 512\) obtained using MobileNetV2 architecture. descent, not using too large patch batch size induces regularization noise, which in turn improves the convergence process. However, this aspect needs to be studied in more details for a better understanding. We also observed that the fraction of the image seen in one overall pass of the image in PatchGD does not generally affect the performance, unless it is low. For lower fractions, it is hard for the model to build the global context and the convergene is sub-optimal. We have also briefly studied the influence of gradient accumulation length parameter for PatchGD and the results are reported in Table 6 of the appendices. We observed that performing gradient-based model update per inner iteration leads to superior performance for the chosen experiment. However, the choice of \(\epsilon\) depends on the number of inner steps \(\zeta\). 
For large values of \(\zeta\), values greater than 1 are favored. For example, for the case of processing 2K resolution images with a patch size of \(128\times 128\), \(\epsilon=\zeta\) worked well. However, an empirical relation between \(\zeta\) and \(\epsilon\) is still to be identified, and this is a part of our future research work. ## 5 Discussion **Hyperparameter optimization and fine-tuning.** PatchGD involves several hyperparameters and their optimized combination is still to be identified. While we have demonstrated the influence through a few experiments, more clarity needs to be gained on the best number of inner-update steps to be combined in gradient accumulation (\(\epsilon\)), striking the right balance between patch size and the number of inner iterations for a given compute memory limit, as well as choosing the right pretraining strategy. We have observed that using the models trained with GD as the initial models in PatchGD can improve the overall performance. However, there are instances when model training with GD is not possible. In such scenarios, one could use low-resolution models trained on GD or even the conventional pretrained models. Nevertheless, the effect of each of these choices needs to be thoroughly studied. **Application to other tasks.** In this paper, we have focused on demonstrating the working of PatchGD on tasks of image classification, and in particular those where features exist at varying scales. However, this does not limit the applicability of our method to other problems. PatchGD can also be used on conventional classification problems, and we speculate that it could help to refine the receptive field of the existing models. We discuss this in more detail later in this paper. Beyond classification, it is also straightforward to adapt this method for other tasks such as segmentation and object detection, among others, and we intend to cover them in an extended version of this study later. **Limitations.** This paper presented the foundational concept of PatchGD. Although we have demonstrated the efficacy of PatchGD through multiple numerical experiments, the overall investigation is still limited in terms of understanding the generalization and stability of the method. Another minor limitation is that since our approach looks only at a fraction of an image in one step, it is relatively slower than the standard gradient descent method. However, since the inference speed is almost the same, this issue creates a bottleneck only when real-time training is a priority. **Conclusions.** In this paper, we have demonstrated that it is possible to handle large images with CNNs even when the available GPU memory is very limited. We presented Patch Gradient Descent (PatchGD), a novel CNN training strategy that performs model updates using only fractions of the image at a time while also ensuring that it sees almost the full context over the course of multiple steps. We have demonstrated through multiple experiments the efficacy of \begin{table} \begin{tabular}{c c c c} \hline \hline Sampling (\%) & Max Sampled (\%) & Accuracy & QWK \\ \hline 50 & 100 & 42.3 & 0.538 \\ 30 & 100 & 49.9 & 0.613 \\ 10 & 100 & 53.9 & 0.627 \\ 10 & 70 & 53.1 & 0.624 \\ 10 & 50 & 53.9 & 0.622 \\ 10 & 30 & 51.1 & 0.610 \\ \hline \hline \end{tabular} \end{table} Table 4: Sampling ablation on PANDA dataset. 
Memory limit is 16 GB, Image size and patch size are 2048 and 128 respectively \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Method & Resolution & Patch Size & Sampling \% & Mem. Constraint & \# Parameters (M) (G) & Latency (imgs/sec) & Accuracy \% & QWK \\ \hline GD & 512 & - & - & 16 & 23.52 & 618.05 & 44.4 & 0.558 \\ PatchGD & 512 & 128 & 30* & 16 & 26.39 & 521.42 & 44.9 & 0.576 \\ GD & 2048 & - & - & 16 & 23.52 & 39.04 & 34.8 & 0.452 \\ PatchGD & 2048 & 128 & 10 & 16 & 26.40 & 32.52 & 53.9 & 0.627 \\ GD-fp16 & 2048 & - & - & 24 & 23.52 & 39.04 & 50.6 & 0.658 \\ PatchGD-fp16 & 2048 & 128 & 10 & 24 & 26.40 & 32.52 & 56.1 & 0.662 \\ GD-fp16 & 4096 & - & - & 24 & 23.52 & 9.23 & 50.1 & 0.611 \\ PatchGD-fp16 & 4096 & 128 & 10 & 24 & 26.41 & 8.09 & 53.5 & 0.667 \\ PatchGD-fp16 & 4096 & 256 & 10 & 24 & 26.40 & 9.62 & 55.6 & 0.672 \\ \hline \hline \end{tabular} \end{table} Table 3: Performance scores obtained using Resnet50 on PANDA dataset for Gradient Descent (GD) and Patch Gradient Descent (PatchGD). In case of 512 image size, 10% sampling leads to only one patch, hence 30% patches are chosen. PatchGD in handling large images as well as operating under low memory conditions, and in all scenarios, our approach outperforms the standard gradient descent by significant margins. We hope that the details of the method as well as the experimental evidence presented in the paper sufficiently justify the significance of PatchGD in making existing CNN models work with large images without facing the bottleneck of compute memory. **Future work.** This paper has established the foundational concept of patch gradient descent to enable training CNNs using very large images. The results as well as insights presented in the paper open doors to several novel secondary research directions that could be interesting in terms of improving the efficacy as well as the acceptance of the presented method in a broader scientific community. Examples of these include extending PatchGD to work on gigapixel images at small compute memory, using PatchGD for enhanced receptive field on standard computer vision tasks, and lastly to couple PatchGD with transformers. Details on the associated challenges and possible modifications are further discussed in Appendix A. **Acknowledgement** We would like to thank Texmin Foundation for the financial support provided through grant PSF-IH-1Y-022 to support this work.
Traditional CNN models are trained and tested on relatively low-resolution images (<300 px) and cannot be applied directly to large-scale images due to compute and memory constraints. We propose Patch Gradient Descent (PatchGD), an effective learning strategy that allows existing CNN architectures to be trained on large-scale images in an end-to-end manner. PatchGD is based on the hypothesis that, instead of performing gradient-based updates on an entire image at once, a good solution can be obtained by updating the model on only small parts of the image at a time, ensuring that the majority of the image is covered over the course of iterations. PatchGD thus enjoys substantially better memory and compute efficiency when training models on large-scale images. PatchGD is thoroughly evaluated on two datasets, PANDA and UltraMNIST, with ResNet50 and MobileNetV2 models under different memory constraints.
2309.16195
Reconstructing microstructures from statistical descriptors using neural cellular automata
The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability.
Paul Seibert, Alexander Raßloff, Yichi Zhang, Karl Kalina, Paul Reck, Daniel Peterseim, Markus Kästner
2023-09-28T06:35:07
http://arxiv.org/abs/2309.16195v1
# Reconstructing microstructures from statistical descriptors using neural cellular automata ###### Abstract The problem of generating microstructures of complex materials in silico has been approached from various directions including simulation, Markov, deep learning and descriptor-based approaches. This work presents a hybrid method that is inspired by all four categories and has interesting scalability properties. A neural cellular automaton is trained to evolve microstructures based on local information. Unlike most machine learning-based approaches, it does not directly require a data set of reference micrographs, but is trained from statistical microstructure descriptors that can stem from a single reference. This means that the training cost scales only with the complexity of the structure and associated descriptors. Since the size of the reconstructed structures can be set during inference, even extremely large structures can be efficiently generated. Similarly, the method is very efficient if many structures are to be reconstructed from the same descriptor for statistical evaluations. The method is formulated and discussed in detail by means of various numerical experiments, demonstrating its utility and scalability. Keywords: Microstructure - Reconstruction - Descriptor - Neural cellular automata ## 1 Introduction The generation and analysis of random heterogeneous composite materials is a recently emerging research topic that aims at accelerating materials engineering by enabling digital workflows such as numerical simulation and inverse design [1]. Specifically, microstructure characterization and reconstruction allows to _(i)_ generate many microstructure realizations from a single example, _(ii)_ explore hypothetic materials by interpolating between microstructures in a morphologically meaningful manner, and _(iii)_ create 3D models from 2D observations. A multitude of approaches has been developed in the last decades that is summarized in different review articles [2, 3, 4]. For the purpose of this work, the existing approaches can be broadly divided in four categories1 - simulation, Markov random field, deep learning and descriptor-based approaches. Naturally, some algorithms in the literature can be identified as hybrid methods that fall into two or more of these categories. After discussing the main ideas of these categories and approaches in subsection 1.1, this work presents an algorithm that bridges all four categories and exhibits some very interesting properties as described in subsection 1.2. Footnote 1: Besides hybrid methods that fall into multiple categories, some exceptions like Wang tiles [5] do not clearly fall into any of the categories. ### Existing approaches for microstructure reconstruction Simulation-based approachesSimulating the microstructure evolution might be the most direct way. This requires to identify and to solve the physical (partial differential) equations (PDEs) that govern the process. An excellent overview is given in [2]. As an example, the Cahn-Hilliard equation describing phase separation [6] has been studied extensively [7, 8, 9]. Similarly, for granular structures, given a representative set of particles, realistic and dense packing can be achieved by simulating gravitational forces [10, 11, 12]. As a final, more complex example, grain formation in polystraline structures has been studied in depth. 
Simplified approaches reduce the description to vertices [13] or grain boundaries [14], whereas Monte Carlo methods [15] or cellular automata [16, 17, 18] are used to model the evolution of an entire 2D pixel field. Recently, neural cellular automata have been applied to solidification microstructure modeling [19]. Approaches based on the phase field method are probably the most developed. Thereby, the evolution of a diffuse indicator function is modeled by an additional differential equation [20, 21, 22] that can be solved, for example, in _OpenPhase_[23]. These approaches are often applied to simulate the complex microstructure morphologies that arise in additive manufacturing [24, 25, 26]. This non-exhaustive list indicates that a variety of physical processes are responsible for the formation of different material classes. Even if the relevant set of physical equations is selected, it can be challenging to perform the simulations due to numerical issues or difficulties in parameterizing the underlying constitutive models [27, 25]. This motivates the purely image-based approaches that are presented in the following. Markov-based reconstructionAs a first purely image-based method, this subsection discusses a class of reconstruction algorithms originally developed for computer graphics applica tions which are herein referred to as Markov-based approaches. For this purpose, it is worth noting that a microstructure can be modeled as a stationary Markov random field if the probability of finding a certain phase at a given location does not depend directly on the location, but only on the phase distribution in the local finite-size neighborhood. This assumption of locality and stationarity motivates reconstruction algorithms that directly rely on this conditional probability to update individual pixels based on their neighbor's values. A very simple implementation inspired by texture synthesis [28] might determine individual pixel updates by scanning the reference data for the given neighborhood in order to compute a probability [29; 30]. It is worth noting that this approach is akin to the multi-point statistics method that has been developed in the Geosciences literature [31] and has been applied and improved substantially by Tahmasebi [32; 33; 34; 35]. For a better scalability, improved algorithms precompute the probabilities for all neighborhoods and store them in efficient data structures for access during reconstruction [31; 36]. Direct sampling methods [37] as well as data structure-based alternatives are implemented in _MPSLIB_[38]. Despite a good local prediction quality, MRF-based approaches often fail to accurately reproduce long-range correlations. This behavior is related to the neighborhood size in the Markovian assumption: Capturing long-range correlations requires large neighborhood sizes, which are often unfeasible because of a disproportionately increased need for training data. Multigrid approaches [39; 35] have been shown to alleviate this issue to a certain extent. Furthermore, to condense the information to a compact model that is also able to interpolate missing neighborhood patterns from similar examples, supervised models have been trained to predict a pixel's phase given its neighborhood. In particular, decision trees [40] and neural networks [41; 42; 39] have been used for 2D and 3D [43] reconstruction. This motivates the discussion of purely deep learning-based approaches in the following subsection. 
Deep learning-based reconstructionIn deep learning-based methods, a generative model is fitted or trained on a sufficiently large data set of microstructures and is then used to sample new realizations of the same structure. Autoencoders [44; 42; 45] and generative adversarial networks (GANs) are typical examples that have been applied to MCR [46; 47]. For the latter, the merits of modifications like conditional GANs [48; 49], SytleGAN [50], and gradient penalty [51] have also been discussed in the context of microstructure generation. Applications to steels [52] and earth materials [53] show high image quality. Although GANs usually operate on 2D data, 3D-to-3D reconstruction can be achieved by using 3D convolutions [54; 55]. For reconstructing 3D data from 2D examples, a 3D generator has been combined with a 2D discriminator [56; 57]. As an alternative, the third dimension can be regarded as time by combining the GAN with a recurrent neural network [58]. To harness the advantage of both, autoencoders and GANs, they are sometimes combined by using the decoder simultaneously as a generator. This has proven advantageous for 2D-to-3D reconstruction [59; 60; 61] and for extremely small data sets [62]. As an alternative, machine learning methods like Bayesian approaches [63] and attention-based models [64; 65; 66] are equally applicable. Diffusion models, which have recently replaced GANs as state-of-the-art in general-purpose image generation, have also been applied to microstructure reconstruction [67; 68] and optimization [69; 70]. Much research is focused on identifying suitable model types and adapting them to microstructure reconstruction by enabling 2D-to-3D reconstruction [61; 57] making them applicable to small data sets [62] or ensuring that certain descriptor requirements are met [71; 72]. A major challenge lies in defining models with high accuracy that at the same time do not require large data sets to be trained on. These challenges motivate training-free models such as descriptor-based reconstruction, as presented in the next subsection. Descriptor-based reconstructionThe central idea behind descriptor-based reconstruction methods is to statistically quantify the microstructure morphology by means of descriptors like volume fractions and spatial \(n\)-point correlations [73]. Reconstructing a microstructure from a given set of descriptors can then be formulated as an optimization problem directly in the space of possible microstructures. Here, the desired microstructure descriptors can be computed from a single microstructure example, making these methods very data-efficient. One of the most well-known descriptor-based reconstruction methods is the Yeong-Torquato algorithm [73], which iteratively swaps individual pixels in the microstructure to solve the optimization problem. A detailed discussion is given in [74; 75]. This enables high flexibility, as descriptors can be replaced by new alternatives [76; 77] or higher-fidelity versions of the same descriptor [78; 79]. However, even with computationally inexpensive descriptors, the Yeong-Torquato algorithm becomes computationally challenging at high resolutions and in 3D, where billions of iterations are sometimes required for convergence [80]. A common solution is to use a multigrid scheme [81; 82; 83; 84; 85]. Further ideas include different-phase neighbor sampling rules [86], efficient descriptor updates [87; 80] and optimized directional weighing of correlation functions [78]. More information is given in [3]. 
As an alternative to the pixel-based Yeong-Torquato algorithm, the optimization problem can be formulated in a much lower-dimensional space. For this purpose, the microstructure is approximated by geometric objects that can be described by a few parameters, e.g., ellipsoidal inclusions [88; 89; 90] or Voronoi or Laguerre cells [91; 92; 93]. Independently from the microstructure representation [90], differentiable descriptors allow solving the optimization problem using a gradient-based optimizer. This idea is formulated as differentiable microstructure characterization and reconstruction (DMCR) [94; 95] and several approaches can be identified as special cases [71; 96; 97]. The Yeong-Torquato algorithm and improved versions of it, such as DMCR, have been successfully validated and applied to alloys and anisotropic metamaterials [85] sandstone [98], rock [99], chalk [100], various soils [101] and more. Some versions are publicly available in the open-source _MCRpy_ package [102]. While descriptor-based approaches are very accurate and data-efficient since no training data set is required, they are computationally intensive. More specifically, since the optimization is directly carried out in the microstructure space, the memory and computational requirements grow quickly as the microstructure size increases, especially in 3D. Hybrid reconstruction approachesThe specific and unique advantages and disadvantages of all four categories of MCR approaches motivate hybrid methods that fall into multiple of these categories. Naturally, there is no sharp boundary between Markov-based and deep learning methods if a machine learning model like a neural network is used to predict individual pixels based on their neighborhood as in [40; 41; 42; 39; 43]. Furthermore, simulation by discretized (partial) differential equations and cellular automata resemble Markov-based methods in their locality, but are derived from physical principles and sometimes incorporate various physical quantities (e.g. temperature) beyond phase indicator functions. At the boundary between machine learning and descriptor-based methods, multiple sequential approaches use Gaussian random field-based methods [103] to initialize simulated annealing2[100; 104] and diffusion models [72]. Furthermore, the volume fractions [105; 72; 58; 106; 49], histograms [107] and Gram matrices [10; 108] are sometimes added to the loss function of deep learning-based methods as microstructure descriptors. _DRAGen_[109] combines an automaton-like growth process with a nucleation point optimization based on classical descriptors and allows to use machine learning models for generating input data. At the interface between machine learning and physical simulation, autoencoders [110] and diffusion models [12] have been used as particle generators followed by a gravity simulation for aggregate structures. Besides that, the literature comprises a large number of physics-informed neural network approaches that are not discussed herein. Footnote 2: This is technically a hybrid method between two descriptor-based approaches. ### Objectives and contribution of this work This work presents a hybrid approach that is inspired by all these categories. Like in a simulation-based approach, a partial differential equation models the temporal evolution of the microstructure. It is, however, not derived from physics but learned by a neural network. 
Similar to the Markov-based methods, this network operates based on local information and is therefore called neural cellular automaton (NCA). This constraint of locality is relaxed not by increasing the neighborhood beyond a one pixel distance, but by introducing further hidden channels to the microstructure function that the NCA can use to encode relevant information. Finally, unlike common machine learning or Markov-based approaches, the NCA is not trained directly on image data or on a set of neighborhoods, but on a statistical descriptor. This requires the NCA to be retrained whenever the statistical descriptor changes, however, it reduces the amount of required data to a bare minimum. The input image only needs to enable the computation of a statistical descriptor; hence the NCA is applicable whenever classical training-free approaches like the Yeong-Torquato algorithm and DMCR can be used. Furthermore, the size of the training data is independent of the image size during training, which is again independent from the size of the reconstructed structure. Hence, microstructures of massive resolutions or numbers can be reconstructed with very limited additional computational effort. Furthermore, due to the nature of NCA, the algorithm is inherently distributed, parallel and robust with respect to perturbations. In summary, the central idea lies in modeling the differential equation governing the structure evolution by training neural cellular automata (NCA) on statistical descriptors. A detailed formulation is given in section 2 and validated by various numerical experiments in section 3. A conclusion is drawn in section 4. ## 2 Neural cellular automata for descriptor-based microstructure reconstruction Based on the work of Mordvintsev et al. [111], the formulation of general neural cellular automata (NCA) is summarized in subsection 2.1. The main idea of the present work to train NCA by arbitrary descriptors is described in subsection 2.2. Finally, the implementation is discussed in subsection 2.3. ### Formulation of neural cellular automata The general idea behind a cellular automaton is to iteratively update individual pixels based on the direct neighbors. In the work of Mordvintsev et al. [111], this information source is further restricted. The neighboring pixel values are not passed directly to the cellular automaton. Instead, they are used to compute a discrete approximation to the gradient and curvature, which are then passed to the cellular automaton. Denoting \(\mathbf{x}\in\mathcal{D}\) as a position vector in the microstructure domain \(\mathcal{D}\subset\mathbb{R}^{2}\) and \(t\in\mathcal{T}=\{t\in\mathbb{R}\,|\,0\leq t\leq t^{\text{end}}\}\) as time, the evolution of the microstructure \(m(\mathbf{x},t)\) can be written as a partial differential equation \[\frac{\partial m(\mathbf{x},t)}{\partial t}=f_{\mathbf{\theta}}\left(m(\mathbf{x},t), \nabla_{\mathbf{x}}m(\mathbf{x},t),\nabla_{\mathbf{x}}^{2}m(\mathbf{x},t)\right)\,, \tag{1}\] where \(f_{\mathbf{\theta}}\) is the cellular automaton which maps the value, gradient and curvature of the microstructure function to its temporal derivative. To be more specific, \(\nabla_{\mathbf{x}}(\bullet)\) and \(\nabla_{\mathbf{x}}^{2}(\bullet)\) denote the gradient and Laplace operator, respectively. Furthermore, \(m\) takes real values within the arbitrarily chosen bounds \[0\leq m(\mathbf{x},t)\leq 1\quad\forall\,\mathbf{x}\in\mathcal{D},\,t\in\mathcal{T}\,\,. 
\tag{2}\] In a neural cellular automaton specifically, a neural network is chosen as \(f_{\mathbf{\theta}}\), where \(\mathbf{\theta}\) denotes the parameter vector. In other words, the NCA defines partial differential equation (PDE)3 that needs to be discretized and solved in order to generate a microstructure. An explicit Euler scheme is chosen as a time stepping scheme Footnote 3: To be precise, the NCA defines a PDE _system_, as explained later in the document. \[\frac{m_{n_{t}+1}-m_{n_{t}}}{\Delta t}=f_{\theta}\left(m_{n_{t}},\nabla_{x}m_{n _{t}},\nabla_{x}^{2}m_{n_{t}}\right)\,, \tag{3}\] where the current solution at time step \(n_{t}\) defines the update for the next time step \(n_{t}+1\). The dependence on \(\mathbf{x}\) and \(t\) is dropped for the sake of brevity. The space is naturally discretized on an equidistant grid of pixel values, where \(\nabla_{x}(\bullet)\) and \(\nabla_{x}^{2}(\bullet)\) are approximated by a Sobel and Laplace filter, respectively. Based on this discretization, the relation between the current solution, its spatial derivatives and its temporal evolution, i.e. the PDE itself, is learned by the NCA. Given the inability of Markov-based approaches with small neighborhood sizes to accurately capture long-range correlations, it should be clear that the extremely limited local information is not sufficient to train a good NCA. For this reason, the augmented microstructure function \(\mathbf{m}^{\prime}(\mathbf{x},t)\) is introduced which maps a spatial position \(\mathbf{x}\) at time \(t\) to an \(n\)-dimensional vector. The first entry of the vector contains the normal microstructure function \(m(\mathbf{x},t)\) and is the only entry that affects the training. The idea behind the other entries is that the NCA can choose to allocate any quantity that is useful for passing information and increasing the image quality. As an example, for an equidistant grain microstructure, one channel might contain the distance to the next grain boundary. With this, the temporal evolution reads \[\frac{\partial\mathbf{m}^{\prime}(\mathbf{x},t)}{\partial t}=f_{\theta}\left(\mathbf{m}^{ \prime}(\mathbf{x},t),\nabla_{\mathbf{x}}\mathbf{m}^{\prime}(\mathbf{x},t),\nabla_{\mathbf{x}}^{2 }\mathbf{m}^{\prime}(\mathbf{x},t)\right) \tag{4}\] For reconstructing a microstructure from the trained NCA, \(\mathbf{m}^{\prime}(\mathbf{x},0)\) is initialized by zeros and the system evolves freely. ### Training the model from microstructure descriptors The function \(f_{\theta}\) is learned by a small neural network with two layers as shown in Figure 1: _First_, an initial solution \(\mathbf{m}^{\prime}(\mathbf{x},0)=\mathbf{0}\,\forall\,\mathbf{x}\) is chosen. _Secondly_, \(\mathbf{m}^{\prime}(\mathbf{x},t)\) develops according to Equation 4 for a randomly chosen number of time steps. As a regularization and as a measure to break symmetry, asynchronous updates are chosen, whereby in every time step, a given percentage of cells is chosen at random and only those develop. The bounds given in Equation 2 are enforced by clipping. _Thirdly_, a loss function \(\mathcal{L}\) is computed on the final result \(m^{\text{end}}=m(\mathbf{x},t^{\text{end}})\). The choice of \(\mathcal{L}\) is discussed later. Note that only \(m^{\text{end}}\), i.e., the first component of \(\mathbf{m}^{\text{end}}\), contributes to \(\mathcal{L}\). 
_Finally_, the gradient \(\partial\mathcal{L}/\partial\mathbf{\theta}\) of the loss function with respect to the NCA parameters is computed by conventional backpropagation and used to update \(\mathbf{\theta}\). Note that this limits the number of timesteps during training for numerical reasons. The formulation of the loss function depends on the area of application of the NCA. After initially using a pixel-wise Euclidean norm error in the RGB space for general-purpose image generation [111], Mordvintsev et al. [112] found that a Gram matrix-based style loss [113] enable NCAs to be applied to texture synthesis [112]. The novelty in the present work lies in realizing that any of the known statistical descriptors can be used, as long as they can be differentiated with respect to the microstructure field. The loss is thus formulated as a mean squared error (MSE) in the descriptor space \[\mathcal{L}=\|\mathbf{D}(\mathbf{m}^{\text{end}})-\mathbf{D}^{\text{des}}\|_{\text{MSE}}\,, \tag{5}\] where \(\mathbf{D}\) denotes a statistical descriptor or a weighted concatenation of multiple descriptors that is computed on the reconstruction result, while \(\mathbf{D}^{\text{des}}\) denotes the desired value computed from the reference structure. Because \(m^{\text{end}}\) results from the temporal evolution of \(f_{\theta}\), it depends on the parameter vector \(\mathbf{\theta}\) of the NCA. The central idea is train the NCA by gradient-based optimization of \(\mathbf{\theta}\) to minimize Equation 5, whereby arbitrary descriptors can be incorporated. While the Gram matrices used in [112] can be interpreted as a statistical descriptor, the spatial two- and three-point correlations are more common in microstructure reconstruction. The idea of using high-dimensional, differentiable descriptors for direct microstructure reconstruction is given in [94], where an efficient and differentiable formulation of the three-point correlations is given. As another example, a differentiable approximation to lineal path function is presented in [102] and a descriptor based on a hierarchical wavelet transform is given in [114]. All these descriptors are implemented in _MCRpy_[102]. ### Implementation The implementation of a descriptor-based NCA for microstructure reconstruction is carried out based on the code for NCA texture synthesis [112] and the differentiable descriptors available in _MCRpy_[102]. The former code is adapted to only a single non-hidden dimension \(m(\mathbf{x})\) as opposed to three RGB channels. Then, _MCRpy_ is used to define a loss, where different descriptors such as Gram Matrices \(G\)[71], correlations \(S\)[74], variation \(\mathcal{V}\)[95] and volume fraction \(\varphi\) can be combined and weighed in a single loss function in a flexible manner. More information on these descriptors is given in [102]. _MCRpy_ makes use of the automatic differentiation in _TensorFlow_ to compute the gradient \(\partial\mathcal{L}/\partial m\). Then, \(m\) is backpropagated through time to compute \(\partial m/\partial\mathbf{\theta}\) and consequently \(\partial\mathcal{L}/\partial\mathbf{\theta}\). Finally, a hyperparameter study is carried out on a number of structures. A 12-dimensional microstructure representation (i.e. 11 hidden channels) is chosen. Hence, the NCA has 12 output and 48 input dimensions. With a single hidden layer of 120 neurons, the network amounts to a total of 7332 parameters. Further hyperparameters like the number of time steps are summarized in Table 1. 
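To make the procedure concrete, the following is a minimal NumPy sketch of a single (untrained) NCA update step corresponding to Equations 3 and 4. The channel count, hidden-layer size, ReLU activation, fire rate and rollout length mirror Table 1, but the grid size and the random weights are illustrative placeholders only; the actual implementation performs these updates in _TensorFlow_ so that the descriptor loss of Equation 5 can be backpropagated through time.

```
# Minimal NumPy sketch of one NCA update step (Eqs. 3-4); illustrative only.
import numpy as np
from scipy.signal import convolve2d

n_channels, hidden = 12, 120          # 1 microstructure channel + 11 hidden channels
H = W = 64                            # spatial resolution (an assumption of this sketch)
rng = np.random.default_rng(0)

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y = sobel_x.T
laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

# Two-layer perceptron applied pixel-wise: 4 * n_channels = 48 inputs -> n_channels outputs.
W1 = rng.normal(0.0, 0.1, (4 * n_channels, hidden))
W2 = rng.normal(0.0, 0.1, (hidden, n_channels))

def nca_step(m, fire_rate=0.5, dt=1.0):
    """One explicit Euler step of the augmented microstructure field m of shape (H, W, C)."""
    feats = []
    for c in range(m.shape[-1]):      # value, gradient and curvature of every channel
        ch = m[..., c]
        feats += [ch,
                  convolve2d(ch, sobel_x, mode="same", boundary="wrap"),
                  convolve2d(ch, sobel_y, mode="same", boundary="wrap"),
                  convolve2d(ch, laplace, mode="same", boundary="wrap")]
    percept = np.stack(feats, axis=-1)                  # (H, W, 48)
    update = np.maximum(percept @ W1, 0.0) @ W2         # ReLU MLP, applied pixel-wise
    mask = rng.random((H, W, 1)) < fire_rate            # asynchronous updates
    m = m + dt * update * mask
    m[..., 0] = np.clip(m[..., 0], 0.0, 1.0)            # enforce the bounds of Eq. (2)
    return m

m = np.zeros((H, W, n_channels))      # zero initialisation, as in the training procedure
for _ in range(48):                   # rollout length drawn from U(32, 64) during training
    m = nca_step(m)
```

During training, only the first channel of the final state enters the descriptor loss, and the gradient with respect to the parameters (here W1 and W2) is obtained by backpropagating through all update steps.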
In order to visually compare the results of descriptor-based NCA with other methods from the literature, three open-source codes are selected from GitHub. To represent Markov-based methods, a patch-based texture synthesis4 algorithm based on [115, 116] and a pixel-based, multi-resolution texture synthesis5 algorithm based on [117, 118] are chosen. Furthermore, _MCRpy_[102] implements differentiable microstructure characterization and reconstruction (DMCR) [94, 95]. While _MCRpy_ is provided by previous works of the authors, the former two methods are coded and provided by Anastasia Opara. The authors greatly acknowledge this effort and appreciate the will to share software. Footnote 5: [https://github.com/anopara/multi-resolution-texture-synthesis](https://github.com/anopara/multi-resolution-texture-synthesis) ## 3 Numerical experiments The microstructure evolution and the range of applicability is investigated in subsection 3.1. These results are then compared to the literature in subsection 3.2. Finally, the scalability of descriptor-based NCA is demonstrated in subsection 3.3. All numerical experiments are carried out on a laptop with a \(12^{\text{th}}\) Gen Intel(R) Core(TM) i7-12800H CPU at 2.40 GHz and an Nvidia A2000 GPU with 4 GB VRAM. ### Microstructure evolution and diversity Figure 2 shows reconstructions from different real materials taken from [71]. It can be seen that descriptor-based NCA are applicable to a wide variety of fundamentally different structures, ranging from relatively noise-free examples like the grain boundary structure and the ceramics to the more noisy sandstone. Some limitations can also be seen. As a first limitation, although the grain boundary character in the alloy is captured relatively well, not all lines are connected as in the reference case. In order to use the results for downstream tasks like numerical simulations, a post-processing algorithm is first needed to close the gaps or eliminate unnecessary line segments. Alternatively, it might be worth investigating \begin{table} \begin{tabular}{l l} \hline \hline Parameter & Value \\ \hline Hidden layer size & 120 \\ Non-hidden channels & 1 \\ Hidden channels & 11 \\ Activation function & ReLU \\ Fire rate & 0.5 \\ Batch size & 4 \\ Checkpointing1 pool size & 1024 \\ Learning rate & \(2\cdot 10^{-3}\) \\ Rollout length probability & \(\mathcal{U}(32,64)\) \\ Gradient normalization & True \\ Overflow loss coeff & \(10^{4}\) \\ Descriptors & \(S,G,\mathcal{V}\) \\ Descriptor weights & \(1,1,100\) \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameters chosen in the present work. Figure 1: Training procedure for a neural cellular automaton (NCA): In every iteration \(i\), random pixel locations are chosen where the gradient and curvature is computed numerically. Together with the pixel value, these quantities are given to the NCA to predict a pixel update. After some time increments, the result is compared to the reference to train the NCA. This comparison is only carried out in terms of statistical descriptors. if a different choice of descriptor can be used in order to better quantify the connectivity information. Although this approach is arguably more elegant, its difficulty lies in the requirement that the descriptor should be differentiable with respect to the microstructure. As a second limitation, it can be seen that the fingerprint-like structure of the copolymer is not adequately represented. 
Although the NCA successfully creates individual sub-regions with parallel lines, these regions are not sufficiently large and the lines do not exhibit smooth curves as in the reference. It is presently unclear to the authors how this issue can be addressed. As a third limitation, it is noted that the probability distribution of pixel values does not exactly match the original structures. Especially in the carbonate and PMMA, it can be seen that the white phase is reconstructed in bright grey color. Similar to the first limitation, the authors assume that a post-processing algorithm or a suitable descriptor should be sufficient to address this issue. To provide a better understanding of the generation method, the temporal evolution of the microstructure as well as the first four hidden dimensions is plotted in Figure 3. All fields are initialized by zero (black) and the structure slowly emerges. Different hidden channels take different roles in representing structural features. For example, the first hidden channel (second row) might be interpreted as a local vertical coordinate in each grain. In contrast, the fourth hidden channel (last row) contains a thickened version of the gain boundaries. Interestingly, the third hidden channel (second to last row) can be interpreted in different ways. It might be used as a type of residuum, since its norm decreases as the reconstruction converges. As an alternative, it might act as a marker for specific features like triple junctions. It can be concluded that different channels take different roles, although a direct interpretation is neither possible, nor necessary. It is demonstrated in the works of Mordvintsev et al. [111, 112] that the NCA-based generation process is often robust with respect to perturbations. To test whether this trend is transferred to descriptor-based NCA for microstructure reconstruction, two numerical experiments are carried out. After the generation process has converged to a good solution, the structure is perturbed by setting all values within a circular radius to 0.5. This is applied to all channels in Figure 4 and only to the non-hidden dimension in Figure 5. It can be seen that the structure only recovers in the latter case. Besides stressing the key role of the hidden channels, this indicates that the robustness of NCA is only partially observed in descriptor-based NCA for microstructure reconstruction. ### Comparison to literature In order to compare descriptor-based NCA reconstruction results to the literature, two Markov- and one descriptor-based approach is chosen6. Figure 6 shows the results, where only three material classes are selected for the sake of brevity. At a first glance, all methods produce high-quality results. Patch-based texture synthesis, however, does not produce new structural features, but copies pathes from the original structure to different locations in the target image. The patch boundaries can be distinguished upon closer inspection. As a pixel-based approach, multi-resolution texture synthesis does not suffer from this phenomenon. However, especially in the alloy, strange features like completely vertical grain boundaries can be observed and the structure coincidentally repeats itself in the top right corner. Furthermore, the highly complex fingerprint-like copolymer structure is not captured adequately. Finally, DMCR as a descriptor-based method7 produces good results for all considered materials. 
While the alloy and ceramics are similarly well reconstructed as with the descriptor-based NCA, DMCR produces visually superior results for the copolymer. It can be concluded that the result quality of NCA outperforms standard Markov-based techniques and almost reaches that of direct descriptor-based optimization. The advantage, with respect to the latter lies in the performance and scalability as discussed in the following. Footnote 7: The Yeong-Torquato algorithm can be expected to yield equally good or even better results, however, at a significantly higher computational cost. ### Performance and scalability An objective assessment of the reconstruction results in terms of microstructure descriptors is paramount to evaluating the accuracy of any reconstruction algorithm. In this context, it should be mentioned that the presented method is only partially descriptor-based since the descriptors are used during training, but not during sampling. For this reason, independent realizations of the material exhibit random deviations from the target descriptor. Naturally, these fluctuations are expected to decrease as the microstructure size increases. This is shown in Figure 7, where the error \[\mathcal{E}_{D}=\|\mathbf{D}(m^{\text{end}})-\mathbf{D}^{\text{des}}\|_{\text{MSE}} \tag{6}\] between a descriptor \(\mathbf{D}(m^{\text{end}})\) and its desired value \(\mathbf{D}^{\text{des}}\) from the reference structure is defined as a mean squared error (MSE). Note that the only difference between \(\mathcal{E}_{D}\) and the loss \(\mathcal{L}\) defined in Equation 5 is that the former measures individual descriptors, whereas the latter is based on a weighted concatenation of multiple descriptors. For all tested descriptors, the error converges to a value which is consistently lower for the proposed loss model than for the reference NCA-based texture synthesis method by Mordvintsev et al. [112]. It should be noted that the descriptor errors do not converge to zero as the resolution increases, but rather to a value that depends on the training quality. It is observed that longer training and training by larger samples reduces this value (not shown here). An interesting aspect of NCA is that the image sizes during training and sampling are independent. This is favorable because the sampling is relatively inexpensive and scales favorably with the image size compared to other methods. To demonstrate this, Figure 8 shows a reconstruction example where the resolution is chosen such that the sampling takes Figure 2: Reconstructions from various real materials. The original samples are given in the top left corner and are taken from [71], where they are released under the Creative Commons license [119]. Figure 3: The evolution of the alloy microstructure over time \(t\). The first channel is plotted in the top row and the first four hidden channels are given below. It can be seen that each hidden channel acts as a distinct feature map. Figure 4: The role of the hidden channels is illustrated by perturbing the microstructure evolution at time \(t=t^{\prime}\). All pixel values within a given radius are set to 0.5. In the presented case, all channels are perturbed, whereas in Figure 5, the hidden channels remain intact. Only the microstructure (top) and the first hidden channel (bottom) are plotted for brevity. Unlike in Figure 5, the structure does not recover. Figure 5: The role of the hidden channels is illustrated by perturbing the microstructure evolution at time \(t=t^{\prime}\). 
All pixel values within a given radius are set to 0.5. In the presented case, the hidden channels remain intact, whereas in Figure 4, all channels are perturbed. Only the microstructure (top) and the first hidden channel (bottom) are plotted for brevity. Unlike in Figure 4, the structure recovers, albeit to a different solution. Figure 6: Comparison of three selected materials reconstructions with methods from the literature. Patch-based texture synthesis (PBTS) and multi-resolution texture synthesis are Markov-based approaches, while differentiable microstructure characterization and reconstruction (DMCR) is descriptor-based. The reconstructed structures are two times larger than the reference in each direction. More information is given in subsection 2.3. ## References Figure 7: Influence of the loss function on the descriptor errors. The volume fractions \(\varphi\) (top), spatial correlations \(S\) (middle) and Gram matrices \(G\) (bottom) are compared for different materials (left to right) with 25 realizations per resolution. The resolutions are powers of two and an offset to the left (reference) and right (proposed) is applied only for visualization purposes. If can be seen that regardless of the model, material and descriptor, the variance of the descriptor error over different realizations decreases as the sample size increases. The proposed model consistently outperforms the reference [112]. For some structures like the ally (a), the reference model fails to converge, leading to massive discrepancies, whereas for the ceramics (c) the differences are relatively small. as long as the training8. Three different zoom levels of the same structure are shown for visualization purposes. Without any multigrid procedures, such large reconstructions are very challenging with classical descriptor-based methods. Footnote 8: Because the utilized VRAM is not sufficient for the large reconstruction, it is conducted on the CPU only, whereas the training occurs on the GPU. If a similar comparison was made on identical hardware, significantly larger structures could be reconstructed. Regardless, in the authors’ opinion, the presented results demonstrate the scalability sufficiently well. Generally, the computational cost of sampling a microstructure scales linearly in the number of pixels, because pixel updates are computed independently by the NCA. A comparison with the 2D DMCR algorithm [94] in _MCRpy_[102] is given in Figure 9. Both methods scale linearly. Sampling from a trained NCA is much faster than reconstructing by DMCR, and the computational cost grows more slowly. This is because the expensive evaluation of microstructure descriptors and iterative optimization are moved to the training stage. If the training is added to the computational cost of the NCA, they are slower for the considered microstructure sizes. As a conclusion, the expensive training phase of an NCA is compensated if large or many microstructures are reconstructed. Especially the latter might speed up a potential future extension for 2D-to-3D reconstruction. Furthermore, unlike with DMCR [94; 95], the sampling can be trivially parallelized because updates are based only on local information. ## 4 Conclusion A neural cellular automaton (NCA)-based algorithm for microstructure reconstruction is presented. The microstructure evolution is modeled as a partial differential equation which is learned by a small neural network, the NCA. 
Despite the purely local information in the NCA, long-range correlations are incorporated by introducing hidden dimensions to the microstructure function which can be used to communicate information. Unlike with previous approaches, this network is not trained on image data but on statistical microstructure descriptors. Thus, the method incorporates ideas from four different families of microstructure generation approaches, namely simulation, Markov, deep learning and descriptor-based methods, which are all briefly reviewed. The method is formulated, implemented and validated by a number of 2D numerical experiments. Compared to other microstructure reconstruction approaches, descriptor-based NCAs have a unique set of advantages. The neural network in the NCA enables the evolution of highly complex morphologies in a PDE-like manner without knowledge of the governing physical equations and the material parameters. It can be controlled by statistical descriptors. However, the sampling of structures from a trained NCA is based only on local information. This self-assembling nature of the algorithm makes it an inherently distributed algorithm and therefore trivial to parallelize. The random selection of the pixels to be updated make the method robust with respect to random perturbations, as long as not all channels are affected. Finally, the method scales very favorably as arbitrarily resolved structures can be sampled. In future work, the main challenge lies in enabling 3D reconstruction based on 2D or 3D reference data. ## Acknowledgements The authors thank Anastasia Opara for providing good implementations of texture synthesis algorithms to the community. The groups of M. Kastner and D. Peterseim thank the German Research Foundation DFG which supported this work under Grant numbers KA 3309/18-1 and PE 2143/7-1, respectively. ## Code and data availability The code is made available upon reasonable request. The data is taken from the literature [71], where it is released under the Creative Commons license. ## Competing interests The authors declare no competing interests. ## Author contributions **P. Seibert**: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Software, Supervision, Validation, Visualization, Writing - Original Draft Preparation, Writing - Review & Editing. **A. Ralsfloff**: Conceptualization, Writing - Review & Editing. **Y. Zhang**: Software, Writing - Review & Editing. **K. Kalina**: Conceptualization, Formal Analysis, Writing - Review & Editing. **P. Reck**: Conceptualization, Writing - Review & Editing. **D. Peterseim**: Conceptualization, Funding Acquisition, Writing - Review & Editing. **M. Kastner**: Conceptualization, Funding Acquisition, Resources, Supervision, Writing - Review & Editing.
複雑な材料の微構造のシミュレーション生成の課題は、シミュレーション、マルコフ、深層学習、および記述子ベースのアプローチなど、さまざまな方向から取り組まれています。本研究では、これらの4つのカテゴリを基に hybrid method を提案し、興味深いスケーラビリティを持つ。神経細胞オートマトンを、局所的な情報に基づいて微構造を進化させるために訓練します。他の多くの機械学習ベースのアプローチとは異なり、直接的な参照マイクログラフのデータセットを必要とせず、統計的微構造記述子から訓練されます。つまり、トレーニングコストは構造と関連する記述子の複雑さにのみ依存します。推定時に構造のサイズを設定できるため、非常に大きな構造も効率的に生成できます。同様に、同一の記述子から多くの構造を統計的評価のために再構成する場合、この方法は非常に効率的です。この方法について、さまざまな数値実験を通じて、その有用性とスケ
2305.00422
Drinfeld modules in SageMath
We present the first implementation of Drinfeld modules fully integrated in the SageMath ecosystem. First features will be released with SageMath 10.0.
David Ayotte, Xavier Caruso, Antoine Leudière, Joseph Musleh
2023-04-30T08:03:19
http://arxiv.org/abs/2305.00422v1
# Drinfeld modules in SageMath ###### Abstract. We present the first implementation of Drinfeld modules fully integrated in the SageMath ecosystem. First features will be released with SageMath to.o. The pursuit of Class Field Theory has been a long-standing dream, once held by Kronecker himself. In 1854, he made a significant contribution to the field with the announcement of the Kronecker-Weber theorem, which states that every abelian number field can be generated by a cyclotomic extension of \(\mathbb{Q}\). Similarly, extensions of imaginary quadratic number fields can be described using a so-called Hilbert class field [10]. Many important results of the field were conjectured by Hilbert and Kronecker. Some of them were only proven in the twentieth century, by mathematicians like Takagi, Artin, and Chevalley [10]. And to this day, the general quest for describing extensions of a number field remains elusive. But what if the quest was easier for function fields? In 1974, Drinfeld introduced the now-known Drinfeld modules [11], pursuing the ideas of Carlitz [12]. With Drinfeld modules, one can develop an explicit class field Theory for function fields: every Drinfeld module can be assigned a rank; cyclotomic function fields are generated by torsion spaces of rank 1 Drinfeld modules and \(j\)-invariants of rank 2 Drinfeld intervene in the construction of the function-field analogue of the Hilbert class field. Later developments saw Drinfeld modules being instrumental in Lafforgue's proof of some of Langlands conjectures for function fields [13]. The analogue question for number fields is still out of reach. In the recent years, purely algorithmic thesis [14] and papers [15, 16, 17, 18, 19, 20, 21, 22] have been published, emphasizing efficiency. The present implementation began as the need for a unified and tangible manipulation tool, which we hope will be useful to a large community. We made notable efforts to accompany the code with exemplary documentation and use pre-existing SageMath facilities wherever possible. Our three core principles were _reliability, user interface degame_, and _integration_. The original ticket (see Github pr#30026) was opened in April 2022 and merged in March 2023. Many _pull requests_ have since been proposed to enhance the capabilities of the original contribution and are under active development, fueling an ever-growing interest in Drinfeld modules. **Mathematical background.** Before entering into the core of this presentation, we need to recall basic definitions related to Drinfeld modules. Let \(\mathbb{F}_{q}\) be a finite field with \(q\) elements, let \(K\) be an extension of \(\mathbb{F}_{q}\) and let \(\overline{K}\) be an algebraic closure of \(K\). Additionally, we equip \(K\) with a structure of \(\mathbb{F}_{q}[T]\)_-field_, meaning we give ourselves a morphism of \(\mathbb{F}_{q}\)-algebras \(\gamma:\mathbb{F}_{q}[T]\to K\). We use the notation \(\tau\) to denote the \(\mathbb{F}_{q}\)-linear endomorphism of \(\overline{K}\) defined by \(x\mapsto x^{q}\). We define the _ring of Ore polynomials_\(K\{\tau\}\) as the ring whose elements are sum of the form \(a_{0}+a_{1}\tau+\cdots+a_{n}\tau^{n}\) where \(n\in\mathbb{Z}_{\geqslant 0}\) and \(a_{i}\in K\) for all \(0\leqslant i\leqslant n\). In \(K\{\tau\}\), we have the identity \(\tau a=a^{q}\tau\) whenever \(a\in K\). 
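To illustrate the commutation rule \(\tau a=a^{q}\tau\) concretely, here is a small self-contained Python sketch (not part of our package) in which \(K=\mathbb{F}_{4}\) is encoded by hand and Ore polynomials are represented by coefficient lists; the encoding of \(\mathbb{F}_{4}\) and the helper names are ours and purely illustrative.

```
# Plain-Python illustration of the Ore relation tau * a = a^q * tau over K = GF(4), q = 2.
# Elements of GF(4) are encoded as integers 0..3, bits (b1, b0) meaning b1*w + b0, w^2 = w + 1.

def gf4_mul(a, b):
    """Multiply two GF(4) elements written as 2-bit integers."""
    a1, a0 = a >> 1, a & 1
    b1, b0 = b >> 1, b & 1
    c1 = (a1 * b0 + a0 * b1 + a1 * b1) & 1     # coefficient of w
    c0 = (a0 * b0 + a1 * b1) & 1               # constant coefficient
    return (c1 << 1) | c0

def frob(a):
    """Frobenius x -> x^q on GF(4); for q = 2 this is squaring."""
    return gf4_mul(a, a)

def ore_mul(f, g):
    """Multiply Ore polynomials given as coefficient lists [c0, c1, ...] (ci * tau^i),
    using the rule tau^i * a = a^(q^i) * tau^i."""
    res = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            twisted = gj
            for _ in range(i):                  # apply the Frobenius i times
                twisted = frob(twisted)
            res[i + j] ^= gf4_mul(fi, twisted)  # addition in GF(4) is XOR
    return res

w = 0b10                                        # the generator w of GF(4), with w^2 = w + 1
tau = [0, 1]                                    # the Ore polynomial tau
# Check the defining relation tau * w = w^q * tau (here w^q = w^2 = w + 1):
assert ore_mul(tau, [w]) == ore_mul([frob(w)], tau)
print(ore_mul([w, 1], [w, 1]))                  # (tau + w)^2 = tau^2 + tau + (w + 1), i.e. [3, 1, 1]
```

In the package itself, these computations are of course delegated to SageMath's native finite fields and Ore polynomials.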
A _Drinfeld module_ over \(K\) is a morphism of \(\mathbb{F}_{q}\)-algebras \(\phi:\mathbb{F}_{q}[T]\to K\{\tau\}\) such that \(\phi(T)\) has constant coefficient \(\gamma(T)\) and nonzero degree in \(\tau\). We remark that \(\phi(T)\) entirely determines \(\phi\); we often denote it simply by \(\phi_{T}\). The name _module_ comes from the fact that \(\phi\) endows \(\overline{K}\) with an action of \(\mathbb{F}_{q}[T]\), defined by \(a\cdot z=\phi(a)(z)\) for \(a\in\mathbb{F}_{q}[T]\) and \(z\in\overline{K}\). During the development of the project, we were constantly very careful about the simplicity of the interface, the clarity and the completeness of the documentation, and the unit tests. Concretely, each class, method or function is augmented with a doctest that has a description, tests and examples. The entry point of the documentation is the docstring of DrinfeldModule, accessed in the SageMath console by running DrinfeldModule?. For specific methods, the ? keyword is also used, _e.g._ phi.rank?. The documentation also appears in the SageMath Reference Manual [Dev]. Our library is completely open and, as such, we encourage all mathematicians and computer scientists to improve it with any contribution that may interest them.

### The base type of Drinfeld module objects

A first difficulty we encountered at the very beginning of the project was that a Drinfeld module is _not_ an actual module in the classical sense. In particular, a Drinfeld module has no underlying set and a morphism between Drinfeld modules is not a set-theoretical map. However, in the SageMath idiom, most objects are either sets with additional structure -- a so-called Parent -- or elements in such sets -- an Element. This philosophy is referred to as the _parent/element framework_. It is often implicitly assumed in SageMath. For example, the default _Test Suite_ of a parent checks that its category is a subcategory of **Sets**, the constructor of Morphism objects assumes that the domain and codomain are both parents, _etc._ For Drinfeld modules, this raises many questions and we eventually had to make a difficult choice between the three following compromises: 1. Making Drinfeld modules elements (as they are _in fine_ morphisms) and their set a parent (the so-called "homsets" in SageMath); this option offers a standard parent/element framework. 2.
Implementing Drinfeld modules as parents without elements, actually following the implementation of elliptic curves1. This option makes the implementation of morphisms between Drinfeld modules (and, more generally, of the category of Drinfeld modules) easier. Besides, as it in some sense casts Drinfeld modules as function-field analogues of elliptic curves, this option has a strong mathematical basis. Footnote 1: In SageMath, elliptic curves \(E\) are schemes, and \(E\).an_element() returns an element whose parent is not \(E\) itself, but the group of points of \(E\). In that case, this group and \(E\) are distinct objects. 3. Implementing Drinfeld modules as CategoryObject. This class does exist in SageMath and it is not expected to have elements. However, unfortunately, it is used only sporadically, it is currently incompatible with Morphism objects and it is no longer maintained (it is possibly intended to disappear eventually). All these options have their benefits and drawbacks. We discussed all of them with the SageMath core developers (see Github pr #37313 and Github pr #34534). At some point, the third option looked to us the most mathematically appealing; however, given that CategoryObjects are not fully supported, we decided to rule out this possibility. On the other hand, the first option seems more practical, but we believed that it was too mathematically misleading; it would also require a workaround to make morphisms work. We then ultimately chose the second option.

## 2 Overview of our package

Our package is publicly available on Github: [https://github.com/xcaruso/sage/tree/drinfeld-modules](https://github.com/xcaruso/sage/tree/drinfeld-modules). It is intended to be ultimately included in the standard distribution of SageMath. Actually, about half of the package will be released with SageMath 10.0; the other half is still under review, and we hope that it will be approved soon by the SageMath community. Alternatively, we offer the possibility to try our package online on the platform plm-binder. For this, please go to the URL [https://caruso.perso.math.cnrs.fr/notebook/drinfeld-modules/](https://caruso.perso.math.cnrs.fr/notebook/drinfeld-modules/); after a few seconds, a Jupyter notebook will open with a full tutorial presenting the main functionalities of our package. Beyond reading the tutorial, plm-binder allows for editing the notebook, executing commands, creating new worksheets, _etc._ Be careful however that your modifications will not be stored after your session is closed; if you want to keep them, do not forget to download your notebooks!

#### 2.1. Construction and basic properties

A Drinfeld module is a rather sophisticated mathematical object, whose definition already involves several nontrivial ingredients: a morphism \(\gamma:\mathbb{F}_{q}[T]\to K\) and the ring of Ore polynomials \(K\{\tau\}\). In our package, we have tried as much as possible to minimize the number of lines needed for creating a Drinfeld module. In particular, in most cases, it is not necessary to define explicitly \(\gamma\) and \(K\{\tau\}\).

```
sage: K.<w> = GF(4)
sage: phi = DrinfeldModule(GF(2)['T'], [w, 0, 0, 1])
sage: phi
Drinfeld module defined by T |--> t^3 + w
```

Once a Drinfeld module is instantiated, we have access to a panel of methods for accessing its most important invariants, _e.g._ phi.characteristic(), phi.rank(), phi.height(), _etc._ It is also possible to compute the value \(\phi(a)\) by simply using the syntax phi(a).
#### 2.2. Morphisms and isogenies

Given that Drinfeld modules do not have elements, the morphisms between them are the main tools at our disposal for understanding their structure. Our package provides the method hom for easily constructing morphisms.

```
sage: t = phi.ore_variable()
sage: phi.hom(t + w)
Drinfeld Module morphism:
  From: Drinfeld module defined by T |--> t^3 + w
  To:   Drinfeld module defined by T |--> t^3 + t^2 + w*t + w
```

We observe that the software has automatically determined the codomain. Once we have constructed a morphism \(f\), many methods become available, _e.g._ f.codomain(), f.is_isomorphism(), _etc._ At the level of Drinfeld modules themselves, the method is_isomorphic allows for checking whether two Drinfeld modules are isomorphic. When \(K\) is finite, a very important morphism is the Frobenius endomorphism defined by the Ore polynomial \(\tau^{[K:\mathbb{F}_{q}]}\) (see also §2.4). Our package provides the method phi.frobenius_endomorphism() for rapidly instantiating it. Of course, addition and composition of morphisms are implemented, as well as inverses of isomorphisms. We observe in addition that any polynomial \(P\in\mathbb{F}_{q}[T]\) defines an endomorphism of \(\phi\) (corresponding to the Ore polynomial \(\phi_{P}\)). In particular, the Hom space \(\mathrm{Hom}(\phi,\psi)\) inherits a structure of left module over \(\mathbb{F}_{q}[T]\), which is accessible in our package _via_ the operator \(\ast\). This simple syntax allows for writing down complex formulas easily. Finally, in contrast to the case of elliptic curves, computing morphisms between Drinfeld modules defined over finite fields amounts to solving a linear system over \(\mathbb{F}_{q}\). This leads to an efficient algorithm for finding isogenies [20], which we implemented in our package.

```
sage: psi = DrinfeldModule(GF(2)['T'], [w, w+1, 1, 1])
sage: Hom(phi, psi).an_isogeny()
Drinfeld Module morphism:
  From: Drinfeld module defined by T |--> t^3 + w
  To:   Drinfeld module defined by T |--> t^3 + t^2 + (w + 1)*t + w
  Defn: t^2 + (w + 1)*t + 1
```

The command Hom(phi, psi).basis(degree=d) returns more generally an \(\mathbb{F}_{q}\)-basis of the vector space of morphisms between \(\phi\) and \(\psi\) defined by an Ore polynomial of degree at most \(d\).

### \(j\)-invariants

In the classical theory, it is well known that elliptic curves over an algebraically closed field are classified, up to isomorphism, by their \(j\)-invariants [19, Proposition 1.4]. Moreover, when working over a quadratic imaginary field \(R\), the \(j\)-invariants of elliptic curves with complex multiplication by \(R\) provide an explicit description of abelian extensions of \(R\) [19, Chap. II]. Similar results hold for Drinfeld modules: one can attach to any Drinfeld module \(\phi\) of rank \(2\) a \(j\)-invariant which determines the isomorphism class of \(\phi\) over an algebraic closure; besides, certain \(j\)-invariants play a pivotal role in the study of certain algebraic extensions of \(\mathbb{F}_{q}(T)\) [18, (4.-4)], [20, Theorem 6.9]. The \(j\)-invariant of a Drinfeld module of rank \(2\) is given by a simple closed formula: if \(\phi_{T}=\gamma(T)+g_{1}(\phi)\tau+g_{2}(\phi)\tau^{2}\), then \(j(\phi):=g_{1}(\phi)^{q+1}/g_{2}(\phi)\). This makes it easy to compute, and our package provides a direct method for accessing it.
```
sage: phi = DrinfeldModule(GF(2)['T'], [w, w+1, w+2])
sage: phi.j_invariant()
w + 1
```

In the context of Drinfeld modules, it turns out that \(j\)-invariants are defined in any rank [21]. A Drinfeld module of rank \(r>2\) does not have a single \(j\)-invariant but a complete family of \(j\)-invariants, indexed by the integral points of a convex subset of \(\mathbb{R}^{r}\). Fortunately, those \(j\)-invariants are still given by explicit closed formulas, making their computation possible. Our package provides methods (basic_j_invariant_parameters, basic_j_invariants, jk_invariants, _etc._) for computing and manipulating those \(j\)-invariants in any rank. We refer to our tutorial on plm-binder for more details.

### Norms and characteristic polynomials

In the classical setting, morphisms (resp. endomorphisms) between elliptic curves have norms (resp. characteristic polynomials) which can be found by looking at the action on the Tate module [22, SS]. Again, similar facts hold true in the Drinfeld setting [18, Lem. 3.10]: there is a well-defined notion of Tate module of a Drinfeld module, and morphisms between Drinfeld modules do induce linear transformations of the associated Tate modules. From this construction, one can define the _norm_ of a general isogeny and the _characteristic polynomial_ of an endomorphism. Unfortunately, computing the Tate module in practice is a hard task in general, given that the latter usually lives in a quite large extension of \(K\). Norms and characteristic polynomials however have alternative interpretations, which makes the perspective of computing them efficiently tangible. Concretely, algorithms for this task based on the notion of Anderson motives [1] have been designed in [19]. We implemented them in our package; they are available through the methods norm and charpoly. When \(K\) is finite, a distinguished endomorphism of a Drinfeld module \(\phi\) is its Frobenius endomorphism. Its characteristic polynomial plays a prominent role in the theory; notably, it entirely determines the isogeny class of \(\phi\) [18, Th. 3.5]. In our package, we implemented three different algorithms for computing this invariant, namely:

* the motive algorithm, based on Anderson motives as already discussed above,
* the crystalline algorithm [20], based on the action of the Frobenius on the crystalline cohomology,
* the CSA algorithm [19], based on a reinterpretation of the characteristic polynomial of the Frobenius as a reduced norm in some central simple algebra.

Figure 1 compares the timings of our three algorithms3 depending on the rank of the Drinfeld module and the degree of the extension \(K/\mathbb{F}_{q}\) (with \(q=5\) in our example). We observe that the CSA algorithm performs better when the rank is large, whereas the crystalline algorithm is the best when \([K:\mathbb{F}_{q}]\) is large. The method frobenius_charpoly, which is the entry point for this task in our package, is tuned to choose by itself the best available algorithm depending on the input; the user can nevertheless require the use of a specific algorithm by passing in the keyword algorithm. Footnote 3: There is still some room for optimization here. Indeed, the three algorithms eventually rely on the computation of the characteristic polynomial of an actual matrix with coefficients in \(K[T]\). For this task, we just call the charpoly function of SageMath which, unfortunately, implements a slow generic algorithm with quartic complexity.
Nevertheless, we believe that Figure 1 is meaningful in the sense that the comparison between timings are relevant. As a byproduct of this computation, we implemented a method is_isogenous which checks whether two given Drinfeld modules are isogenous. ### Exponential and logarithm A quite important perspective on Drinfeld modules is the analytic point of view. To explain it, let us go back to the case of elliptic curves: we know that an elliptic curve \(E\) over \(\mathbb{C}\) is uniformed by a free \(\mathbb{Z}\)-submodule in \(\mathbb{C}\) of rank \(2\), _i.e._\(E(\mathbb{C})\cong\mathbb{C}/\Lambda\) as complex Lie groups [19, VI SS]. In the Drinfeld setting, a similar result holds after replacing the field \(\mathbb{C}\) by \(\mathbb{C}_{\infty}\), the completion for the valuation associated to \(\frac{1}{T}\) of an algebraic closure of \(\mathbb{F}_{q}((\frac{1}{T}))\)[14, Theorem 4.6.9]. In this situation, the uniformization is obtained _via_ a \(\mathbb{F}_{q}\)-linear, surjective and nonconstant function \(c_{\phi}:\mathbb{C}_{\infty}\to\mathbb{C}_{\infty}\) called the _exponential_ of the Drinfeld module \(\phi\). The exponential may be represented by a power series \[c_{\phi}(z)=z+\sum_{j\in\mathbb{J}}\alpha_{i}z^{q^{j}}\] for \(\alpha_{i}\in\mathbb{C}_{\infty}\) and \(z\in\mathbb{C}_{\infty}\). The _logarithm_ of \(\phi\), denoted \(\log_{\phi}\) is the compositional inverse of the exponential. We refer the reader to chapter 4 of [14] for more details. In our implementation, any Drinfeld module possesses the methods exponential and logarithm which compute power series approximation of \(c_{\phi}\) and \(\log_{\phi}\) respectively. The code computes the power series lazily, meaning that any coefficient is computed on demands and the user does not need to input any precision parameter. ## Acknowledgements We thank Pierre-Jean Spaenlehauer and Emmanuel Thome for their guidance. David Ayotte was supported by FRQNT doctoral scholarship. This work also benefited from the financial support of the ANR projects CLap-CLap (ANR-18-CE40-0026-01) and PadLEfAn (ANR-22-CE40-0013).
Drinfeldモジュールの最初の実装は、SageMathエコシステムに完全に統合されています。最初の機能は、SageMath 10.0と共にリリースされます。
2307.16439
Asymptotic behavior of the first Dirichlet eigenvalue of AHE manifolds
In this article, we investigate the rate at which the first Dirichlet eigenvalue of geodesic balls decreases as the radius approaches infinity. We prove that if the conformal infinity of an asymptotically hyperbolic Einstein manifold is of nonnegative Yamabe type, then the two-term asymptotic of the eigenvalues is the same as that in hyperbolic space.
Xiaoshang Jin
2023-07-31T06:55:05
http://arxiv.org/abs/2307.16439v1
# Asymptotic behavior of the first Dirichlet eigenvalue of AHE manifolds ###### Abstract In this article, we investigate the rate at which the first Dirichlet eigenvalue of geodesic balls decreases as the radius approaches infinity. We prove that if the conformal infinity of an asymptotically hyperbolic Einstein manifold is of nonnegative Yamabe type, then the two-term asymptotic of the eigenvalues is the same as that in hyperbolic space. ## 1 Introduction Suppose that \(\mathbb{H}^{n+1}\) is an \(n+1-\) dimensional hyperbolic space, then it is well-known that the spectrum of the Laplacian \(\sigma(-\Delta_{\mathbb{H}})=[\frac{n^{2}}{4},+\infty).\) Later the result is extended by R. Mazzeo in [9] and [10]. He showed that if \((X,g)\) is an \(n+1-\) dimensional asymptotically hyperbolic manifold, then the spectrum of the Laplacian \[\sigma(-\Delta)=\sigma_{p}(-\Delta)\cup[\frac{n^{2}}{4},+\infty).\] Here \(\Delta=\Delta_{g}=\frac{1}{\sqrt{G}}\frac{\partial}{\partial x^{1}}(g^{ij} \sqrt{G}\frac{\partial}{\partial x^{j}})\) stands for the Laplace-Beltrami operator and \(\sigma_{p}(-\Delta)\subseteq(0,\frac{n^{2}}{4})\) is a finite set of point spectrums (\(L^{2}\) eigenvalues). Lee [6] discovered a connection between its spectrum and conformal infinity when \(g\) is also Einstein. In other words, \(\sigma_{p}(-\Delta)\) is empty when \((X,g)\) is an asymptotically hyperbolic Einstein (AHE) manifold with conformal infinity of nonnegative Yamabe type. One can also see [14] for another proof. We rewrite Lee's result: \(Y(\partial X,[\hat{g}])\geq 0\Rightarrow\lambda_{1}(X)=\frac{n^{2}}{4}.\) It is clear from the property of the first Dirichlet eigenvalue on the noncompact manifold that \[\lim_{R\rightarrow+\infty}\lambda_{1}(B(p,R))=\frac{n^{2}}{4}\] holds for any \(p\in X.\) Here \(B(p,R)\) is the geodesic ball centered at \(p\) of radius \(R\). In this paper, we will present the rate of how \(\lambda_{1}(B(p,R))\) tends to \(\frac{n^{2}}{4}.\) Before stating our primary theorem, let's first present some basic conceptions. Suppose that \(\overline{X}\) is a compact manifold with smooth boundary \(\partial X\) and \(g\) is a complete metric in its interior \(X.\) We say that \((X,g)\) is conformally compact if there exists a defining function \(\rho\) such that \(\bar{g}=\rho^{2}g\) extends continuously to \(\overline{X}.\) Here \[\rho>0\ in\ X,\ \ \ \rho=0\ on\ \partial X,\ \ \ d\rho\neq 0\ on\ \partial X.\] \((X,g)\) is called \(C^{m,\alpha}\) (smoothly) conformally compact if \(\bar{g}=\rho^{2}g\) is \(C^{m,\alpha}\) (smooth) on \(\overline{X}.\) For any defining function \(\rho,\) we call \(\hat{g}=\rho^{2}g|_{T\partial X}\) the boundary metric. Hence the conformal class \((\partial X,[\hat{g}])\) is uniquely determined by \(g\) and we call it the conformal infinity of \(g.\) Let \(\bar{g}=\rho^{2}g\) be a \(C^{2}\) conformal compactification of \((X,g),\) then a simple calculation, such as that in [9] indicates that the sectional curvature of \(g\) tends to \(-|d\rho|^{2}_{\bar{g}}|_{\partial X}\) as \(\rho\to 0.\) Therefore no matter what the topology and geometry of \(g\) look like in \(X,\) the boundary behavior would always remind us of hyperbolic space. We call \((X,g)\) an asymptotically hyperbolic (AH for short) manifold if it is conformally compact and \(|d\rho|^{2}_{\bar{g}}=1\) on the boundary \(\partial X.\) Let \((X,g)\) be a \(C^{2}\) conformally compact manifold of dimension \(n+1,\) if \(g\) is also Einstein, i.e. 
\(Ric[g]=-ng,\) then \(|d\rho|^{2}_{\rho^{2}g}=1\) on \(\partial X\) for any smooth defining function \(\rho.\) In this case, we say that \((X,g)\) is an asymptotically hyperbolic Einstein (AHE for short) manifold. Here is the main result of this paper: **Theorem 1.1**.: _Let \((X,g)\) be an \(n+1-\) dimensional AHE manifold with conformal infinity \((\partial X,[\hat{g}]).\) If the Yamabe constant \(Y(\partial X,[\hat{g}])\geq 0,\) then for any \(p\in X,\)_ \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{1.1}\] _Here \(\lambda_{1}\) denotes the first Dirichlet eigenvalue._ Theorem 1.1 makes clear that the rate at which the first Dirichlet eigenvalue of geodesic balls tends to \(\frac{n^{2}}{4}\) is the same as that in hyperbolic space, at least in the second term of the expansion. On the other hand, we believe that the rate at which the first Dirichlet eigenvalue decreases is related to the geometry structure of manifolds. It is connected to the number of ends. Let's recall the work of Witten-Yau [15]. They showed that the boundary \(\partial X\) of an AHE manifold \((X,g)\) is connected if \(Y(\partial X,[\hat{g}])>0.\) Later the work was extended by Cai and Galloway in [2] where they relaxed the assumption that \(\partial X\) has nonnegative Yamabe constant. In [13], Wang proved that if the first eigenvalue of an AHE manifold \(\lambda_{1}(X)\geq n-1,\) then it either has only one end or it must be a warped product \((\mathbb{R}\times N,dt^{2}+\cosh^{2}tg_{N}).\) It would provide a new proof for Cai-Galloway's result if combined with Lee's work in [6]. Let's summarize their work: for an AHE manifold \((X,g),\) \[Y(\partial X,[\hat{g}])\geq 0\Longrightarrow\lambda_{1}(X)=\frac{n^{2}}{4} \Longrightarrow\partial X\ is\ connected\ (X\ has\ one\ end).\] Later, Li and Wang expanded the results in [7] and [8] where they didn't require \(X\) to be conformally compact. In this case, \(X\) either has one end or is a warped product. Now we could rule out the case of warped product by a direct calculation. In fact, as an application of theorem 0.5 and 0.6 in [8], we could obtain the following property: **Proposition 1.2**.: _Let \((X,g)\) be a complete \(n+1-\)dimensional manifold with \(n\geq 2\) and \(Ric[g]\geq-ng.\) If_ \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty \tag{1.2}\] _for some \(p\in X,\) then \(X\) has only one end with infinite volume._ This paper is organized as follows. In section 2, we first provide some background information on the Dirichlet eigenvalue. Then in sections 3 and 4, we prove theorem 1.1. In order to get the upper bound of the first Dirichlet eigenvalue of geodesic balls, we use the eigenvalue comparison theory and the eigenvalue formula in hyperbolic space. To estimate the lower bound, we somewhat enhance Lee's work. To be more precise, we create a new test function \(u^{-\frac{n}{2}}\cdot\sin(a\ln\varepsilon u)\) on a bounded domain. Here \(u\) is the eigenfunction solution to \(\Delta u=(n+1)u\) and was first used by Lee in [6]. In the end, we prove proposition 1.2 in section 5. ## 2 The first Dirichlet eigenvalue of manifolds Let's introduce some materials about Dirichlet eigenvalue in this section. Suppose that \((M,g)\) is a complete manifold and \(\Omega\subseteq M\) is a bounded domain of \(M\) with piecewise smooth boundary. The Dirichlet eigenfunctions are defined by solving the following problem for \(u\neq 0\) and eigenvalue \(\lambda\). 
\[\left\{\begin{array}{ll}\Delta u=-\lambda u&in\ \Omega,\\ u=0&on\ \partial\Omega\end{array}\right. \tag{2.1}\] where \(\Delta=\frac{1}{\sqrt{G}}\frac{\partial}{\partial x^{i}}(g^{ij}\sqrt{G}\frac{ \partial}{\partial x^{j}}).\) The smallest eigenvalue is denoted by \(\lambda_{1}=\lambda_{1}(\Omega)>0.\) Recall the Sobolev space \(H^{1}(\Omega)=W^{1,2}(\Omega)\) and \(H^{1}_{0}(\Omega)\subseteq H^{1}(\Omega)\) is defined to be the closure of the infinitely differentiable functions compactly supported in \(\Omega.\) Then by the max-min principle, \[\lambda_{1}(\Omega)=\inf_{f\in H^{1}_{0}(\Omega)\setminus\{0\}}\frac{\int_{ \Omega}|\nabla f|^{2}dV_{g}}{\int_{\Omega}f^{2}dV_{g}} \tag{2.2}\] It's easy to see that the eigenvalue has domain monotonicity: if \(\Omega_{1}\subseteq\Omega_{2}\Subset M,\) then \(\lambda_{1}(\Omega_{1})\geq\lambda_{1}(\Omega_{2}).\) Now we suppose that \((M,g)\) is a noncompact manifold, and denote the greatest lower bound for the \(L^{2}\)-spectrum of the Laplacian by \[\lambda_{1}(M):=\inf spec(-\Delta)=\inf_{f\in H^{1}_{0}(M)\setminus\{0\}} \frac{\int_{M}|\nabla f|^{2}dV_{g}}{\int_{M}f^{2}dV_{g}}. \tag{2.3}\] Notice that \(\lambda_{1}(M)\) does not necessarily be an \(L^{2}\) eigenvalue of \(-\Delta,\) but is motivated by the characterization by \[\lambda_{1}(M)=\lim_{k\to\infty}\lambda_{1}(\Omega_{k}) \tag{2.4}\] for any smoothly compact exhaustion \(\{\Omega_{k}\}\) of \(M.\) For example, for the hyperbolic space \(M=\mathbb{H}^{n+1},\) we know that \(spec(-\Delta)=[\frac{n^{2}}{4},+\infty),\) see [11]. Then for any \(p\in\mathbb{H}^{n+1},\) \[\lim_{R\to+\infty}\lambda_{1}(B(p,R))=\frac{n^{2}}{4}. \tag{2.5}\] It is an interesting problem what the formula of \(\lambda_{1}(B(p,R)\) looks like. Or how \(\lambda_{1}(B(p,R))\) tends to \(\frac{n^{2}}{4}?\) It is shown in [12] and [1] that \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{2.6}\] In this paper, we prove that (2.6) still holds for AHE manifolds with conformal infinity of nonnegative Yamabe type. ## 3 The upper bound of eigenvalues Let's recall the classic eigenvalue comparison theorem of Cheng [3]. If \((X,g)\) is an \(n+1\) dimensional complete manifold satisfying that \(Ric[g]\geq-ng,\) then for any \(p\in X\) and \(R>0,\)\(\lambda_{1}(B(p,R))\leq\lambda_{1}(B^{\mathbb{H}}(R)).\) Here \(B^{\mathbb{H}}(R)\) is a geodesic ball of radius \(R\) in hyperbolic space. He also showed that \(\lambda_{1}(B^{\mathbb{H}}(R))\leq\frac{n^{2}}{4}+\frac{C}{R^{2}}\) for some positive constant \(C.\) Later the upper bound estimate was extended by Gage, see theorem 5.2 in [4]. In the following, we provide a weak version of the estimate for the upper bound and the proof is also simpler. **Theorem 3.1**.: _Let \(\mathbb{H}^{n+1}\) be the hyperbolic space of \(n+1\) dimension, then for any \(p\in\mathbb{H}^{n+1},\)_ \[\lambda_{1}(B(p,R))\leq\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(R^{-3})\ \ R\to+\infty. \tag{3.1}\] Proof.: Consider the rotationally symmetric model: \[(\mathbb{R}^{n+1},g_{\mathbb{H}}=dt^{2}+\sinh^{2}tg_{\mathbb{S}})\] and let \(p\) be the center point. 
For any \(R>0,\) we define the function \[f=e^{-\frac{n}{2}t}\cdot\sin(\frac{\pi}{R}t)\in H^{1}_{0}((B(p,R)) \tag{3.2}\] Then \[\begin{split}\lambda_{1}(B(p,R))&\leq\frac{\int_{ B(p,R)}|\nabla f|^{2}dV[g^{\mathbb{H}}]}{\int_{B(p,R)}f^{2}dV[g^{\mathbb{H}}]}\\ &=\frac{\int_{0}^{R}e^{-nt}(-\frac{n}{2}\sin(\frac{\pi}{R}t)+ \frac{\pi}{R}\cos(\frac{\pi}{R}t))^{2}\cdot\omega_{n}\sinh^{n}tdt}{\int_{0}^{ R}e^{-nt}\sin^{2}(\frac{\pi}{R}t)\cdot\omega_{n}\sinh^{n}tdt}\\ &=\frac{\int_{0}^{R}(1-e^{-2t})^{n}\cdot(-\frac{n}{2}\sin(\frac{ \pi}{R}t)+\frac{\pi}{R}\cos(\frac{\pi}{R}t))^{2}dt}{\int_{0}^{R}(1-e^{-2t})^{n }\cdot\sin^{2}(\frac{\pi}{R}t)dt}\\ &=\frac{\int_{0}^{\pi}(1-e^{-\frac{2R\theta}{\pi}})^{n}\cdot(- \frac{n}{2}\sin\theta+\frac{\pi}{R}\cos\theta)^{2}d\theta}{\int_{0}^{\pi}(1-e^ {-\frac{2R\theta}{\pi}})^{n}\cdot\sin^{2}\theta d\theta}\\ &=\frac{F(R)}{G(R)}\end{split} \tag{3.3}\] where \[F(R)\leq\int_{0}^{\pi}(-\frac{n}{2}\sin\theta+\frac{\pi}{R}\cos\theta)^{2}d \theta=\frac{\pi}{2}\cdot(\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}})\] For the term \(G(R),\) a direct calculation indicates that \[\int_{0}^{\pi}e^{-r\theta}\cdot\sin^{2}\theta d\theta=\frac{2}{r(r^{2}+4)}(1- e^{-\pi r})=\frac{2}{r^{3}}+O(r^{-4}),\;r\to+\infty \tag{3.4}\] Hence we could get that \[\begin{split} G(R)&=\int_{0}^{\pi}[1+\sum_{k=1}^{n }C_{n}^{k}(-e^{-\frac{2R\theta}{\pi}})^{k}]\sin^{2}\theta d\theta\\ &=\frac{\pi}{2}-\frac{\pi^{3}}{4}[\sum_{k=1}^{n}C_{n}^{k}\frac{(- 1)^{k+1}}{k^{3}}]\frac{1}{R^{3}}+O(R^{-4})\end{split} \tag{3.5}\] In the end, we deduce that \[\lambda_{1}(B(p,R))\leq\frac{F(R)}{G(R)}\leq\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}} +\frac{n^{2}\pi^{2}}{8}[\sum_{k=1}^{n}C_{n}^{k}\frac{(-1)^{k+1}}{k^{3}}]\frac{1} {R^{3}}+O(R^{-4}). \tag{3.6}\] ## 4 The lower bound of eigenvalues Suppose that \((X,g)\) satisfies the conditions in theorem 1.1. Lee proved that \(\lambda_{1}(X)=\frac{n^{2}}{4}\) in [6]. The key step is to construct a proper test function \(\varphi=u^{-\frac{n}{2}}.\) Here \(u\) is an important positive eigenfunction with prescribed growth at infinity. In order to make our proof more clear, we would give a short quick introduction to Lee's proof. ### A quick review of Lee's work **Lemma 4.1**.: _[_6_]_ _Let \((X,g)\) be an \(n+1-\) dimensional AHE manifold with boundary metric \(\hat{g}\) and let \(x\) be the associated geodesic defining function. Then there is a unique positive eigenfunction \(u\) on \(X\) such that_ \[\Delta u=(n+1)u.\] _and \(u\) has the following form of expansion at the boundary_ \[u=\frac{1}{x}+\frac{\hat{R}}{4n(n-1)}x+O(x^{2}).\] Let \(\varphi=u^{-\frac{n}{2}},\) then \[-\frac{\Delta\varphi}{\varphi}=\frac{n^{2}}{4}+\frac{n(n+2)}{4}(1-\frac{|du|_ {g}^{2}}{u^{2}}).\] One can estimate near the boundary: \[u^{2}-|du|_{g}^{2}=\frac{\hat{R}}{n(n-1)}+o(1).\] On the other hand, the Bochner formula implies that \[-\Delta(u^{2}-|du|_{g}^{2})=2|\frac{\Delta u}{n+1}g+\nabla^{2}u|^{2}\geq 0.\] When \(Y(\partial X,[\hat{g}])\geq 0\) we can choose a representative \(\hat{g}\) whose scalar curvature \(\hat{R}\geq 0,\) then the maximum principle implies that \(u^{2}-|du|_{g}^{2}\geq 0\) in \(X.\) So \(-\frac{\Delta\varphi}{\varphi}\geq\frac{n^{2}}{4}\) in \(X.\) According to the eigenvalue comparison theorem of Cheng-Yau [3], \(\lambda_{1}(X)\geq\frac{n^{2}}{4}.\) Now we turn to research the first Dirichlet eigenvalue of geodesic balls. 
For sufficiently small \(\varepsilon>0,\) let \(X_{\varepsilon}=\partial X\times(0,\varepsilon).\) We study the first Dirichlet eigenvalue of \(X\setminus X_{\varepsilon}.\) If \(\hat{R}>0,\) we get that \[1-\frac{|du|_{g}^{2}}{u^{2}}\geq\frac{\hat{R}}{n(n-1)}\frac{1}{u^{2}}+o(\frac{ 1}{u^{2}})=\frac{\hat{R}}{n(n-1)}x^{2}+o(x^{2}).\] Then \[-\frac{\Delta\varphi}{\varphi}\geq\frac{n^{2}}{4}+c\varepsilon^{2},\ \ \ on\ X\setminus X_{\varepsilon}\] for some positive constant \(c\) and hence \(\lambda_{1}(X\setminus X_{\varepsilon})\geq\frac{n^{2}}{4}+c\varepsilon^{2}.\) As a consequence, \[\lambda_{1}(B(p,R))\geq\frac{n^{2}}{4}+\frac{C}{e^{2R}} \tag{4.1}\] for some \(C>0\) provided \(R\) is large enough. If \(\hat{R}=0,\) we know that \(1-\frac{|du|_{g}^{2}}{u^{2}}\) is still positive in \(X,\) see [5]. Then a similar estimate of (4.1) could be obtained. The lower bound \(\frac{C}{e^{2R}}\) is too "small" compared to \(\frac{\pi^{2}}{R^{2}}.\) We need to find a better test function to get a sharper lower bound of \(\lambda_{1}(B(p,R)).\) ### A new test function Let \(u\) be the eigenfunction that is defined in lemma 4.1 and \(\varphi=u^{-\frac{n}{2}}.\) In the following, for sufficiently small \(\varepsilon>0,\) we consider a new test function \[\psi=\varphi\cdot h=u^{-\frac{n}{2}}\cdot\sin(a\ln\varepsilon u) \tag{4.2}\] on the bounded domain \[F_{\varepsilon}=\{p\in X:u(p)<\frac{1}{\varepsilon}\} \tag{4.3}\] where \(a=a(n,\varepsilon)<0\) is a constant to be determined. A simple calculation indicates that \[h^{\prime}=\frac{a\cos(a\ln\varepsilon u)}{u},\ \ h^{\prime\prime}=\frac{-a^{2} \sin(a\ln\varepsilon u)-a\cos(a\ln\varepsilon u)}{u^{2}}=-a^{2}\frac{h}{u^{2 }}-\frac{h^{\prime}}{u}. \tag{4.4}\] Hence \[\Delta h=h^{\prime}\Delta u+h^{\prime\prime}|du|^{2}=(n+1)uh^{\prime}-(a^{2}h+uh^ {\prime})\frac{|du|^{2}}{u^{2}} \tag{4.5}\] and \[2g(d\ln\varphi,d\ln h)=2g(-\frac{n}{2}\frac{du}{u},\frac{h^{\prime}}{h}du)=-n \frac{uh^{\prime}}{h}\frac{|du|^{2}}{u^{2}}. \tag{4.6}\] As a consequence, \[-\frac{\Delta\psi}{\psi} =-\frac{\Delta\varphi}{\varphi}-(\frac{\Delta h}{h}+2g(d\ln\varphi,d\ln h) \tag{4.7}\] \[=\frac{n^{2}}{4}+\frac{n(n+2)}{4}(1-\frac{|du|^{2}}{u^{2}})-[(n+1 )\frac{uh^{\prime}}{h}-(a^{2}+\frac{uh^{\prime}}{h})\frac{|du|^{2}}{u^{2}}-n \frac{uh^{\prime}}{h}\frac{|du|^{2}}{u^{2}}]\] \[=\frac{n^{2}}{4}+a^{2}+(1-\frac{|du|^{2}}{u^{2}})[\frac{n(n+2)}{ 4}-(n+1)\frac{uh^{\prime}}{h}-a^{2}]\] \[=\frac{n^{2}}{4}+a^{2}+(1-\frac{|du|^{2}}{u^{2}})[\frac{n(n+2)}{ 4}-(n+1)a\cdot\cot(a\ln\varepsilon u)-a^{2}]\] We could assume that \(u\geq 1\) on \(X,\) or else we use \(ku\) instead where \(k\) is a constant large enough. Now set \[a=\frac{\pi}{\ln\varepsilon}+\frac{c_{n}}{\ln^{2}\varepsilon} \tag{4.8}\] for some constant \(c_{n}>0,\) then \(a<0.\) Hence on \(F_{\varepsilon},\) we have that \[a\ln\varepsilon u\in(0,\pi+\frac{c_{n}}{\ln\varepsilon}]\subseteq(0,\pi).\] As a result, \(h\) is smooth and positive on \(F_{\varepsilon}\) and so is \(\psi.\) Furthermore, \[\begin{split}-a\cdot\cot(a\ln\varepsilon u)&\geq-a \cdot\cot(\pi+\frac{c_{n}}{\ln\varepsilon})=-a\frac{\cos(\pi+\frac{c_{n}}{ \ln\varepsilon})}{\sin(\pi+\frac{c_{n}}{\ln\varepsilon})}\\ &>\frac{a}{\sin(\pi+\frac{c_{n}}{\ln\varepsilon})}=\frac{\frac{ \pi}{\ln\varepsilon}+\frac{c_{n}}{\ln^{2}\varepsilon}}{\sin(-\frac{c_{n}}{ \ln\varepsilon})}\rightarrow-\frac{\pi}{c_{n}}.\end{split} \tag{4.9}\] Therefore \[\liminf_{\varepsilon\to 0}[\frac{n(n+2)}{4}-(n+1)a\cdot\cot(a\ln \varepsilon u)-a^{2}]\geq\frac{n(n+2)}{4}-(n+1)\frac{\pi}{c_{n}}. 
\tag{4.10}\] If we choose \(c_{n}\geq\frac{4\pi(n+1)}{n(n+2)},\) then the formula (4.10)is nonnegative and finally we can get that \[-\frac{\Delta\psi}{\psi}\geq\frac{n^{2}}{4}+a^{2} \tag{4.11}\] on \(F_{\varepsilon}\) provided \(\varepsilon\) is sufficiently small. Then \[\lambda_{1}(F_{\varepsilon})\geq\frac{n^{2}}{4}+a^{2}=\frac{n^{2}}{4}+\frac{ \pi^{2}}{\ln^{2}\varepsilon}+O(\frac{1}{\ln^{3}\varepsilon}). \tag{4.12}\] For any \(p\in X\) and large \(R>0,\) let's consider the first Dirichlet eigenvalue of \(B(p,R).\) Since \[|u-\frac{1}{x}|\leq C_{1},\ \ \ |-\ln x(\cdot)-d_{g}(p,\cdot)|\leq C_{2} \tag{4.13}\] where \(C_{1}\) and \(C_{2}\) are positive constants, we have that \[e^{d_{g}(p,\cdot)}\geq\frac{e^{-C_{2}}}{x}\geq e^{-C_{2}}(u-C_{1}) \tag{4.14}\] Then \[B(p,R)\subseteq\{q\in X:u(q)<e^{R+C_{2}}+C_{1}\}\subseteq F_{e^{-R-C_{3}}}\] for some constant \(C_{3}>0\) when \(R\) is large enough. Hence \[\begin{split}\lambda_{1}(B(p,R)&\geq\lambda_{1}(F_ {e^{-R-C_{3}}})\\ &\geq\frac{n^{2}}{4}+\frac{\pi^{2}}{(R+C_{3})^{2}}+O(\frac{1}{(R+ C_{3})^{3}})\\ &=\frac{n^{2}}{4}+\frac{\pi^{2}}{R^{2}}+O(\frac{1}{R^{3}}).\end{split} \tag{4.15}\] Proof of theorem 1.1: theorem 3.1 and the eigenvalue comparison theorem in [3] provide the upper bound of the first Dirichlet eigenvalue of balls while (4.15) provides the lower bound. Then we finish the proof for theorem 1.1. ## 5 The geometric property of the asymptotical behavior To prove proposition 1.2, we introduce an important result of Li and Wang: **Theorem 5.1**.: _[_8_]_ _Let \((M,g)\) be an \(n+1\) dimensional complete manifold with \(n\geq 2.\) Suppose that \(Ric[g]\geq-n\) and \(\lambda_{1}(M)=\frac{n^{2}}{4}.\) Then_ _(1) \(M\) has only one end with infinite volume; or_ _(2) \((M,g)=(\mathbb{R}\times N,dt^{2}+e^{2t}g_{N})\) where \((N,g_{N})\) is an \(n-\)dimensional compact manifold satisfying that \(Ric[g_{N}]\geq 0;\) or_ _(3) \(n=2\) and \((M,g)=(\mathbb{R}\times N,dt^{2}+\cosh^{2}tg_{N})\) where \(N\) is a compact surface satisfying that the sectional curvature \(K_{N}\geq-1.\)_ In the following, we will show that the rate of eigenvalues in case (2) and (3) of theorem 5.1 does not match the formula (1.2). **Example 5.2**.: _Let \((M,g)=(\mathbb{R}\times N,dt^{2}+e^{2t}g_{N})\) be an \(n+1-\)dimensional manifold \((n\geq 2)\) where \((N,g_{N})\) is an \(n-\)dimensional compact manifold satisfying that \(Ric[g_{N}]\geq 0.\) Then \(Ric[g]\geq-n\) and \(\lambda_{1}(M)=\frac{n^{2}}{4}.\)_ Now we are going to study the formula of \(\lambda_{1}(B(p,R)).\) We assume that \(d_{N}=diam(N,g_{N})>0\) and for any \(p\in M\) and large \(R>0,\) let \(d_{p}=dist_{g}(p,N).\) Then \[B(p,R-d_{p})\subseteq B(N,R)\subseteq B(p,R+d_{N}+d_{p}). 
\tag{5.1}\] Here \(B(N,R)=(-R,R)\times N.\) Let \(f=e^{-\frac{n}{2}t}\cos(\frac{\pi}{2R}t),\) then \[f>0\;\;in\;B(N,R),\quad f=0\;\;on\;\partial B(N,R).\] On the other hand, \[\begin{split}\Delta f&=f^{\prime}(t)\Delta t+f^{ \prime\prime}(t)|\nabla t|^{2}=nf^{\prime}(t)+f^{\prime\prime}(t)\\ &=ne^{-\frac{n}{2}t}[-\frac{n}{2}\cos(\frac{\pi}{2R}t)-\frac{\pi }{2R}\sin(\frac{\pi}{2R}t)]+e^{-\frac{n}{2}t}[\frac{n^{2}}{4}\cos(\frac{\pi}{2 R}t)\\ &\quad+\frac{n\pi}{4R}\sin(\frac{\pi}{2R}t)+\frac{n\pi}{4R}\sin( \frac{\pi}{2R}t)-\frac{\pi^{2}}{4R^{2}}\cos(\frac{\pi}{2R}t)]\\ &=e^{-\frac{n}{2}t}[-\frac{n^{2}}{4}\cos(\frac{\pi}{2R}t)-\frac{ \pi^{2}}{4R^{2}}\cos(\frac{\pi}{2R}t)]\\ &=-(\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}})f\end{split} \tag{5.2}\] which means that \(u\) is an eigenfunction and \(\lambda_{1}(B(N,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}}.\) Then the monotonicity of the first Dirichlet eigenvalue and (5.1) would imply that \[\lambda_{1}(B(p,R))=\frac{n^{2}}{4}+\frac{\pi^{2}}{4R^{2}}+O(R^{-3}),\;\;R \rightarrow+\infty. \tag{5.3}\] **Example 5.3**.: _Let \((M,g)=(\mathbb{R}\times N,dt^{2}+\cosh^{2}(t)g_{N})\) be a \(3-\)dimensional manifold where \((N,g_{N})\) is a compact surface with Gaussian curvature bounded from below by \(-1.\) Then \(Ric[g]\geq-2\) and \(\lambda_{1}(M)=1.\)_ As discussed above, we only need to calculate the first Dirichlet eigenvalue of \(B(N,R)=(-R,R)\times N.\) Let \(f=\frac{\cos(\frac{\pi}{2R}t)}{\cosh(t)},\) then \[f>0\;\;in\;B(N,R),\quad f=0\;\;on\;\partial B(N,R).\] Furthermore, \[f^{\prime}(t)=-\frac{\pi}{2R}\frac{\sin(\frac{\pi}{2R}t)}{\cosh(t)}-\tanh(t)\cdot f \tag{5.4}\] and \[f^{\prime\prime}(t)=(-\frac{\pi^{2}}{4R^{4}}+\tanh^{2}(t)-\frac{1}{\cosh^{2}(t)} )f+\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t) \tag{5.5}\] and hence \[\begin{split}\Delta f&=f^{\prime}(t)\Delta t+f^{ \prime\prime}(t)|\nabla t|^{2}=f^{\prime}(t)\cdot 2\tanh(t)+f^{\prime\prime}(t)\\ &=-\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t )-2\tanh^{2}(t)\cdot f\\ &\quad+(-\frac{\pi^{2}}{4R^{4}}+\tanh^{2}(t)-\frac{1}{\cosh^{2}(t )})f+\frac{\pi}{R}\frac{\sinh(t)}{\cosh^{2}(t)}\sin(\frac{\pi}{2R}t)\\ &=(-\frac{\pi^{2}}{4R^{4}}-\tanh^{2}(t)-\frac{1}{\cosh^{2}(t)})f \\ &=-(1+\frac{\pi^{2}}{4R^{2}})f\end{split} \tag{5.6}\] We obtain that \(\lambda_{1}(B(N,R))=1+\frac{\pi^{2}}{4R^{2}}\) and hence for any \(p\in M,\) \[\lambda_{1}(B(p,R))=1+\frac{\pi^{2}}{4R^{2}}+O(R^{-3}),\ \ R\to+\infty. \tag{5.7}\] Theorem 5.1 and (5.3), (5.7) all lead naturally to Proposition 1.2.
In this paper, we study the rate at which the first Dirichlet eigenvalue of geodesic balls decays as the radius tends to infinity. We prove that when an asymptotically hyperbolic Einstein manifold has conformal infinity of nonnegative Yamabe type, the two-term asymptotic expansion of this eigenvalue coincides with that of hyperbolic space.
2309.06386
Lung Diseases Image Segmentation using Faster R-CNNs
Lung diseases are a leading cause of child mortality in the developing world, with India accounting for approximately half of global pneumonia deaths (370,000) in 2016. Timely diagnosis is crucial for reducing mortality rates. This paper introduces a low-density neural network structure to mitigate topological challenges in deep networks. The network incorporates parameters into a feature pyramid, enhancing data extraction and minimizing information loss. Soft Non-Maximal Suppression optimizes regional proposals generated by the Region Proposal Network. The study evaluates the model on chest X-ray images, computing a confusion matrix to determine accuracy, precision, sensitivity, and specificity. We analyze loss functions, highlighting their trends during training. The regional proposal loss and classification loss assess model performance during training and classification phases. This paper analyzes lung disease detection and neural network structures.
Mihir Jain
2023-09-10T16:37:03
http://arxiv.org/abs/2309.06386v1
# Lung Diseases Image Segmentation using Faster R-CNNs ###### Abstract Lung diseases are the leading cause of child mortality across the developing world; about half of global pneumonia deaths (roughly 370,000) occur in India according to the Indian Academy of Pediatrics and the National Health Profile 2016. Timely diagnosis could significantly reduce mortality. This paper uses a low-density neural network structure to avoid the topological lag that comes with network depth and breadth. The network integrates its parameters into a feature pyramid network, extracting features while avoiding information loss. Soft non-maximal suppression is applied to the regional proposals created by the RPN. All feature information is filtered, and a model achieving a high level of detection is obtained. For performance evaluation, the trained model is validated on randomly selected chest X-ray images from the same dataset, and the confusion matrix is computed to obtain accuracy, precision, sensitivity, and specificity. All loss functions involved in training and testing are summarized by the total loss, which decreases as training proceeds. The regional proposal loss reflects the performance of the model during the training phase and determines the quality of the regional proposals, while the classification loss captures the loss during the classification phase and determines the quality of classification. ## 1 Introduction and Motivation According to the Indian Academy of Pediatrics and the National Health Profile 2016 [1], fifty percent of pneumonia deaths occur in India, meaning approximately 370,000 children die of pneumonia annually in the country. Pneumonia is also the number one killer of children, causing 18% of all child mortality in the world, and mortality due to other chest and lung diseases is higher still. Timely diagnosis and treatment could reduce this mortality. Conventional reading of X-ray images is slow [2], which makes purely manual evaluation inadequate. Computer-aided diagnosis can enhance productivity and lead to timely treatment, potentially saving millions of lives. The size, shape, and position of pneumonia can vary greatly [3]. Its outline is often vague and blurry, which makes detection difficult, so improving detection accuracy is a major research problem. Current detection algorithms include two-stage region detectors and classifiers such as Faster R-CNN [4], and one-stage detectors such as YOLO [5] and SSD [6]. In the latter, object classification and bounding-box regression are performed directly without pre-generated region proposals, whereas two-stage detectors first generate region proposals and then classify each proposal. Although faster, one-stage detectors are less accurate than two-stage detectors. Treatment and diagnosis need high accuracy, so two-stage detectors and classifiers have the advantage. However, there are still problems with current image classification models such as Xception [7] and VGG [8]: they need a large network depth, leading to prolonged training time, and heavy downsampling causes target position and semantic information to be lost. ### Project Statement Object detection is one of the most promising fields for expanding global healthcare services, especially in lower-income countries. Automated algorithms and tools have the potential to support workflow, improve efficiency, and reduce errors.
The aim of this paper is to address a common classification problem. Ideally, images would be assessed using a deep feature map, since such a map has a large receptive field, which makes the region of the input that produces each feature anchor large as well. However, deep maps reduce object-edge resolution, which lowers the accuracy of the regression. After continuous downsampling, the semantic features of small targets disappear in the deeper layers; large targets are also partially lost and their edges shift, which is unfavourable for accurate detection. The conventional way to strengthen a network such as Xception is to extend its width or depth [9], but this creates a huge number of parameters, leads to high variance in the model, and requires large amounts of training data. ### Organisation of Report This paper uses a low-density neural network structure to avoid the topological lag that comes with network depth and breadth. The network integrates its parameters into a feature pyramid network (FPN) [10], extracting features while avoiding information loss. Soft non-maximal suppression (Soft-NMS) [11] is applied to the regional proposals created by the RPN. All functional information is filtered, and a model achieving a high level of detection is obtained. The paper uses Faster R-CNN with a ResNet backbone. ## 2 Background Material ### Conceptual Overview The reason one cannot perform object detection simply with a convolutional neural network followed by a fully connected output layer is that the length of the output is not fixed, because the number of occurrences of the object of interest is not fixed. One might instead take different regions of interest and test for the presence of objects within each region. The problem with this approach is that objects of interest can have different aspect ratios and locations within the image, so a huge number of regions is needed, which can blow up computationally. This motivates algorithms such as R-CNN, YOLO, and SSD. Conventional object detection models used three steps. The first step generates a large number of region proposals, i.e., the parts of the image that might contain the objects to be detected; algorithms such as selective search and EdgeBoxes generate such proposals. Then, from each region proposal, a feature vector is extracted using image descriptors such as histograms of oriented gradients. This feature vector is critical for the model to work correctly: it should describe an object even when it varies in scale or translation. The feature vector is then used to assign each region proposal to an object class, but as the number of object classes increases, the complexity of such models grows considerably. After feature extraction, the region proposals are classified with methods such as support vector machines. ### YOLO YOLO (You Only Look Once) was proposed by Redmon in 2016. As stated in the original proposal [5], "Compared to other region proposal classification networks (fast RCNN) which perform detection on various region proposals and thus end up performing prediction multiple times for various regions in a image, Yolo architecture is more like CNN (convolutional neural network) and passes the image (nxn) once through and output is (mxm) prediction.
This the architecture is splitting the input image in mxm grid and for each grid generation 2 bounding boxes and class probabilities for those bounding boxes." The biggest advantage of the model is speed (45 frames per second), and there is an even faster version at 155 fps that is less accurate because of its smaller architecture. However, YOLO imposes strong constraints on bounding-box predictions because it treats them as a regression problem. This limits the number of small objects the model can predict, so it struggles with small objects that appear in groups, and accuracy is low. ### SSD SSD (Single Shot Detection), proposed by Liu [6], uses a single multiscale feature map to detect objects independently and improves both the speed and the accuracy of object detection. A comparison of speed and accuracy of different object detection models on VOC2007 [12]: SSD300: 59 FPS with mAP 74.3%; SSD500: 22 FPS with mAP 76.9%; Faster R-CNN: 7 FPS with mAP 73.4%; YOLO: 45 FPS with mAP 63.4%. SSD has a base VGG-16 network followed by multibox layers. Its high speed and accuracy come from eliminating the bounding-box proposals used in R-CNN and using filters of different sizes and aspect ratios for detection. Its spatial resolution is reduced, however, which makes it unable to locate small targets and lowers accuracy. ### Region Proposals Given an input image, region proposals (regions of interest) identify all the possible places where an object could be located. The output is a list of bounding boxes of likely object positions. ### RoI Pooling RoI pooling is a neural network layer used for object detection. It achieves a speedup for both training and inference while maintaining high accuracy. Consider performing RoI pooling on a feature map for one region of interest with an output of size 2x2, given a region proposal with (x, y, h, w) coordinates: the proposal is divided into a 2x2 grid of sections (the size of the RoI need not be perfectly divisible by the number of pooling sections), and the maximum value of each section forms the output of the RoI pooling layer. ### R-CNN R-CNN was proposed by Girshick in 2014 [13] to solve the problem of selecting a huge number of regions. It uses selective search to extract only the informative regions from the image instead of going through all possible regions. This improved training speed and accuracy, raising the mAP on the VOC2010 dataset from 35.1% to 53.7%. Compared to conventional models, R-CNN can detect 80 different types of objects in images by extracting features with a convolutional neural network; otherwise it follows the traditional pipeline. R-CNN consists of three modules. The first module generates about 2,000 region proposals using the selective search algorithm. After augmentation, the second module extracts a feature vector of length 4,096 from each region proposal. The third module uses a pre-trained SVM to classify each region proposal as one of the object classes or as background. The issue with R-CNN is that training still takes a long time, since region proposals must be classified for every image, and it cannot run in real time. Moreover, selective search is a fixed algorithm, so no learning happens at that stage, which can lead to bad candidate region proposals. ### Fast R-CNN Fast R-CNN is an object detector, also developed by Girshick, that overcomes some of the issues of R-CNN.
- It proposes a new layer called RoI pooling that extracts an equal-length feature vector from all regions of interest (i.e., proposals) in the same image. - Compared to R-CNN, which has multiple stages, Fast R-CNN builds the network in a single stage. - Fast R-CNN shares convolutional-layer calculations across all regions of interest rather than computing them for each proposal separately; using the RoI pooling layer makes Fast R-CNN faster than R-CNN. The feature map from the last convolutional layer is fed to an RoI pooling layer in order to extract a fixed-length feature from each RoI. RoI pooling works by splitting each region proposal into a grid of cells and applying a max-pooling operation to each cell to return a single value. The extracted feature vector is then passed to fully connected layers, whose output is split into two branches: a softmax layer to predict the class scores and an FC layer to predict the bounding boxes of the detected objects. This reduces time consumption because Fast R-CNN shares the computation across proposals, whereas R-CNN computes features for each proposal. R-CNN also takes a single RoI from each input image, so if the model needs 256 proposals it must take them from 256 images, while Fast R-CNN can take 256 RoIs from only 4 images, about 64 per image, leading to a much shorter training time. However, taking multiple RoIs from the same image can reduce accuracy because the regions become correlated. Even though Fast R-CNN is better than R-CNN in terms of time, it still depends on selective search to generate region proposals, which cannot be customized for a specific object detection dataset. ### Faster R-CNN Selective search [14] is a slow and time-consuming process. To overcome it, in 2015 Ren [4] proposed the Faster R-CNN model, an object detection algorithm that does not need selective search and instead lets the network learn the region proposals. It extends Fast R-CNN with a region proposal network (RPN), a convolutional neural network that generates proposals and tells the network where to look. Instead of using images at various shapes or sizes, the paper introduced anchor boxes: reference boxes of specific scales and aspect ratios. Multiple reference boxes yield regions of multiple shapes and sizes, and each region is mapped to a reference anchor box so that detection works at different scales. The computations are shared between the RPN and the Fast R-CNN head to reduce computational time. The architecture of Faster R-CNN consists of two parts: the RPN, which detects region proposals, and Fast R-CNN, which performs object detection in the proposed regions. It works as follows: the RPN generates region proposals; a fixed-length feature vector is extracted from each proposal using the RoI pooling layer; the extracted features are classified by the Fast R-CNN head; and the class scores of the detected objects, together with their boxes, are returned. Both R-CNN models before Faster R-CNN depend on selective search to generate region proposals, each of which is fed to a pre-trained CNN, whereas Faster R-CNN uses the region proposal network to produce proposals. Thus region proposals are produced by a network, which means they can be trained for specific object detection tasks.
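As a concrete illustration of the anchor boxes described above, the following minimal sketch generates the k reference boxes obtained from a set of scales and aspect ratios. The base size, scales, and ratios shown are illustrative defaults, not the exact values used by the RPN in this paper.

```python
import numpy as np

def generate_anchors(base_size=16, scales=(8, 16, 32), aspect_ratios=(0.5, 1.0, 2.0)):
    """Generate k = len(scales) * len(aspect_ratios) reference anchor boxes
    centred at the origin, as (x1, y1, x2, y2) corner coordinates."""
    anchors = []
    for scale in scales:
        for ratio in aspect_ratios:          # ratio = height / width
            area = float(base_size * scale) ** 2
            w = np.sqrt(area / ratio)         # width that preserves the target area
            h = w * ratio
            anchors.append([-w / 2.0, -h / 2.0, w / 2.0, h / 2.0])
    return np.array(anchors)                  # shape (k, 4); slid over every feature-map cell

print(generate_anchors().shape)               # (9, 4) for 3 scales x 3 aspect ratios
```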
Because the proposal network is trainable, the model can be adapted to a customised dataset and produces better proposals than selective search or EdgeBoxes (Figure 1: Fast R-CNN architecture). By sharing convolutional layers, the RPN is merged with Fast R-CNN into a single unified network so that training is done only once. The RPN works on the output feature map of the convolutional layers shared with Fast R-CNN: a sliding window passes over the feature map and generates region proposals. Each proposal is characterized by a score relative to a reference box called the anchor box. An anchor box has two parameters, scale and aspect ratio, and k regions varying in scale and size are produced at each position. Because the anchor boxes vary in size and scale, the detector becomes scale-invariant even though a single image at a single scale is used; multi-scale anchor boxes therefore benefit the model. From each region proposal a feature vector is extracted and fed to two layers: the first is a binary classifier that generates an objectness score for each proposal, and the second returns the bounding box of the region. The first layer has two outputs indicating whether the region is an object to be detected or background: if the first output is 1 and the second is 0, the region is classified as background, whereas if the first is 0 and the second is 1, it is an object. For RPN training, each anchor is also given a score based on the IoU, discussed later, which is the ratio of the intersection area between the anchor box and the ground truth to the union of the same boxes; IoU increases as the boxes overlap more. The image is used as input to a network that outputs a convolutional feature map, and instead of running a selective search algorithm on this feature map to identify region proposals, a separate network predicts them. Faster R-CNN is therefore faster and can even be used for real-time object detection. ### Pneumonia Detection Works Many researchers have sought to detect pneumonia in the recent past. Abiyev and Ma'aitah [15] applied a CNN to chest X-rays; compared with an RNN, the convolutional network achieves higher accuracy but has a longer training time. Guendel [16] proposed to use DenseNet for detection on a chest X-ray dataset. Abiyev and Ma'aitah [15] also extracted features from the layers of a CNN and explored descriptors such as GIST on more than 600 radiographs. As the COVID pandemic engulfed the world in 2020, Wang and Wong [17] proposed COVID-Net, a deep CNN specialized for detecting COVID-19 cases from chest X-ray (CXR) images. Ozturk [18] proposed automatic detection from raw chest X-ray images; the model provides accurate diagnostics for binary classification (COVID vs. no findings) and multiclass classification (COVID vs. no findings vs. pneumonia). ## 3 Methodology In this part, the proposed model is introduced in detail, including data processing, the architecture, and the enhancement obtained from soft non-maximal suppression. ### Dataset The dataset [19] of chest X-ray images and metadata is provided by the National Institute of Health and clinical research through Kaggle. It contains images from 27,864 unique patients, and each image is labelled with one of three classes. A lung complication occurs when parts of the lungs are filled with something other than air, such as bacteria, fluid, or cells.
This causes lung opacities to differ, which is why X-ray beams are used: affected lungs show greater opacity than normal because the lung tissue is not healthy. The Normal class contains images of perfectly healthy lungs without any pathology visible in the CXR. The Lung Opacity class contains images of lungs with cloud-like regions associated with diseases such as pneumonia; these regions of opacity are labelled with bounding boxes, and an image can have multiple boxes if the detection model finds more than one area of increased opacity. The middle class consists of patients with more opaque lungs but no lung pathology. ### Evaluation Metrics The model is evaluated on the mean average precision (mAP) at different intersection-over-union (IoU) thresholds. The IoU of a predicted bounding box A and a ground-truth box B is given by IoU(A,B) = \(\frac{A\cap B}{A\cup B}\). The metric sweeps a range of IoU thresholds, calculating an average precision value at each. At a threshold of 0.5, a detected object is considered a hit if its IoU with a ground-truth object is greater than 0.5. At each threshold value t, a precision value is calculated from the number of true positives (TP), false negatives (FN), and false positives (FP) obtained by comparing the model output with the ground truth: a true positive is counted when a single predicted object matches a ground-truth object with an IoU above the threshold; a false positive is a predicted object with no associated ground-truth object; a false negative is a ground-truth object with no associated predicted object. When an image has no ground-truth objects at all, any number of false positives results in the image receiving a score of zero, and it is still included in the mean average precision. The average precision of a single image is calculated as the mean of the precision values over the IoU thresholds: mAP(p,t) = \(\frac{1}{|\text{thresholds}|}\sum_{t}\frac{TP(t)}{TP(t)+FP(t)+FN(t)}\). The model outputs a confidence level for each bounding box, and boxes are evaluated in order of confidence: boxes with higher confidence are checked first against the ground truth, which determines which boxes are counted as true and false positives. None of the edge cases are known to exist in the dataset. Finally, the score returned by the competition metric is the mean of the individual average precisions over all images in the test set. ### Model To avoid test-time augmentation, which would require heavy GPU usage, and pseudo-labelling, which is not feasible in practice, and to reduce the memory footprint and time of inference, I propose a Faster R-CNN model utilizing a ResNet encoder pretrained on ImageNet. \begin{table} \begin{tabular}{|c|c|c|} \hline Class & Target (1 = pathology) & Images \\ \hline Lung Opacity & 1 & 9555 \\ \hline Not Normal / No Opacity & 0 & 11821 \\ \hline Normal & 0 & 8851 \\ \hline \end{tabular} \end{table} Table 1: Distribution of classes in the dataset. Figure 2: Model. Because Soft-NMS [23] improves the performance of detection models, I have also implemented the Soft-NMS algorithm. Faster R-CNN is used for object detection, while ResNet is the architecture that performs feature extraction for the model. Faster R-CNN defines the label for each input, how the features [24] and labels are used for supervised learning, the loss function and optimiser, and the training and testing pipeline.
ResNet, in turn, determines how the features are extracted [25]. After saving the checkpoints of the trained model, the model is called with the argument generatePredictions and the path to the model checkpoint to generate predictions for the test images. ## 4 Implementation ### Model Building The Faster R-CNN model is implemented in PyTorch in the cloud with a ResNet architecture. The input to the model is a list of tensors of shape (C, H, W) for each image, with values in the range 0-1; different images can have different sizes. The model behaviour changes depending on whether it is in training or evaluation mode. During training, the model expects both the input tensors and targets containing the ground-truth boxes and a label for each ground-truth box, and it returns a Dict[Tensor] containing the classification and regression losses for the region proposal network and the R-CNN head. During inference, the model requires only the input tensors and returns the post-processed predictions as a List[Dict[Tensor]], one for each input image. FasterRCNN needs the following arguments as inputs: - a backbone network used to compute the features for the model; the backbone should have at least an attribute giving the number of output channels of its feature maps and should return a single tensor; - the number of output classes for the model (if a box predictor is specified, the number of classes is None); - the minimum and maximum sizes to which the image is rescaled before being fed to the backbone; - the image mean and standard deviation used for input normalisation, which should be the statistics of the dataset on which the backbone was trained. Mean average precision at different intersection-over-union (IoU) thresholds is calculated. Images with no ground-truth bounding boxes are not included in the mAP score unless there is a false-positive detection; if both the ground-truth and predicted boxes are empty, None is returned and the image is not counted in the final evaluation. A true positive (TP) is counted (tp = tp + 1) when a ground-truth box in boxes_true is matched; a false negative (FN) is counted (fn = fn + 1) when a ground-truth box has no match; false positives are the predicted boxes not matched by any ground-truth box, fp = len(boxes_pred) - len(matched_bt). The per-threshold precision is m = tp / (tp + fn + fp), accumulated as map_total += m, and the final score is map_total / len(thresholds). - Images with empty boxes are added to the model so that they contribute to optimisation and loss calculation; the original RetinaNet implementation in PyTorch [20] ignored images with no boxes, while this model includes them to obtain better loss calculation and optimisation. - A small-anchors output is added to handle smaller boxes. - To reduce overfitting, dropout is added to the classification head to achieve regularisation and an optimal classification result at the same epoch. The base model preparation is comparatively quick because PyTorch already provides a Faster R-CNN ResNet-50 FPN model with pretrained weights, so to save time and computing power we need to: load that model and its comparison models for the results; adapt the model to our input and output needs; and modify the base model heads with the number of classes in our dataset, as sketched below. ### Model Training The training set included data for 27,864 patients, and data for 1,000 patients was used as the test set. The PyTorch ImageNet-pretrained base model was used to pretrain our model; without this pretraining, the model worked on regression but failed on classification.
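The following is a minimal sketch of the model-preparation step described above, using torchvision's pretrained Faster R-CNN ResNet-50 FPN detector and replacing its head for our classes. The learning rate and momentum are illustrative placeholders rather than the exact experimental configuration; the scheduler values follow the patience and decay factor reported in the training section below.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_model(num_classes=2):
    """Load the pretrained Faster R-CNN ResNet-50 FPN detector and replace its
    box-predictor head for our classes (background vs. lung opacity)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = build_model().to(device)
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9)  # illustrative values
# Reduce the learning rate on plateau with patience 4 and factor 0.2, as in the text
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.2, patience=4)
```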
As the training dataset was reasonably balanced, no additional balancing was needed. For the learning rate, ReduceLROnPlateau was used with a patience of 4 and a decrease factor of 0.2. The image-level classification loss and the box classification loss were combined into the total loss. Figure 8 (model comparison) shows the validation losses for a range of backbones. The SE-type nets demonstrated the best performance, with ResNeXt101 showing the best results and ResNet50 slightly worse. Different architectures were compared with ours: PNASNet, NASNet, Inception, SE-ResNet, Xception, and ResNet. There were significant trade-offs between speed, complexity, accuracy, and parameter count; ResNet's balance between accuracy and complexity was the best. VGG nets did not provide good accuracy on the dataset, while SENet and NASNet had the longest training times. The training configuration file contains - the number of epochs to train for; - the computation device used for training (the CPU is slow for Faster R-CNN, so a GPU is needed; alternatively, Google Colaboratory can be used for object detection in general); - the batch size used for training; - the dimensions to which the images are resized for augmentation. ### Model Learning To ablate parameters, we trained the model with fixed augmentations and without the classification outputs. Results improved when the model was made to predict related auxiliary outputs rather than only the regression output. Without pretraining, the validation loss took much longer to converge; pretraining on ImageNet in the cloud with dropout rates of 0.5 and 0.75 showed the best results on the dataset. ### Dataset Image Preprocessing The images were scaled to 512x512 pixels; 256x256 pixels gave unsatisfactory results. The original images were 2000x2000 pixels, which was impractical because training on them is too heavy. The following augmentations were therefore deployed: rotations, shifts, and horizontal flips. Without enough augmentation the model overfitted, as the validation metrics stopped improving with more training, whereas heavy augmentation led to better validation loss and mAP results. The grayscale range differs across images, and the low brightness contrast of CXR images leads to higher validation losses because it makes lesions harder to locate [21]; hence the model proposed in this paper uses the CLAHE algorithm [22] to equalize the grayscale of the dataset. Figure 3: Model Training ### Result and Analysis The deep learning model is used to detect lung pathologies in the chest X-ray image dataset. Faster R-CNN + ResNet was trained for 15 epochs. The ResNet model was compared with Xception, PNASNet, and Inception and showed considerably greater accuracy and specificity on the same dataset. Accuracy, specificity, precision, and recall were compared for the comparable architectures as well as for the method used in this paper. The Faster R-CNN model was trained to classify the CXR dataset. To evaluate the validity of the model, five-fold cross-validation was performed, with the first fold used for testing and the remaining folds used for training, and performance was then assessed as a binary classification problem. The performance of ResNet on each cross-validation fold is shown in Table 3. The proposed Faster R-CNN + ResNet model achieved 95.28% accuracy and 96.36% specificity (Table 2). ## 5 Conclusion and Future Scope ### Conclusions In this paper, a Faster R-CNN algorithm based on the ResNet architecture was used, and a number of changes were implemented to improve the accuracy of the model.
The architecture was also compared with similar models to verify the reported validation-loss results. Heavy augmentation in particular was applied to the dataset, and several checkpoints were utilised to make the model generalise. Improvements were obtained using these approaches, as shown in the validation-loss results. The model does not require end-user processing or augmentation and gives a good balance between accuracy and speed given the available dataset resources. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Method & Accuracy & Specificity & Precision & Recall & F1 \\ \hline Xception & 93.16\% & 89.03\% & 94.63\% & 87.26\% & 94.5\% \\ \hline ResNet & 95.28\% & 96.36\% & 96.6\% & 97.32\% & 96.93\% \\ \hline PNASNet & 94.35\% & 91.79\% & 96.48\% & 89.35\% & 95.74\% \\ \hline Inception & 95.23\% & 92.64\% & 94.35\% & 87.62\% & 97.63\% \\ \hline \end{tabular} \end{table} Table 2: Performance metrics by method. Figure 4: Model Learning ### Future Scope The approach can be compared with other deep learning detectors such as YOLO and SSD, and with other CNN-based detectors such as R-CNN, Mask R-CNN, and Fast R-CNN. - Different architectures can be used to broaden the comparison and verify the paper's results; a combination of architectures could even be used to create a new architecture, which would change the reported trade-off between accuracy and dataset limitations. - The model needs to be trained at larger scale on a local GPU with a bigger dataset to verify that the projected results hold in further experiments. - A remaining task for further studies is enlarging the training dataset, which would require manual labelling by a medical professional, verified by another medical professional, to mitigate human error. ## 6 Backmatter Acknowledgments. I would like to express my special thanks of gratitude to my project advisor, Dr. Narendra Singh Yadav, for their able guidance and support. Their help, suggestions, and encouragement greatly contributed to the completion of my report. I would also like to thank Manipal University and the Department of Information Technology for providing me with all the facilities required for this project. Data Availability Statement. The dataset [19] containing chest X-ray images and metadata is provided by the National Institute of Health and Clinical Research through Kaggle. Data Availability. Data underlying the results presented in this paper are available in Dataset 1.
Lung diseases are a leading cause of child mortality in the developing world, with India accounting for approximately half of global pneumonia deaths (370,000) in 2016. Timely diagnosis is crucial for reducing mortality. This paper introduces a low-density neural network structure to address topological challenges in deep networks. The network incorporates parameters into a feature pyramid, enhancing data extraction and minimizing information loss. Soft Non-Maximal Suppression optimizes the regional proposals generated by the Region Proposal Network. The study evaluates the model on chest X-ray images, computing a confusion matrix to determine accuracy, precision, sensitivity, and specificity. The loss functions and their trends during training are analyzed; the regional proposal loss and the classification loss assess model performance during the training and classification phases, respectively.
2306.00172
Learning for Edge-Weighted Online Bipartite Matching with Robustness Guarantees
Many problems, such as online ad display, can be formulated as online bipartite matching. The crucial challenge lies in the nature of sequentially-revealed online item information, based on which we make irreversible matching decisions at each step. While numerous expert online algorithms have been proposed with bounded worst-case competitive ratios, they may not offer satisfactory performance in average cases. On the other hand, reinforcement learning (RL) has been applied to improve the average performance, but it lacks robustness and can perform arbitrarily poorly. In this paper, we propose a novel RL-based approach to edge-weighted online bipartite matching with robustness guarantees (LOMAR), achieving both good average-case and worst-case performance. The key novelty of LOMAR is a new online switching operation which, based on a judicious condition to hedge against future uncertainties, decides whether to follow the expert's decision or the RL decision for each online item. We prove that for any $\rho\in[0,1]$, LOMAR is $\rho$-competitive against any given expert online algorithm. To improve the average performance, we train the RL policy by explicitly considering the online switching operation. Finally, we run empirical experiments to demonstrate the advantages of LOMAR compared to existing baselines. Our code is available at: https://github.com/Ren-Research/LOMAR
Pengfei Li, Jianyi Yang, Shaolei Ren
2023-05-31T20:41:42
http://arxiv.org/abs/2306.00172v1
# Learning for Edge-Weighted Online Bipartite Matching with Robustness Guarantees ###### Abstract Many problems, such as online ad display, can be formulated as online bipartite matching. The crucial challenge lies in the nature of sequentially-revealed online item information, based on which we make irreversible matching decisions at each step. While numerous expert online algorithms have been proposed with bounded worst-case competitive ratios, they may not offer satisfactory performance in average cases. On the other hand, reinforcement learning (RL) has been applied to improve the average performance, but it lacks robustness and can perform arbitrarily poorly. In this paper, we propose a novel RL-based approach to edge-weighted online bipartite matching with robustness guarantees (LOMAR), achieving both good average-case and worst-case performance. The key novelty of LOMAR is a new online switching operation which, based on a judicious condition to hedge against future uncertainties, decides whether to follow the expert's decision or the RL decision for each online item. We prove that for any \(\rho\in[0,1]\), LOMAR is \(\rho\)-competitive against any given expert online algorithm. To improve the average performance, we train the RL policy by explicitly considering the online switching operation. Finally, we run empirical experiments to demonstrate the advantages of LOMAR compared to existing baselines. Our code is available at: [https://github.com/Ren-Research/LOMAR](https://github.com/Ren-Research/LOMAR) Machine Learning, Edge-Weighted Online Bipartite Matching, Robustness Guarantees ## 1 Introduction Online bipartite matching is a classic online problem of practical importance (Mehta, 2013; Kim and Moon, 2020; Fahrbach et al., 2020; Antoniadis et al., 2020; Huang and Shu, 2021; Gupta and Roughgarden, 2020). In a nutshell, online bipartite matching assigns online items to offline items in two separate sets: when an online item arrives, we need to match it to an offline item given applicable constraints (e.g., capacity constraint), with the goal of maximizing the total rewards collected (Mehta, 2013). For example, numerous applications, including scheduling tasks to servers, displaying advertisements to online users, recommending articles/movies/products, among many others, can all be modeled as online bipartite matching or its variants. The practical importance, along with substantial algorithmic challenges, of online bipartite matching has received extensive attention in the last few decades (Karp et al., 1990; Fahrbach et al., 2020). Concretely, many algorithms have been proposed and studied for various settings of online bipartite matching, ranging from simple yet effective greedy algorithms to sophisticated ranking-based algorithms (Karp et al., 1990; Kim and Moon, 2020; Fahrbach et al., 2020; Aggarwal et al., 2011; Devanur et al., 2013). These expert algorithms typically have robustness guarantees in terms of the competitive ratio -- the ratio of the total reward obtained by an online algorithm to the reward of another baseline algorithm (commonly the optimal offline algorithm) -- even under adversarial settings given arbitrarily bad problem inputs (Karp et al., 1990; Huang and Shu, 2021). In some settings, even the optimal competitive ratio for adversarial inputs has been derived (readers are referred to (Mehta, 2013) for an excellent tutorial). 
The abundance of competitive online algorithms has clearly demonstrated the importance of performance robustness in terms of the competitive ratio, especially in safety-sensitive applications such as matching mission-critical items or under contractual obligations (Fahrbach et al., 2020). Nonetheless, as commonly known in the literature, the necessity of conservativeness to address the worst-case adversarial input means that the average performance is typically not optimal (see, e.g., (Christianson et al., 2022; Zeynali et al., 2021) for discussions in other general online problems). More recently, online optimizers based on reinforcement learning (RL) (Chen et al., 2022; Georgiev and Lio, 2020; Wang et al., 2019; Alomrani et al., 2022; Du et al., 2019; Zuzic et al., 2020) have been proposed in the context of online bipartite matching as well as other online problems. Specifically, by exploiting statistical information of problem inputs, RL models are trained offline and then applied online to produce decisions given unseen problem inputs. These RL-based optimizers can often achieve high average rewards in many typical cases. Nonetheless, they may not have any performance robustness guarantees in terms of the competitive ratio. In fact, a crucial pain point is that the worst-case performance of many RL-based optimizers can be arbitrarily bad, due to, e.g., testing distribution shifts, inevitable model generalization errors, finite samples, and/or even adversarial inputs. Consequently, the lack of robustness guarantees has become a key roadblock for wide deployment of RL-based optimizers in real-world applications. In this paper, we focus on an important and novel objective -- achieving both good average performance and guaranteed worst-case robustness -- for _edge-weighted_ online bipartite matching (Fahrbach et al., 2020; Kim and Moon, 2020). More specifically, our algorithm, called LOMAR (Learning-based approach to edge-weighted Online bipartite MAtching with Robustness guarantees), integrates an expert algorithm with RL. The key novelty of LOMAR lies in a carefully-designed online _switching_ step that dynamically switches between the RL decision and the expert decision online, as well as a switching-aware training algorithm. For both no-free-disposal and free-disposal settings, we design novel switching conditions as to when the RL decisions can be safely followed while still guaranteeing robustness of being \(\rho\)-competitive against _any_ given expert online algorithms for any \(\rho\in[0,1]\). To improve the average performance of LOMAR, we train the RL policy in LOMAR by explicitly taking into account the introduced switching operation. Importantly, to avoid the "no supervision" trap during the initial RL policy training, we propose to approximate the switching operation probabilistically. Finally, we offer empirical experiments to demonstrate that LOMAR can improve the average cost (compared to existing expert algorithms) as well as lower the competitive ratio (compared to pure RL-based optimizers). ## 2 Related Works Online bipartite matching has been traditionally approached by expert algorithms (Mehta, 2013; Karande et al., 2011; Huang et al., 2019; Devanur et al., 2013). A simple but widely-used algorithm is the (deterministic) greedy algorithm (Mehta, 2013), achieving reasonably-good competitive ratios and empirical performance (Alomrani et al., 2022). 
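As a point of reference, the sketch below shows a minimal version of such a deterministic greedy expert for the capacitated, edge-weighted setting studied here. The function and variable names are ours; this is not the paper's released implementation.

```python
def greedy_expert(capacity, weights, online_order):
    """Match each arriving online item v to the available offline item u with the
    largest edge weight w[u][v]; decisions are irreversible, as in the online model."""
    remaining = dict(capacity)              # unused capacity per offline item u
    total_reward, matching = 0.0, {}
    for v in online_order:                  # online items arrive one at a time
        available = [u for u in remaining if remaining[u] > 0]
        if not available:
            break                           # all offline capacity is used up
        u_best = max(available, key=lambda u: weights[u][v])
        matching[v] = u_best
        remaining[u_best] -= 1
        total_reward += weights[u_best][v]
    return total_reward, matching
```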
Randomized algorithms have also been proposed to improve the competitive ratio (Ting and Xiang, 2014; Aggarwal et al., 2011). In addition, competitive algorithms based on the primal-dual framework have also been proposed (Mehta, 2013; Buchbinder et al., 2009). More recently, multi-phase information and predictions have been leveraged to exploit stochasticity within each problem instance and improve the algorithm performance (Kesselheim et al., 2013). For example, (Korula and Pal, 2009) designs a secretary matching algorithm based on a threshold obtained using the information of phase one, and exploits the threshold for matching in phase two. Note that stochastic settings considered by expert algorithms (Mehta, 2013; Karande et al., 2011) mean that the arrival orders and/or rewards of different online items within each problem instance are stochastic. By contrast, as shown in (2), we focus on an unknown distribution of problem instances whereas the inputs within each instance can still be arbitrary. Another line of algorithms utilize RL to improve the average performance (Wang et al., 2019; Georgiev and Lio, 2020; Chen et al., 2022; Alomrani et al., 2022). Even though heuristic methods (such as using adversarial training samples (Zuzic et al., 2020; Du et al., 2022)) are used to empirically improve the robustness, they do not provide any theoretically-proved robustness guarantees. ML-augmented algorithms have been recently considered for various problems (Rutten et al., 2023; Jin and Ma, 2022; Christianson et al., 2022; Chledowski et al., 2021; Lykouris and Vassilvitskii, 2021). By viewing the ML prediction as blackbox advice, these algorithms strive to provide good competitive ratios when the ML predictions are nearly perfect, and also bounded competitive ratios when ML predictions are bad. But, they still focus on the worst case without addressing the average performance or how the ML model is trained. By contrast, the RL model in LOMAR is trained by taking into account the switching operation and performs inference based on the actual state (rather than its own independently-maintained state as a blackbox). Assuming a given downstream algorithm, (Wang et al., 2021; Liu and Grigas, 2021; Wilder et al., 2019; Elmachtoub and Grigas, 2017; Du et al., 2021; Anand et al., 2021) focus on learning the ML model to better serve the end goal in completely different (sometimes, offline optimization) problems. LOMAR is relevant to conservative bandits/RL (Wu et al., 2016; Kazerouni et al., 2017; Yang et al., 2022; Garcelon et al., 2020). With unknown reward functions (as well as transition models if applicable), conservative bandits/RL leverages an existing policy to safeguard the exploration process. But, they only consider the cumulative reward without addressing future uncertainties when deciding exploration vs. rolling back to an existing policy. Thus, as shown in Section 4, this cannot guarantee robustness in our problem. Also, constrained policy optimization (Yang et al., 2020; Kumar et al., 2020; Schulman et al., 2015; Achiam et al., 2017; Thomas et al., 2021; Berkenkamp et al., 2017) focuses on average (cost) constraints in the long run, whereas LOMAR achieves stronger robustness (relative to an expert algorithm) for any episode. ## 3 Problem Formulation We focus on _edge-weighted_ online bipartite matching, which includes un-weighted and vertex-weighted matching as special cases (Fahrbach et al., 2020; Kim & Moon, 2020). 
In the following, we also drop "edge-weighted" if applicable when referring to our problem. ### Model The agent matches items (a.k.a. vertices) between two sets \(\mathcal{U}\) and \(\mathcal{V}\) to gain as high total rewards as possible. Suppose that \(\mathcal{U}\) is fixed and contains _offline_ items \(u\in\mathcal{U}\), and that the _online_ items \(v\in\mathcal{V}\) arrive sequentially: in each time slot, an online item \(v\in\mathcal{V}\) arrives and the weight/reward information \(\{w_{uv}\mid w_{u,\min}\leq w_{uv}\leq w_{u,\max},u\in\mathcal{U}\}\) is revealed, where \(w_{uv}\) represents the reward when the online item \(v\) is matched to each offline \(u\in\mathcal{U}\). We denote one problem instance by \(\mathcal{G}=\{\mathcal{U},\mathcal{V},\mathcal{W}\}\), where \(\mathcal{W}=\{w_{uv}\mid u\in\mathcal{U},v\in\mathcal{V}\}\). We denote \(x_{uv}\in\{0,1\}\) as the matching decision indicating whether \(u\) is matched to \(v\). Also, any offline item \(u\in\mathcal{U}\) can be matched up to \(c_{u}\) times, where \(c_{u}\) is essentially the capacity for offline item \(u\) known to the agent in advance. The goal is to maximize the total collected reward \(\sum_{v\in\mathcal{V},u\in\mathcal{U}}x_{uv}w_{uv}\). With a slight abuse of notations, we denote \(x_{v}\in\mathcal{U}\) as the index of item in \(\mathcal{U}\) that is matched to item \(v\in\mathcal{V}\). The set of online items matched to \(u\in\mathcal{U}\) is denoted as \(\mathcal{V}_{u}=\{v\in\mathcal{V}\,|\,x_{uv}=1\}\). The edge-weighted online bipartite matching problem has been mostly studied under two different settings: no free disposal and with free disposal (Mehta, 2013). In the no-free-disposal case, each offline item \(u\in\mathcal{U}\) can only be matched strictly up to \(c_{u}\) times; in the free-disposal case, each offline item \(u\in\mathcal{U}\) can be matched more than \(c_{u}\) times, but only the top \(c_{u}\) rewards are counted when more than \(c_{u}\) online items are matched to \(u\). Compared to the free-disposal case, the no-free-disposal case is significantly more challenging with the optimal competitive ratio being \(0\) in the strong adversarial setting unless additional assumptions are made (e.g., \(w_{u,\min}>0\) for each \(u\in\mathcal{U}\)(Kim & Moon, 2020) and/or random-order of online arrivals) (Fahrbach et al., 2020; Mehta, 2013). The free-disposal setting is not only analytically more tractable, but also is practically motivated by the display ad application where the advertisers (i.e., offline items \(u\in\mathcal{U}\)) will not be unhappy if they receive more impressions (i.e., online items \(v\in\mathcal{V}\)) than their budgets \(c_{u}\), even though only the top \(c_{u}\) items count. LOMAR can handle both no-free-disposal and free-disposal settings. For better presentation of our key novelty and page limits, we focus on the no-free-disposal setting in the body of the paper, _while deferring the free-disposal setting to Appendix B_. Specifically, the **offline** problem with no free disposal can be expressed as: \[\max\sum_{x_{uv}\in\{0,1\},u\in\mathcal{U},v\in\mathcal{V}}x_{uv}w_{uv}, \tag{1}\] \[\mathrm{s.t.},\ \ \sum_{v\in\mathcal{V}}x_{uv}\leq c_{u},\ \mathrm{and}\ \sum_{u\in\mathcal{U}}x_{uv}\leq 1, \forall u\in\mathcal{U},v\in\mathcal{V}\] where the constraints specify the offline item capacity limit and each online item \(v\in\mathcal{V}\) can only be matched up to one offline item \(u\in\mathcal{U}\). 
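To make the offline problem (1) concrete, the following toy example enumerates every feasible assignment of three online items to two capacitated offline items and reports the offline optimum. The instance and its numbers are purely illustrative and are not taken from the paper.

```python
from itertools import product

# Toy instance of problem (1): 2 offline items with capacities, 3 online items.
capacity = {"u1": 1, "u2": 2}
weights = {"u1": {"v1": 3.0, "v2": 1.0, "v3": 2.0},
           "u2": {"v1": 2.0, "v2": 2.5, "v3": 0.5}}
online, offline = ["v1", "v2", "v3"], list(capacity)

best_reward, best_match = -1.0, None
# Each online item is matched to one offline item or left unmatched (None).
for choice in product(offline + [None], repeat=len(online)):
    if any(choice.count(u) > capacity[u] for u in offline):   # capacity constraint
        continue
    reward = sum(weights[u][v] for v, u in zip(online, choice) if u is not None)
    if reward > best_reward:
        best_reward, best_match = reward, dict(zip(online, choice))

print(best_reward, best_match)   # offline optimum of (1) for this toy instance
```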
Given an online algorithm \(\alpha\), we use \(f^{\alpha}_{u}(\mathcal{G})\) to denote the total reward collected for offline item \(u\in\mathcal{U}\), and \(R^{\alpha}(\mathcal{G})=\sum_{u\in\mathcal{U}}f^{\alpha}_{u}(\mathcal{G})\) to denote the total collected reward. We will also drop the superscript \(\alpha\) for notational convenience wherever applicable. ### Objective Solving the problem in (1) is very challenging in the online case, where the agent has to make irreversible decisions without knowing the future online item arrivals. Next, we define a generalized competitiveness as a metric of robustness and then present our optimization objective. **Definition 1** (Competitiveness).: An online bipartite matching algorithm \(\alpha\) is said to be \(\rho\)-competitive with \(\rho\geq 0\) against the algorithm \(\pi\) if for any problem instance \(\mathcal{G}\), its total collected reward \(R^{\alpha}(\mathcal{G})\) satisfies \(R^{\alpha}(\mathcal{G})\geq\rho R^{\pi}(\mathcal{G})-B\), where \(B\geq 0\) is a constant independent of the problem input, and \(R^{\pi}\) is the total reward of the algorithm \(\pi\). Competitiveness against a given online algorithm \(\pi\) (a.k.a., expert) is common in the literature on algorithm designs (Christianson et al., 2022): the greater \(\rho\geq 0\), the better robustness of the online algorithm, although the average rewards can be worse. The constant \(B\geq 0\) relaxes the strict competitive ratio by allowing an additive _regret_(Antoniadis et al., 2020). When \(B=0\), the competitive ratio becomes the strict one. In practice, the expert algorithm \(\pi\) can be viewed as an existing solution currently in use, while the new RL-based algorithm is being pursued subject to a constraint that the collected reward must be at least \(\rho\) times of the expert. Additionally, if the expert itself has a competitive ratio of \(\lambda\leq 1\) against the offline oracle algorithm (OPT), then it will naturally translate into LOMAR being \(\rho\lambda\)-competitive against OPT. On top of worst-case robustness, we are also interested in the average award. Specifically, we focus on a setting where the problem instance \(\mathcal{G}=\{\mathcal{U},\mathcal{V},\mathcal{W}\}\) follows an _unknown_ distribution, whereas both the rewards \(\mathcal{W}\) and online arrival order within each instance \(\mathcal{G}\) can be adversarial. Nonetheless, the average reward and worst-case robustness are different, and optimizing one metric alone does not necessarily optimize the other one (which is a direct byproduct of Yao's principle (Yao, 1977). In fact, there is a tradeoff between the average performance and worst-case robustness in general online problems (Christianson et al., 2022). The reason is that an online algorithm that maximizes the average reward prioritizes typical problem instances, while conservativeness is needed by a robust algorithm to mitigate the worst-case uncertainties and outliers. In Lomar, we aim to maximize the average reward subject to worst-case robustness guarantees as formalized below: \[\max\mathbb{E}_{\mathcal{G}}\left[R^{\alpha}(\mathcal{G})\right] \tag{2a}\] \[\mathrm{s.t.} R^{\alpha}(\mathcal{G})\geq\rho R^{\pi}(\mathcal{G})-B,\;\;\; \forall\mathcal{G}, \tag{2b}\] where the expectation \(\mathbb{E}_{\mathcal{G}}\left[R^{\alpha}(\mathcal{G})\right]\) is over the randomness \(\mathcal{G}=\{\mathcal{U},\mathcal{V},\mathcal{W}\}\). 
The worst-case robustness constraint for each problem instance is significantly more challenging than an average reward constraint. Our problem in (2) is novel in that it generalizes the recent RL-based online algorithms (Alomrani et al., 2022) by guaranteeing worst-case robustness; it leverages robustness-aware RL training (Section 5) to improve the average reward and hence also differs from the prior ML-augmented algorithms that still predominantly focus on the worst-case performance (Christianson et al., 2022; Wei and Zhang, 2020). Some manually-designed algorithms focus on a _stochastic_ setting where the arrival order is random and/or the rewards \(\{w_{uv}\mid w_{u,\min}\leq w_{uv}\leq w_{u,\max},u\in\mathcal{U}\}\) of each online item are independently and identically distributed (i.i.d.) within each problem instance \(\mathcal{G}\) (Mehta, 2013). By contrast, our setting is significantly different -- we only assume an unknown distribution for the entire problem instance \(\mathcal{G}=\{\mathcal{U},\mathcal{V},\mathcal{W}\}\), while both the rewards \(\mathcal{W}\) and the online arrival order within each instance \(\mathcal{G}\) can be arbitrary. ## 4 Design of Online Switching for Robustness Assuming that the RL policy is already trained (to be addressed in Section 5), we now present the inference of LOMAR, which includes a novel online _switching_ step that dynamically follows either the RL decision or the expert decision, providing robustness guarantees against the expert. ### Online Switching While switching is common in (ML-augmented) online algorithms, _"how to switch"_ is highly non-trivial and a key merit of algorithm designs (Antoniadis et al., 2020; Christianson et al., 2022; Rutten et al., 2023). To guarantee robustness (i.e., \(\rho\)-competitiveness against a given expert for any \(\rho\in[0,1]\)), we propose a novel online algorithm (Algorithm 1). In the algorithm, we independently run an expert online algorithm \(\pi\) -- the cumulative reward and item matching decisions are all maintained virtually for the expert, but not used as the actual decisions. Based on the performance of the expert online algorithm, we design a robustness constraint which serves as the condition for online switching. ``` 0: Competitiveness constraint \(\rho\in[0,1]\) and \(B\geq 0\) 1: for \(v=1\) to \(|\mathcal{V}|\) do 2: Run the expert \(\pi\) and get the expert's decision \(x_{v}^{\pi}\). 3: If \(x_{v}^{\pi}\neq\) skip: \(\mathcal{V}_{x_{v}^{\pi},v}^{\pi}=\mathcal{V}_{x_{v}^{\pi},v-1}^{\pi}\bigcup\{v\}\), \(R_{v}^{\pi}=R_{v-1}^{\pi}+w_{x_{v}^{\pi},v}\). //Update the virtual decision set and reward of the expert 4: \(s_{u}=w_{uv}-h_{\theta}(I_{u},w_{uv}),\forall u\in\mathcal{U}\) //Run the RL model to get score \(s_{u}\) with history information \(I_{u}\) 5: \(\tilde{x}_{v}=\arg\max_{u\in\mathcal{U}_{a}\bigcup\{\mathrm{skip}\}}\{\{s_{u}\}_{u\in\mathcal{U}_{a}},s_{\mathrm{skip}}\}\), with \(s_{\mathrm{skip}}=0\) and \(\mathcal{U}_{a}=\{u\in\mathcal{U}\mid|\mathcal{V}_{u,v-1}|<c_{u}\}\). //Get the RL decision \(\tilde{x}_{v}\) 6: if the robustness constraint in (3) is satisfied then 7: Select \(x_{v}=\tilde{x}_{v}\). //Follow RL 8: else if \(x_{v}^{\pi}\) is available (i.e., \(|\mathcal{V}_{x_{v}^{\pi},v-1}|<c_{x_{v}^{\pi}}\)) then 9: Select \(x_{v}=x_{v}^{\pi}\). //Follow the expert 10: else 11: Select \(x_{v}=\) skip. 12: end if 13: If \(x_{v}\neq\) skip, \(\mathcal{V}_{x_{v},v}=\mathcal{V}_{x_{v},v-1}\bigcup\{v\}\), \(R_{v}=R_{v-1}+w_{x_{v},v}\).
//Update the true decision set and reward 14:endfor ``` **Algorithm 1** Inference of Robust Learning-based Online Bipartite Matching (LOMAR) Concretely, we define the set of online items that are actually matched to offline item \(u\in\mathcal{U}\) before the start of the \((v+1)\)-th step as \(\mathcal{V}_{u,v}\), and the set of online items that are virtually matched to offline item \(u\in\mathcal{U}\) by the expert before the start of the \((v+1)\)-th step as \(\mathcal{V}_{u,v}^{\pi}\). Initially, we have \(\mathcal{V}_{u,0}=\emptyset\) and \(\mathcal{V}_{u,0}^{\pi}=\emptyset\). We also denote \(\mathcal{U}_{a}\) as the set of available offline items and initialize it as \(\mathcal{U}\). When an online item \(v\) arrives at each step, Algorithm 1 first runs the expert algorithm \(\pi\), gets the expert decision \(x_{v}^{\pi}\), and updates the virtual decision set and reward if the expert decision is not to skip this step. Then the RL policy gives the score \(s_{u}\) of each offline item \(u\in\mathcal{U}\). By assigning the score of skipping as \(0\) and comparing the scores, the algorithm obtains the RL action advice \(\tilde{x}_{v}\). Then the algorithm performs online switching to guarantee robustness. The most crucial step for safeguarding RL decisions is our online switching step: Lines 6-12 in Algorithm 1. The key idea for this step is to switch between the expert decision \(x_{v}^{\pi}\) and the RL decision \(\tilde{x}_{v}\) in order to ensure that the actual online decision \(x_{v}\) meets the \(\rho\)-competitive requirement (against the expert \(\pi\)). Specifically, we follow the RL decision \(\tilde{x}_{v}\) only if it can safely hedge against any future uncertainties (i.e., the expert's future reward increase); otherwise, we need to roll back to the expert's decision \(x_{v}^{\pi}\) to stay on track for robustness. Nonetheless, naive switching conditions, e.g., only ensuring that the actual cumulative reward is at least \(\rho\) times the expert's cumulative reward at each step (Wu et al., 2016; Yang et al., 2022), can fail to meet the competitive ratio requirement in the end. The reason is that, even though the competitive ratio requirement is met (i.e., \(R_{v}\geq\rho R_{v}^{\pi}-B\)) at the current step \(v\), the expert can possibly obtain much higher rewards from future online items \(v+1,v+2,\cdots\), if it has additional offline item capacity that the actual algorithm LOMAR does not have. Thus, we must carefully design the switching conditions to hedge against future risks. ### Robustness Constraint In the no-free-disposal case, an offline item \(u\in\mathcal{U}\) cannot receive any additional online items if it has already been matched \(c_{u}\) times, i.e., up to its capacity. By assigning more online items to \(u\in\mathcal{U}\) than the expert algorithm at step \(v\), LOMAR can possibly receive a higher cumulative reward than the expert's cumulative reward. But such advantages are just _temporary_, because the expert may receive an even higher reward in the future by filling up the unused capacity of item \(u\). Thus, to hedge against future uncertainties, LOMAR chooses the RL decision only when the following condition is satisfied: \[R_{v-1}+w_{\tilde{x}_{v},v}\geq\rho\Big{(}R_{v}^{\pi}+\sum_{u\in\mathcal{U}}\left(|\mathcal{V}_{u,v-1}|-|\mathcal{V}_{u,v}^{\pi}|+\mathbb{I}_{u=\tilde{x}_{v}}\right)^{+}\cdot w_{u,\max}\Big{)}-B, \tag{3}\]
where \(\mathbb{I}_{u=\tilde{x}_{v}}=1\) if and only if \(u=\tilde{x}_{v}\) and 0 otherwise, \((\cdot)^{+}=\max(\cdot,0)\), and \(\rho\in[0,1]\) and \(B\geq 0\) are the hyperparameters indicating the desired robustness with respect to the expert algorithm \(\pi\). The interpretation of (3) is as follows. The left-hand side is the total reward of LOMAR after assigning the online item \(v\) based on the RL decision (i.e., \(\tilde{x}_{v}\)). The right-hand side is the expert's cumulative reward \(R_{v}^{\pi}\), plus the term \(\sum_{u\in\mathcal{U}}\left(|\mathcal{V}_{u,v-1}|-|\mathcal{V}_{u,v}^{\pi}|+\mathbb{I}_{u=\tilde{x}_{v}}\right)^{+}\cdot w_{u,\max}\), which indicates the maximum reward that can possibly be received by the expert in the future. This reservation term is crucial, especially when the expert has more unused capacity than LOMAR. Specifically, \(|\mathcal{V}_{u,v-1}|\) is the number of online items (after assigning \(v-1\) items) already assigned to the offline item \(u\in\mathcal{U}\), and hence \(\left(|\mathcal{V}_{u,v-1}|-|\mathcal{V}_{u,v}^{\pi}|+\mathbb{I}_{u=\tilde{x}_{v}}\right)^{+}\) represents the number of additional online items that LOMAR will have assigned to \(u\), compared with the expert, if LOMAR follows the RL decision at step \(v\). If LOMAR assigns fewer items than the expert to an offline item \(u\in\mathcal{U}\), there is no need for any hedging, because LOMAR is guaranteed to receive more rewards by filling up the item \(u\) up to the expert's assignment level. The term \(w_{u,\max}\) in (3) is set as the maximum possible reward for each decision. Even when \(w_{u,\max}\) is unknown in advance, LOMAR still applies by simply setting \(w_{u,\max}=\infty\). In this case, LOMAR will be less "greedy" than the expert and never use more resources than the expert at any step. While we have focused on the no-free-disposal setting to highlight the key idea of our switching condition (i.e., not following the RL decisions too aggressively by hedging against future reward uncertainties), the free-disposal setting requires a very different switching condition, which we defer to Appendix B due to the page limit. ### Robustness Analysis We now formally show the competitive ratio of LOMAR. The proof is available in the appendix. **Theorem 4.1**.: _For any \(0\leq\rho\leq 1\) and \(B\geq 0\) and any expert algorithm \(\pi\), LOMAR achieves a competitive ratio of \(\rho\) against the algorithm \(\pi\), i.e., \(R\geq\rho R^{\pi}-B\) for any problem input._ The hyperparameters \(0\leq\rho\leq 1\) and \(B\geq 0\) govern the level of robustness we would like to achieve, at the potential expense of average reward performance. For example, by setting \(\rho=1\) and \(B=0\), we achieve the same robustness as the expert but leave little to no freedom for RL decisions. On the other hand, by setting a small \(\rho>0\) and/or a large \(B\), we provide higher flexibility to RL decisions for better average performance, while potentially decreasing the robustness. In fact, such a tradeoff is necessary in the broad context of ML-augmented online algorithms (Rutten et al., 2022; Christianson et al., 2022). Additionally, in the case of multiple experts, we can first combine these experts into a single expert and then apply LOMAR as if it works with a single combined expert.
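Returning to the switching condition in (3), the following is a minimal sketch (with hypothetical bookkeeping names) of the check performed in Line 6 of Algorithm 1; it mirrors the reservation term discussed above.

```python
# Sketch of the robustness check in Eq. (3), assuming hypothetical bookkeeping:
# R_prev     : LOMAR's cumulative reward R_{v-1}
# R_pi       : expert's cumulative reward R_v^pi (the expert already processed item v)
# V_sizes    : dict u -> |V_{u,v-1}|, items LOMAR has matched to u so far
# V_pi_sizes : dict u -> |V^pi_{u,v}|, items the expert has matched to u
# w_max      : dict u -> w_{u,max} (may be float('inf') if unknown in advance)

def can_follow_rl(rl_choice, w_rl, R_prev, R_pi, V_sizes, V_pi_sizes, w_max, rho, B):
    """Return True if following the RL decision `rl_choice` keeps LOMAR rho-competitive."""
    if rl_choice == "skip":
        w_rl = 0.0
    # Reserve the reward the expert could still collect on items where LOMAR is "ahead".
    reservation = 0.0
    for u in V_sizes:
        extra = V_sizes[u] - V_pi_sizes[u] + (1 if u == rl_choice else 0)
        if extra > 0:
            reservation += extra * w_max[u]
    return R_prev + w_rl >= rho * (R_pi + reservation) - B
```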
While the competitive ratio of all online algorithms against the optimal offline algorithm is zero in the no-free-disposal and general adversarial setting, there exist provably competitive online expert algorithms under some technical assumptions and other settings (Mehta, 2013). For example, the simple greedy algorithm achieves \(\left(1+\max_{u\in\mathcal{U}}\frac{w_{u,\max}}{w_{u,\min}}\right)^{-1}\) under bounded-weights assumptions for the adversarial no-free-disposal setting (Kim and Moon, 2020) and \(\frac{1}{2}\) for the free-disposal setting (Fahrbach et al., 2020), and there also exist \(1/e\)-competitive algorithms against the optimal offline algorithm for the random-order setting (Mehta, 2013). Thus, an immediate result follows. **Corollary 4.1.1**.: _For any \(0\leq\rho\leq 1\) and \(B\geq 0\), by using Algorithm 1 with an expert online algorithm \(\pi\) that is \(\lambda\)-competitive against the optimal offline algorithm OPT, under the same assumptions for \(\pi\) to be \(\lambda\)-competitive, LOMAR is \(\rho\lambda\)-competitive against OPT._ Corollary 4.1.1 provides a general result that applies to any \(\lambda\)-competitive expert algorithm \(\pi\) under its respective required assumptions. For example, if the expert \(\pi\) assumes an adversarial or random-order setting, then Corollary 4.1.1 holds under the same adversarial or random-order setting. ## 5 RL Policy Training with Online Switching The prior ML-augmented online algorithms typically assume a standalone RL model that is pre-trained without considering what the online algorithm will perform (Christianson et al., 2022). Thus, while the standalone RL model may perform well on its own, its performance can be poor when directly used in LOMAR due to the added online switching step. In other words, there will be a training-testing mismatch. To rectify the mismatch, we propose a novel approach to train the RL model in LOMAR by explicitly considering the switching operation. **RL architecture.** For online bipartite matching, there exist various network architectures, e.g., fully-connected networks and scalable invariant networks for general graph sizes. The recent study (Alomrani et al., 2022) has shown through extensive empirical experiments that the invariant network architecture, where each offline-online item pair runs a separate neural network with shared weights among all the item pairs, is empirically advantageous, due to its scalability to large graphs and good average performance. We denote the RL model as \(h_{\theta}(I_{u},w_{uv})\), where \(\theta\) is the network parameter. By feeding the item weight \(w_{uv}\) and applicable history information \(I_{u}\) for each offline-online item pair \((u,v)\), we can use the RL model to output a _threshold_ for possible item assignment, following threshold-based algorithms (Alomrani et al., 2022; Huang et al., 2019; Mehta, 2013). The history information \(I_{u}\) includes, but is not limited to, the average value and variance of weights assigned to \(u\), the average in-degree of \(u\), and the maximum weight for the already matched items. More details about the information can be found in the appendix. Then, with the RL output, we obtain a score \(s_{u}=w_{uv}-h_{\theta}(I_{u},w_{uv})\) for each possible assignment. **Policy training.** Training the RL model by considering switching in Algorithm 1 is non-trivial.
Most critically, the initial RL decisions can perform arbitrarily badly upon policy initialization, which means that the initial RL decisions are almost always overridden by the expert's decisions for robustness. Due to following the expert's decisions, the RL agent almost always receives a good reward, which actually has nothing to do with the RL's own decisions and hence provides little to no supervision to improve the RL policy. Consequently, this creates a _gridlock_ for RL policy training. While using an offline pre-trained standalone RL model without considering online switching (e.g., (Alomrani et al., 2022)) as an initial policy may partially address this gridlock, this is certainly inefficient as we have to spend resources for training another RL model, let alone the likelihood of being trapped in the standalone RL model's suboptimal policies (e.g., local minima). To address these issues, during training, we introduce a softmax probability to approximate the otherwise non-differentiable switching operation. Specifically, the switching probability depends on the cumulative reward difference \(R_{diff}\) in the switching condition, which is \[R_{diff}=R_{v-1}+w_{\tilde{x}_{v},v}+B-\rho\cdot\Big{(}R_{v}^{\pi}+\sum_{u\in\mathcal{U}}\big{(}|\mathcal{V}_{u,v-1}|-|\mathcal{V}_{u,v}^{\pi}|+\mathbb{I}_{u=\tilde{x}_{v}}\big{)}^{+}\cdot w_{u,\max}\Big{)}\] Then, the probability of following RL is \(p_{os}=\frac{e^{R_{diff}/t}}{1+e^{R_{diff}/t}}\), where \(t\) is the softmax function's temperature. Importantly, this softmax probability is differentiable and hence allows backpropagation to train the RL model weight \(\theta\) to maximize the expected total reward while being aware of the switching operation for robustness. Next, with _differentiable_ switching, we train the RL model by policy gradient (Williams, 1992) to optimize the policy parameter \(\theta\). Denote \(\tau=\{x_{1},\cdots,x_{|\mathcal{V}|}\}\) as an action trajectory sample and \(p_{\theta}(x_{v}|I_{u})\) as the probability of matching offline item \(u\) to online item \(v\): \[p_{\theta}(x_{v}|I_{u})=(1-p_{os})\cdot\tilde{p}_{\theta}(x_{v}|I_{u})+p_{os}\cdot p_{\theta}^{\pi}(x_{v}|I_{u}), \tag{4}\] where \(\tilde{p}_{\theta}(x_{v}|I_{u})\) is the RL's item selection probability obtained with the RL's output score \(s_{u}\), and \(p_{\theta}^{\pi}(x_{v}|I_{u})\) is the item selection probability for the expert \(\pi\). If the expert's item selection is not available (i.e., Line 11 in Algorithm 1), then \(x_{v}^{\pi}\) will be replaced with \(\operatorname*{skip}\) when calculating (4). During the training process, our goal is to maximize the expected total reward \(R_{\theta}=\mathbb{E}_{\tau\sim p_{\theta}}\left[\sum_{v\in\mathcal{V}}w_{x_{v},v}\right]\). Thus, at each training step, given an RL policy with parameter \(\theta\), we sample \(n\) action trajectories \(\{\tau_{i}=\{x_{1,i},\cdots,x_{|\mathcal{V}|,i}\},i\in[n]\}\) and record the corresponding rewards. We can get the approximated average reward as \(\hat{R}_{\theta}=\frac{1}{n}\sum_{i=1}^{n}\sum_{v\in\mathcal{V}}w_{x_{v,i},v}^{i}\), and calculate the gradient as \[\nabla_{\theta}\hat{R}_{\theta}=\sum_{i=1}^{n}\left(\sum_{v\in\mathcal{V}}\nabla_{\theta}\log p_{\theta}(x_{v,i}|I_{u,i})\right)\left(\sum_{v\in\mathcal{V}}w_{x_{v,i},v}^{i}\right) \tag{5}\] Then, we update the parameter \(\theta\) by \(\theta=\theta+\alpha\nabla_{\theta}\hat{R}_{\theta}\), where \(\alpha\) is the step size. This process repeats until convergence and/or the maximum number of iterations is reached.
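To illustrate how the pieces above fit together, here is a minimal PyTorch-style sketch of the soft switching probability and a REINFORCE-style update; the tensor and list names are hypothetical, and the expert's selection distribution is assumed to be one-hot at its chosen item.

```python
import torch
import torch.nn.functional as F

def soft_switch_prob(R_prev, w_rl, R_pi, reservation, rho, B, temperature):
    """p_os = sigmoid(R_diff / t): probability of keeping the RL decision."""
    R_diff = torch.as_tensor(R_prev + w_rl + B - rho * (R_pi + reservation),
                             dtype=torch.float32)
    return torch.sigmoid(R_diff / temperature)

def mixed_probs(rl_probs, expert_index, p_os):
    """Eq. (4): mix the RL selection distribution with the expert's (one-hot) choice."""
    one_hot = F.one_hot(torch.tensor(expert_index), num_classes=rl_probs.numel()).float()
    return (1 - p_os) * rl_probs + p_os * one_hot

def policy_gradient_step(traj_log_probs, traj_rewards, optimizer):
    """REINFORCE-style update in the spirit of Eq. (5).

    traj_log_probs[i] is a list of per-step log p_theta(x_{v,i} | I_u) tensors and
    traj_rewards[i] is the total reward of trajectory i.
    """
    loss = torch.zeros(())
    for log_ps, R in zip(traj_log_probs, traj_rewards):
        loss = loss - torch.stack(log_ps).sum() * R  # maximize reward <=> minimize negative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```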
At the beginning of the policy training, we can set a high temperature \(t\) to encourage the RL model to explore more aggressively, instead of sticking to the expert's decisions. As the RL model performance continuously improves, we can reduce the temperature in order to make the RL agent more aware of the downstream switching operation. The training process is performed offline as in the existing RL-based optimizers (Alomrani et al., 2022; Du et al., 2022) and described in Algorithm 2 for one iteration. ## 6 Experiment ### Setup We conduct experiments based on the movie recommendation application. Specifically, when a user (i.e., online item \(v\)) arrives, we recommend a movie (i.e., offline item \(u\)) to this user and receive a reward based on the user-movie preference information. We choose the MovieLens dataset (Harper and Konstan, 2015), which provides a total of 3952 movies, 6040 users and 100209 ratings. We preprocess the dataset by randomly sampling movies and users to generate subgraphs, following the same steps as used by (Dickerson et al., 2019) and (Alomrani et al., 2022). In the testing dataset, we empirically evaluate each algorithm using the average reward (**AVG**) and the competitive ratio (**CR**, against OPT), which represent the average performance and worst-case performance, respectively. Thus, the value of CR is the empirically worst reward ratio in the testing dataset. For a fair comparison, all the experimental settings like the capacity \(c_{u}\) follow those used in (Alomrani et al., 2022). More details about the setup and training are in Appendix A. **Baseline Algorithms.** We consider the following baselines. All the RL policies are trained offline with the same architecture and applicable hyperparameters. **OPT:** The offline optimal oracle has the complete information about the bipartite graph. We use the Gurobi optimizer to find the optimal offline solution. **Greedy:** At each step, Greedy selects the available offline item with the highest weight. **DRL:** It uses the same architecture as in LOMAR, but does not consider online switching for training or inference. That is, the RL model is both trained and tested with \(\rho=0\). More specifically, our RL architecture has 3 fully connected layers, each with 100 hidden nodes. **DRL-OS (DRL-OnlineSwitching):** We apply online switching to the same RL policy used by DRL during inference. That is, the RL model is trained with \(\rho=0\), but tested with a different \(\rho>0\). This is essentially an existing ML-augmented algorithm that uses the standard practice (i.e., pre-train a standalone RL policy) (Christianson et al., 2022). Our baselines include all those considered in (Alomrani et al., 2022). In the no-free-disposal setting, the best competitive ratio is 0 in general adversarial cases (Mehta, 2013). Here, we use Greedy as the expert, because the recent study (Alomrani et al., 2022) has shown that Greedy performs better than other alternatives and is a strong baseline. ### Results **Reward comparison.** We compare LOMAR with baseline algorithms in Table 1. First, we see that DRL has the highest average reward, but its empirical competitive ratio is the lowest. The expert algorithm Greedy is fairly robust, but has a lower average reward than RL-based policies. Second, DRL-OS can improve the competitive ratio compared to DRL. But its RL policy is trained alone without being aware of the online switching.
Thus, by making the RL policy aware of online switching, LOMAR can improve the average reward compared to DRL-OS. Specifically, by training LOMAR with the same \(\rho\) as used for testing, we can obtain both the highest average reward and the highest competitive ratio. One exception is the minor decrease of the competitive ratio when \(\rho=0.8\) for testing. This is likely due to the dataset, as a few hard instances can affect the empirical competitive ratio, which also explains why the empirical competitive ratio is not necessarily monotonically increasing in \(\rho\in[0,1]\). \begin{table} \begin{tabular}{c|c c|c c|c c|c c|c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{DRL-OS} & \multicolumn{2}{c}{LOMAR (\(\rho=0.4\))} & \multicolumn{2}{c}{LOMAR (\(\rho=0.6\))} & \multicolumn{2}{c}{LOMAR (\(\rho=0.8\))} & \multicolumn{2}{c}{Greedy} \\ \hline Test & AVG & CR & AVG & CR & AVG & CR & AVG & CR & AVG & CR \\ \hline \(\rho=0.4\) & 12.315 & 0.800 & **12.364** & **0.819** & 12.288 & 0.804 & 12.284 & 0.804 & 11.000 & 0.723 \\ \(\rho=0.6\) & 11.919 & 0.787 & 11.982 & 0.807 & **11.990** & **0.807** & 11.989 & 0.800 & 11.000 & 0.723 \\ \(\rho=0.8\) & 11.524 & **0.773** & 11.538 & 0.766 & 11.543 & 0.762 & **11.561** & 0.765 & 11.000 & 0.723 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison under different \(\rho\). In the top, LOMAR (\(\rho=x\)) means LOMAR is trained with the value of \(\rho=x\). The average reward and competitive ratio are represented by AVG and CR, respectively — the higher, the better. The highest value in each testing setup is highlighted in bold. The AVG and CR for DRL are **12.909** and **0.544**, respectively. The average reward for OPT is **13.209**. Nonetheless, unlike DRL that may only work well empirically without guarantees, LOMAR offers provable robustness while exploiting the power of RL to improve the average performance. The boxplots in Fig. 1 visualize the reward ratio distribution of LOMAR, further validating the importance of switching-aware training. **Impact of \(\rho\).** To show the impact of \(\rho\), we calculate the bi-competitive reward ratios. Specifically, for each problem instance, the bi-competitive ratio compares the actual reward against those of Greedy and the RL model, respectively. To highlight the effect of online switching, we focus on DRL-OS (i.e., training the RL with \(\rho=0\)), whose RL training process is not affected by \(\rho\), because the RL model trained with \(\rho>0\) in LOMAR does not necessarily perform well on its own and the reward ratio of LOMAR to its RL model is not meaningful. The histograms of the bi-competitive ratios are visualized in Fig. 2. When \(\rho=0\), the ratio of DRL-OS/DRL is always 1, unsurprisingly, since DRL-OS is essentially the same as DRL in this case (i.e., both trained and tested with \(\rho=0\)). With a large \(\rho\) (e.g., 0.8) for testing, the reward ratios of DRL-OS/Greedy for most samples are around 1, which means the robustness is achieved, as proven by our theoretical analysis. But on the other hand, DRL-OS has limited flexibility and cannot fully exploit the good average performance of DRL. Thus, the hyperparameter \(\rho\in[0,1]\) governs the tradeoff between average performance and robustness relative to the expert and, like other hyperparameters, can be tuned to maximize the average performance subject to the robustness requirement. We also consider a crowdsourcing application, as provided by the gMission dataset (Chen et al., 2014).
Additional results for gMission are deferred to Appendix A. ## 7 Conclusion In this paper, we propose LOMAR for edge-weighted online bipartite matching. LOMAR includes a novel online switching operation to decide whether to follow the expert's decision or the RL decision for each online item arrival. We prove that for any \(\rho\in[0,1]\), LOMAR is \(\rho\)-competitive against any expert online algorithm, which directly translates into a bounded competitive ratio against OPT if the expert algorithm itself has one. We also train the RL policy by explicitly considering the online switching operation so as to improve the average performance. Finally, we run empirical experiments to validate LOMAR. There are also interesting problems that remain open, such as how to incorporate multiple RL models or experts and what the performance bound of LOMAR is compared to pure RL in terms of the average reward. ## Acknowledgement This work was supported in part by the U.S. National Science Foundation under the grant CNS-1910208. Figure 1: Boxplot for the reward ratio with different \(\rho\) within the testing dataset. Greedy and DRL-OS are also shown here for comparison. The best average performance in each figure is achieved by choosing the same \(\rho\) during training and testing. Figure 2: Histogram of bi-competitive reward ratios of DRL-OS (against Greedy and DRL) under different \(\rho\).
Problems such as online advertisement display can be formulated as online bipartite matching, where a key challenge is making irreversible matching decisions based on sequentially revealed online item information. Many expert online algorithms with bounded worst-case competitive ratios have been proposed, but they may not provide satisfactory performance in the average case. On the other hand, reinforcement learning (RL) has been applied to improve average performance, but it lacks robustness and can exhibit arbitrarily bad performance. In this paper, we propose learning-based online bipartite matching with robustness guarantees (LOMAR). A key feature of LOMAR is a novel online switching operation that decides whether to follow the expert's decision or the RL decision, based on a carefully designed condition to hedge against future uncertainties. For any \(\rho\in[0,1]\), LOMAR is \(\rho\)-competitive against any given expert online algorithm.
2309.07098
Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding
Hallucinations and off-target translation remain unsolved problems in MT, especially for low-resource languages and massively multilingual models. In this paper, we introduce two related methods to mitigate these failure cases with a modified decoding objective, without either requiring retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input, but improbable given a random input segment. In language-contrastive decoding, we search for a translation that is probable, but improbable given the wrong language indicator token. Experiments on the massively multilingual models M2M-100 (418M) and SMaLL-100 show that these methods suppress hallucinations and off-target translations, reducing the number of translations with segment-level chrF2 below 10 by 67-83% on average, and the number of translations with oscillatory hallucinations by 75-92% on average, across 57 tested translation directions. In a proof of concept on out-of-English translation, we also show that we can suppress off-target translations with large language models. We release our source code at https://github.com/ZurichNLP/ContraDecode.
Rico Sennrich, Jannis Vamvas, Alireza Mohammadshahi
2023-09-13T17:15:27
http://arxiv.org/abs/2309.07098v2
Mitigating Hallucinations and Off-target Machine Translation with Source-Contrastive and Language-Contrastive Decoding ###### Abstract Hallucinations and off-target translation remain unsolved problems in machine translation, especially for low-resource languages and massively multilingual models. In this paper, we introduce methods to mitigate both failure cases with a modified decoding objective, without either requiring retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input, but improbable given a random input segment, hypothesising that hallucinations will be similarly probable given either. In language-contrastive decoding, we search for a translation that is probable, but improbable given the wrong language indicator token. In experiments on M2M-100 (418M) and SMaLL-100, we find that these methods effectively suppress hallucinations and off-target translations, improving chrF2 by 1.7 and 1.3 points on average across 57 tested translation directions. In a proof of concept on English-German, we also show that we can suppress off-target translations with the Llama 2 chat models, demonstrating the applicability of the method to machine translation with LLMs. We release our source code.1 Footnote 1: [https://github.com/ZurichNLP/ContraDecode](https://github.com/ZurichNLP/ContraDecode) ## 1 Introduction Hallucinations are a long-standing well-known problem in machine translation (MT) (Koehn and Knowles, 2017) and natural language generation (Ji et al., 2023). While there has been extensive research on their identification and mitigation (Lee et al., 2019; Raunak et al., 2021; Mohammadshahi et al., 2022; Guerreiro et al., 2023; Dale et al., 2023, among others), they still persist as an issue, especially in low-resource settings. Contrastive conditioning (Vamvas and Sennrich, 2021) has previously been used for the analysis of specific translation errors such as disambiguation errors and undertranslation (Vamvas and Sennrich, 2022). The main idea is that translations that are equally or more probable given some corrupted source than the true source are likely to be erroneous in respect to the part of the source that was corrupted. We can apply the same intuition to hallucinations and translations that are in the wrong language, so called off-target translations: if hallucinations are detached from the source, they should have a similar probability given the true source and given a random other source. If a translation is in the wrong language, it should have a similar or higher probability if that language is marked as the desired output language. Inspired by this, we design decoding objectives that do not simply search for the most probable translation under our model, but search for a translation that maximizes the probability given the true input, while at the same time minimizing the probability given one or several contrastive inputs. To sum up, this paper makes the following contributions: * We propose decoding objectives to address two problems often observed in MT: we mitigate hallucinations with source-contrastive decoding and suppress off-target translations with language-contrastive decoding. Figure 1: Our decoding objective yields a translation that is probable given the actual input, but improbable given a source-contrastive or language-contrastive input. 
* By evaluating two massively multilingual MT models, M2M-100 (418M) and SMaLL-100, across 57 translation directions, we demonstrate the effectiveness of both decoding objectives, improving translation quality for low-resource translation directions, improving chrF2 by 1.7 and 1.3 points for M2M-100 and SMaLL-100, respectively. * Finally, we provide a proof of concept for applying our approach to LLM-based translation, where off-target issues are common. ## 2 Method Different from previous work on contrastive conditioning that focused on analyzing translation errors, we modify the decoding objective to improve translations. To suppress hallucinations, we pair each input \(X\) with a randomly selected input segment \(X^{\prime}\).2 Rather than finding a translation that maximizes \(p(Y|X)\), we search for one that both maximizes \(p(Y|X)\) and minimizes \(p(Y|X^{\prime})\). We add a hyperparameter \(\lambda\) to control the strength of this contrastive penalty, yielding equation 1. Footnote 2: In practice, by shuffling the segments of the input document. \[score(Y,X)=\sum_{i=1}^{n}-\log\biggl{(}p(y_{i}|y_{<i},X)\\ -\lambda p(y_{i}|y_{<i},X^{\prime})\biggr{)} \tag{1}\] We denote this decoding objective **source-contrastive decoding**. Off-target translations are a common failure mode in multilingual MT systems Arivazhagan et al. (2019). They have been linked to the predominance of English in the training of multilingual systems Rios et al. (2020). Production of text in the source language, often a copy of the input, is connected to the occurrence of copying in the training data (from innocuous copying of names to segment-level copies due to noisy data extraction), and the high probability of continuing to copy once a copy has been started Ott et al. (2018). The majority of multilingual machine translation systems use special tokens to indicate the desired target language, following Johnson et al. (2017)3. To penalize output in the wrong language, we can add contrastive inputs that keep the original source segment, but vary in the choice of language indicator token. Footnote 3: The target language indicator token is in the source segment for SMaLL-100, and the beginning of the output segment in M2M-100, using forced decoding. Let \(l_{y}\) be the language indicator token, and \(l_{\hat{y}}\) the desired target language. We simply add contrastive variants \(l_{y^{\prime}}\) for output languages we wish to suppress. Based on the predominant off-target languages in multilingual MT Arivazhagan et al. (2019), we include English4 and the respective source language5 in the set of contrastive languages. This results in equation 2. Footnote 4: Unless the desired target language is English. Footnote 5: If English is the source language, we deduplicate. \[score(Y,X)=\sum_{i=1}^{n}-\log\biggl{(}p(y_{i}|y_{<i},X,l_{y}=l _{\hat{y}})\\ -\sum_{l_{y^{\prime}}}\lambda p(y_{i}|y_{<i},X,l_{y}=l_{y^{\prime }})\biggr{)} \tag{2}\] We refer to decoding with contrastive translation directions as **language-contrastive decoding**. We can combine source-contrastive and language-contrastive decoding by summing all contrastive variants, and will then refer to the individual weights as \(\lambda_{\text{src}}\) and \(\lambda_{\text{lang}}\). ## 3 Evaluation ### Data and Models We perform our experiments with two massively multilingual machine translation models: M2M-100 (418M) Fan et al. (2020), and SMaLL-100 Mohammadshahi et al. (2022), a distilled version of M2M-100 (12B). We use beam size 5 across experiments. 
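Before turning to further evaluation details, the per-token scoring of Eqs. (1) and (2) can be summarized in a short model-agnostic sketch; the probability inputs and the small clipping constant are assumptions for illustration only, and a full implementation would plug this scoring into beam search.

```python
import math

def contrastive_token_score(p_true, p_contrastive_list, lam):
    """Per-token score: -log( p(y_i | true input) - lam * sum of contrastive probabilities ).

    p_true is p(y_i | y_<i, X); p_contrastive_list holds the same token's probability under
    each contrastive input (a random source X' and/or wrong language-indicator variants).
    """
    val = p_true - lam * sum(p_contrastive_list)
    # Clip to keep the logarithm defined when the contrastive penalty dominates (assumption).
    return -math.log(max(val, 1e-12))

def sequence_score(token_probs_true, token_probs_contrastive, lam):
    """Accumulate the per-token contrastive scores over a candidate translation (lower is better)."""
    return sum(contrastive_token_score(p, contr, lam)
               for p, contr in zip(token_probs_true, token_probs_contrastive))
```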
We employ minimal hyper-parameter tuning on the ps-ast translation direction with M2M-100 and set \(\lambda_{\text{src}}\) to \(0.7\). We exclude ps-ast from the average results reported. Since off-target translation only affects a small number of translation directions, we report results without any hyperparameter tuning, simply setting \(\lambda_{\text{lang}}=0.1\). We evaluate our method on three sets of translation directions: * the 25 non-English-centric directions used by Guerreiro et al. (2023) (**HLMT**). These are af-zu, ar-fr, be-ru, cs-sk, de-hr, de-hu, el-tr, fr-sw, hi-bn, hi-mr, hr-cs, hr-hu, hr-sk, hr-sr, it-de, it-fr, nl-de, nl-fr, ro-de, ro-hu, ro-hy, ro-ru, ro-tr, ro-uk, uk-ru.6 Footnote 6: See Appendix B for full language names. * 29 translation directions (all but ps-ast) between 5 low-resource languages from different branches of Indo-European, plus Zulu from the Atlantic-Congo family (**X-branch**): af, ast, hr, ps, ur, zu. * 4 high-resource translation directions: en-de, de-en, en-fr, fr-en (**high-res**). We additionally report results for the union of these sets (**all**). We evaluate the methods with spBLEU (Goyal et al., 2022) and chrF2 (Popovic, 2015) using sacreBLEU (Post, 2018)7 on the Flores-101 devtest set (Goyal et al., 2022). We use OpenLID (Burchell et al., 2023) for language identification to measure off-target translation rates. To quantify the number of hallucinations, we employ a rough approximation following Lee et al. (2019); Müller and Sennrich (2021), counting the proportion of segments with chrF2 \(<10\).8 Footnote 7: spBLEU signature: BLEU|#:1|c:mixed|e:no|tok:flores101|s:exp|v:2.3.1; chrF2 signature: chrF2|#:1|c:mixed|e:yes|nc:6|nw:0|s:no|v:2.3.1 Footnote 8: Müller and Sennrich (2021) report a threshold of 1, but we confirmed that this is a typo (personal communication with Mathias Müller). Note that this method does not distinguish between hallucinations and off-target translations. ### Results We report results using source-contrastive decoding (\(C_{src}\)), and combining source-contrastive and language-contrastive decoding (\(C_{src+lang}\)), in Tables 1 and 2.9 Across the 57 translation directions tested, chrF2 improves by 1.3 (M2M-100) and 1.1 (SMaLL-100) points with source-contrastive decoding. When adding language-contrastive decoding, we see additional gains in chrF2 of 0.4 (M2M-100) and 0.2 (SMaLL-100). Footnote 9: See Appendix A for full results. Improvements are more modest when measured with spBLEU (0.2 on M2M-100; 0.3 on SMaLL-100). We notice that hallucinations tend to be over-long, and can perversely improve BLEU by reducing the brevity penalty. We thus consider chrF2, which pairs n-gram precision with n-gram recall instead of a simplistic brevity penalty, to be our primary metric. Off-target translations are relatively rare for the translation directions tested, especially for SMaLL-100 (see Table 3). With M2M-100, the highest proportion of English outputs in the baseline was detected for af-zu (9.1%), and the highest percentage of outputs in the source language for hr-sr (4.2%)10. These are also among the translation directions that benefit the most from language-contrastive decoding: chrF2 increases by 2.3 for hr-sr11, and by 2 for af-zu.
However, we observe the largest increase in chrF2 (2.6) for ast-zu, a translation direction that sees an increase of off-target translations with \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{chrF2} & \multicolumn{4}{c}{spBLEU} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-10} baseline & 46.4 & 28.8 & 61.3 & 39.0 & 22.0 & 8.3 & 37.2 & 16.4 \\ \(C_{src}\) & 46.7 & 31.4 & 60.8 & 40.3 & 21.6 & 9.1 & 36.4 & 16.6 \\ \(C_{src+lang}\) & 46.8 & 32.1 & 60.7 & 40.7 & 21.5 & 9.3 & 36.1 & 16.6 \\ \hline \hline \end{tabular} \end{table} Table 1: results for M2M-100. Averages over different sets of translation directions. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{chrF2} & \multicolumn{4}{c}{spBLEU} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-10} baseline & 48.3 & 32.0 & 62.5 & 41.4 & 23.5 & 10.2 & 38.7 & 18.1 \\ \(C_{src}\) & 48.5 & 34.2 & 62.1 & 42.5 & 23.2 & 11.1 & 37.9 & 18.4 \\ \(C_{src+lang}\) & 48.7 & 34.6 & 62.0 & 42.7 & 23.3 & 11.2 & 37.6 & 18.4 \\ \hline \hline \end{tabular} \end{table} Table 2: results for SMaLL-100. Averages over different sets of translation directions. source-contrastive decoding alone, and where the English output rate goes from 5.5% (baseline) to 9.9% (\(C_{src}\)) to 2.7% (\(C_{src+lang}\)). The proportion of translations with chrF2 below 10 is shown in Table 4. We observe large reductions in the number of defect translations, with a reduction from 7.3% to 1.2% for M2M-100, and from 5.6% to 1.8% for SMaLL-100. ### Ablation Studies The fact that we pick contrastive inputs from the test sets at random raises a few questions about this approximation. We repeated the translation with M2M-100 across all 57 translation directions 3 times and find that the standard deviation is minimal (0.0107 for chrF2). Using a single random input as a contrastive variant is a heavy approximation, but our ablation study in Table 5 shows that this yields the majority of the performance gains, and using up to 3 inputs as contrastive examples12 only yields an additional 0.1 point improvement in chrF2. Footnote 12: we divide \(\lambda_{\text{seq}}\) by the number of contrastive inputs. ## 4 Application to Large Language Models In this section, we demonstrate that our method can be applied to the prompting of large language models (LLM). Previous work has achieved competitive translation quality for some language pairs by prompting models such as PaLM (Vilar et al., 2023; Garcia et al., 2023), GPT (Hendy et al., 2023) or BLOOM (Bawden and Yvon, 2023). However, LLM-based translation is still prone to hallucination and off-target translation (Zhang et al., 2023; Guerreiro et al., 2023). Our demonstration is based on the Llama 2 model family (Touvron et al., 2023) and specifically the instruction-tuned version (_Llama Chat_), exploiting the fact that MT examples were among the data used for instruction tuning (Wei et al., 2022; Chung et al., 2022). We generate translations by instructing the model to translate a segment into a given language, force-decoding the line _"Sure, here's the translation:"_, and then decoding until the next line break. The template is provided in Appendix C. When using this simple prompting approach in the en-de direction, we find that off-target output in English is very common. Moreover, providing a 1-shot example in the prompt, while improving translation quality, does not prevent the off-target issue. 
We thus apply language-contrastive decoding and add a contrastive prompt that instructs the model to translate into English instead of German. The decoding objective is analogous to Eq. 2. Figure 2 shows the percentage of off-target output for different values of \(\lambda_{\text{lang}}\). Generally, we observe that the percentage falls with an increasing \(\lambda_{\text{lang}}\), demonstrating that our method can be effectively applied to LLM prompting. ## 5 Related Work #### 5.0.1 Hallucination Detection and Reduction Various methods have been proposed to detect hallucinations, including identifying typical patterns in the output (Raunak et al., 2021), using internal information like attention patterns (Lee et al., 2019) or the contribution of the source to the prediction (Dale et al., 2023), or measures of decoder confidence, including the probability of the output (Guerreiro et al., 2023) or the stability of samples under perturbation (Lee et al., 2019; Guerreiro et al., 2023). Hallucination mitigation is more difficult, especially if we assume that models are already trained using best practices, and focus on training-free methods. \begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{M2M-100} & \multicolumn{2}{c}{SMaLL-100} \\ & EN & SRC & EN & SRC \\ \cline{2-5} baseline & 260 & 55 & 54 & 63 \\ \(C_{src}\) & 375 & 47 & 78 & 70 \\ \(C_{src+lang}\) & 88 & 28 & 16 & 21 \\ \hline \hline \end{tabular} \end{table} Table 3: Total number of translations (out of 57684 output sentences) that are off-target, producing English (EN) or the source language (SRC) according to OpenLID. Figure 2: Off-target translation rate for Llama 2 Chat models when translating the English Flores-101 devtest set into German. Language-contrastive decoding tends to reduce off-target translation as \(\lambda_{\text{lang}}\) is increased. Several studies use external models for mitigation, e.g. using other translation models as a fall-back Guerreiro et al. (2023), or doing sample reranking based on quality estimation models Guerreiro et al. (2023). Our method has the advantage that it does not require external models, and we note that modern quality estimation metrics are themselves prone to scoring certain hallucinations highly Freitag et al. (2022). Mitigation methods that do not rely on external models are typically sampling-based. Guerreiro et al. (2023) report that even the translation model's own sequence probability can be used for sample reranking. A consensus translation can be identified via sampling-based Minimum Bayes Risk (MBR) decoding Eikema and Aziz (2020), which benefits from the fact that hallucinations are dissimilar from each other Müller and Sennrich (2021). #### 5.0.2 Contrastive Decoding Contrastive decoding bears similarity to contrastive learning (Hadsell et al., 2006; Socher et al., 2014; Gao et al., 2021, among others) in that positive and negative examples are contrasted, but it involves no training. Li et al. (2023) introduce a form of contrastive decoding that contrasts the probability between different models, whereas our methods work with a single model, contrasting probabilities given different inputs. Source-contrastive decoding can also be seen as a variant of implicit language model (ILM) compensation, mirroring recent work by Herold et al. (2023). Our work is different in motivation in that ILM is typically used to allow the inclusion of an external LM, whereas we show the effectiveness of simply suppressing the ILM.
Also, we show the effectiveness of a different, simple approximation, using a single contrastive source segment. Finally, language-contrastive decoding bears some resemblance to negative prompting, a technique used to suppress undesired concepts in guided image generation. ## 6 Conclusion This paper shows that certain failure modes of MT can be addressed by contrastive decoding objectives that use pairs or sets of inputs for the prediction. Specific contrastive inputs address specific errors, and we introduce strategies to mitigate hallucinations and off-target translation. Future work could expand on our work by exploring if other failure modes of machine translation can be mitigated with appropriate contrastive inputs, or if other forms of control can be improved. For example, for models that use domain indicator tokens Kobus et al. (2017), we could perform domain-contrastive decoding and potentially achieve stronger domain control. Beyond MT, we expect that source-contrastive decoding can also be useful for other tasks, e.g. to penalize over-generic responses in dialogue systems. ## 7 Limitations We only tested language-contrastive decoding in multilingual models that control the target language via language indicator tokens. It is possible to apply the same strategy to modular architectures that use language-specific components Firat et al. (2016); Vazquez et al. (2019); Bapna and Firat (2019), but its effectiveness remains to be tested. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{M2M-100} & \multicolumn{4}{c}{SMaLL-100} \\ & HLMT & X-branch & high-res & all & HLMT & X-branch & high-res & all \\ \cline{2-9} baseline & 2.1 & 13.0 & 0.0 & 7.3 & 1.3 & 10.6 & 0.0 & 5.6 \\ \(C_{src}\) & 1.0 & 4.1 & 0.0 & 2.4 & 0.8 & 4.3 & 0.0 & 2.5 \\ \(C_{src+lang}\) & 0.5 & 2.0 & 0.0 & 1.2 & 0.4 & 3.4 & 0.0 & 1.8 \\ \hline \hline \end{tabular} \end{table} Table 4: Proportion of translations (in %) with segment-level chrF2 \(<10\). \begin{table} \begin{tabular}{l c c} \hline \hline & chrF2 & spBLEU \\ \cline{2-3} baseline & 38.97 & 16.40 \\ \(C_{src}\) (1) & 40.31 & 16.60 \\ \(C_{src}\) (2) & 40.39 & 16.68 \\ \(C_{src}\) (3) & 40.41 & 16.67 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation results for M2M-100 with different numbers of source-contrastive inputs. Average over all languages reported. For bilingual translation models that suffer from off-target translations, e.g. because of noisy training data (Khayrallah and Koehn, 2018), we would need bilingual models for other translation directions to implement language-contrastive decoding, but this sacrifices the main strength of our approach: not relying on external models. We employ minimal hyperparameter tuning for \(\lambda_{\text{src}}\), and did not tune \(\lambda_{\text{lang}}\). Using the same hyperparameters across translation directions and translation models results in performance degradations in some cases, most noticeably for high-resource translation directions. We consider it a positive result that we obtain improvements on average with minimal hyperparameter tuning, but future work may wish to use more complex strategies to weight (or disable) contrastive variants across translation directions. ## Acknowledgements This work was funded by the Swiss National Science Foundation (project MUTAMUR; no. 213976).
Hallucinations and off-target translation remain unsolved problems in MT, and they are especially pronounced for low-resource languages and massively multilingual models. In this paper, we introduce two related methods to mitigate these failure cases using a modified decoding objective, without requiring either retraining or external models. In source-contrastive decoding, we search for a translation that is probable given the correct input but improbable given a random input segment. In language-contrastive decoding, we search for a translation that is probable but improbable given the wrong language indicator token. Experiments on the massively multilingual models M2M-100 (418M) and SMaLL-100 show that these methods reduce the number of translations with segment-level chrF2 below 10 by 67-83% on average.
2309.14972
Improving Unsupervised Visual Program Inference with Code Rewriting Families
Programs offer compactness and structure that makes them an attractive representation for visual data. We explore how code rewriting can be used to improve systems for inferring programs from visual data. We first propose Sparse Intermittent Rewrite Injection (SIRI), a framework for unsupervised bootstrapped learning. SIRI sparsely applies code rewrite operations over a dataset of training programs, injecting the improved programs back into the training set. We design a family of rewriters for visual programming domains: parameter optimization, code pruning, and code grafting. For three shape programming languages in 2D and 3D, we show that using SIRI with our family of rewriters improves performance: better reconstructions and faster convergence rates, compared with bootstrapped learning methods that do not use rewriters or use them naively. Finally, we demonstrate that our family of rewriters can be effectively used at test time to improve the output of SIRI predictions. For 2D and 3D CSG, we outperform or match the reconstruction performance of recent domain-specific neural architectures, while producing more parsimonious programs that use significantly fewer primitives.
Aditya Ganeshan, R. Kenny Jones, Daniel Ritchie
2023-09-26T14:44:48
http://arxiv.org/abs/2309.14972v1
# Improving Unsupervised Visual Program Inference with ###### Abstract Programs offer compactness and structure that makes them an attractive representation for visual data. We explore how _code rewriting_ can be used to improve systems for inferring programs from visual data. We first propose Sparse Intermittent Rewrite Injection (_SIRI_), a framework for unsupervised bootstrapped learning._ SIRI _sparsely applies code rewrite operations over a dataset of training programs, injecting the improved programs back into the training set. We design a family of rewriters for visual programming domains: parameter optimization, code pruning, and code grafting. For three shape programming languages in 2D and 3D, we show that using_ SIRI _with our family of rewriters improves performance: better reconstructions and faster convergence rates, compared with bootstrapped learning methods that do not use rewriters or use them naively. Finally, we demonstrate that our family of rewriters can be effectively used at test time to improve the output of_ SIRI _predictions. For 2D and 3D CSG, we outperform or match the reconstruction performance of recent domain-specific neural architectures, while producing more parsimonious programs that use significantly fewer primitives._ + Footnote †: Project page: [https://bardofcodes.github.io/coref/](https://bardofcodes.github.io/coref/) ## 1 Introduction Visual data is often highly structured: manufactured shapes are produced by assembling parts; vector graphics images are built from layers of primitives; detailed textures can be created via intricate compositions of noise functions. _Visual programs_, i.e. programs that produce visual outputs when executed, are a natural approach to capturing this complexity in a structure-aware fashion. Access to well-written visual programs supports downstream applications across visual computing domains, including editing, generative modeling, and structural analysis. But how can we obtain a program which generates a given visual datum? _Visual Program Inference (VPI)_ methods aim to solve this problem by automatically inferring programs that represent visual inputs. Solving this search problem is very difficult: the space of possible programs is often vast, even when constrained by a domain-specific language (DSL). To overcome this challenge, recent works have investigated learning-based solutions, where a neural network is employed to guide the search. When a dataset of visual programs exist, such networks can be trained in a supervised fashion [36, 33, 37, 38, 13]. Unfortunately, most domains lack such data, so recent works have investigated how to train VPI networks in an _unsupervised_ fashion. Learning to infer visual programs without supervision is challenging: programs usually contain discrete and continuous elements, which complicates end-to-end learning. Various solutions have been proposed to work around this issue: end-to-end learning is possible with neural architectures that act as a smooth relaxations of program executors [24], while policy gradient reinforcement learning [26] and bootstrapped learning methods [15] are able to treat Figure 1: Our method SIRI (top row) generates highly compact yet accurate programs, in contrast to CSG-Stump [24] (bottom row), which generates programs with numerous primitives. Here, we show shapes rendered with colored primitives. program executors as (potentially non-differentiable) 'black boxes.' 
These solutions come with downsides: designing differentiable relaxations is challenging (or even impossible) for some domains; reinforcement learning suffers from noisy gradients and slow convergence; bootstrapped learning methods are prone to getting stuck in local minima. Moreover, a seldom acknowledged limitation of these latter methods is that they treat programs as _just_ sequences of tokens. We argue that this view is suboptimal: programs are structured objects that support domain-specific reasoning to meaningfully constrain and guide the VPI search process. One example of such reasoning is the use of domain-specific operations that modify programs toward optimizing an objective--we call such operations _rewrites_. Rewrites have been explored in the context of VPI tasks, but primarily as a test-time optimization, e.g. finding better continuous parameters for a fixed program structure. While such optimization is useful, our claim is that other rewrite operations are similarly useful, especially when used in tandem, and that they can be employed to benefit VPI network learning, not only as test-time optimization schemes. In this paper, we investigate how to use code rewriting to improve visual program inference. Unlike prior work, we focus on _families_ of code rewriters, each of which makes improvements to a program with some goal in mind. We propose _Sparse Intermittent Rewrite Injection_ (SIRI), a bootstrapped learning algorithm that sparsely applies rewriters and injects rewritten programs into a search-train loop at intermittent intervals. To realize SIRI, we design a family of rewrites applicable to multiple visual programming DSLs: gradient-based parameter optimization (_Parameter Optimization_), removing spurious sub-programs (_Code Pruning_), and sub-program substitutions from a cache (_Code Grafting_). We also propose a test-time rewriting scheme that searches for improved programs through interleaved rewrites and is well-suited to the types of programs inferred by SIRI-trained networks. We evaluate SIRI and our family of rewriters (_PO_, _CP_, _CG_) on three shape program DSLs: 2D Constructive Solid Geometry (CSG), 3D CSG, and ShapeAssembly [13]. We compare VPI networks trained with SIRI to VPI networks trained by PLAD, a recently-proposed bootstrapped learning method [15], and find that SIRI both increases reconstruction performance and converges significantly faster. We further show that naively combining our rewrite families with PLAD performs much worse than SIRI, and in some domains even worsens performance compared with PLAD. Finally, we demonstrate that combining SIRI with our test-time rewriting scheme infers visual programs that can match (3D CSG [24]) or surpass (2D CSG [16]) the reconstruction performance of domain-specific neural architectures while producing significantly more parsimonious programs (see number of primitives, Figure 1). In summary, we make the following contributions: 1. _Sparse Intermittent Rewrite Injection_, a framework for unsupervised visual program inference that leverages a family of code rewriters. 2. A family of code rewriters applicable to multiple DSLs that benefit VPI learning methods and can be used in a test-time rewriting scheme. ## 2 Related Work _Visual program inference_ (VPI) is a sub-problem within program synthesis. Program synthesis is a storied field, with roots back to the inception of Artificial Intelligence, where the objective is to produce programs that meet some specification [11].
Under our framing, the specification is an input visual datum that the synthesized program should reconstruct when executed. In the rest of this section, we first overview VPI learning paradigms and then summarize prior work that looks at visual-program rewriting. **Learning to infer visual programs:**_End-to-end learning_ methods train by propagating reconstruction loss gradients directly to a network via differentiable execution. Though such approaches can yield impressive reconstruction accuracy, they either require a soft relaxation of the program execution [16, 24, 41, 40, 25, 4] which is infeasible for many languages, or require training domain-specialized neural executors [29, 12], which can introduce approximation errors. In SIRI, we instead leverage a 'partially' differentiable execution of visual programs, bypassing the need of program relaxation or neural executors. _Reinforcement learning_ has also been used by prior VPI approaches [26, 8, 30]. Usually, the inference network is treated as an 'agent' maximizing a reward signal tied to its reconstruction accuracy. The high variance of policy gradient methods has limited the application of such techniques to toy datasets, especially for 3D data. Similar in spirit to SIRI, other programmatic RL methods for non-visual domains have explored blends of program optimization and learning [31, 32]. Also related are non-programmatic RL methods that explore episode modification, through episode relabeling or neurally-guided search [1, 19, 10, 18, 23]. Like SIRI, these approaches aim to improve learning targets through local search, but they do so for vastly different domains (often much simpler than complex 3D shape-programs), employ simplistic rewriting techniques, and don't target bootstrapped learning frameworks. _Bootstrapped learning_ is an attractive alternative that can avoid the pitfalls of RL and end-to-end learning [15, 17]. Such approaches alternate between _Search_ phases, that discover 'good' programs, and _Train_ phases, that use discovered programs to train a network. While these approaches have demonstrated improvements over RL, they are still limited by treating each program as a sequence of tokens, rather than a structured object. Our work bridges this gap by offering an effective way of integrating a family of code rewriters into a recent bootstrapped learning paradigm [15]. **Rewriting visual programs:**_Gradient-based optimization_ is a common approach for optimizing visual programs. For neural architectures that serve as a relaxation of the executor, this can be achieved via test-time fine-tuning. While such approaches achieve impressive reconstruction performance, they are expensive to run and often produce messy program structure [41, 40]. This fine-tuning can take prohibitively long to converge, requiring anywhere from \(3\) minutes [41] to even 30 minutes [24]_per sample_. Typically, visual program executors can be made piecewise differentiable with respect to parameters of an input program (up to control flow decisions), which supports test-time gradient-based optimization [26, 27, 28]. Yet, as this rewriting scheme does not change program structure, reliance on _only_ gradient-based optimization is vulnerable to getting stuck in local minima. We employ a _Parameter Optimization_ rewriter (Sec. 4.1) as one member of a rewrite family, where other rewriters can make structural program changes, to avoid getting stuck in these local minima. 
Critically, our implementation is highly efficient, and we can apply each member of our rewrite family multiple times in under 20 seconds during test-time optimization. _Gradient-free optimization_ techniques have been investigated that leverage domain heuristics to develop rewriting strategies that modify the control flow decisions of visual programs. For 3D CAD languages, these operations have been explored for non-learning based reverse-engineering methods [6, 20]. Recently, E-graphs [34] have been employed to efficiently search for rewritten programs that optimize criteria such as program length [21] or fabrication cost [42, 35]. While these methods do not consider _learning_ from rewritten programs, it would be possible to integrate these types of techniques into our family of rewriters. _Abstraction discovery_ is a special form of rewriting, where common subcomputations shared across many programs are factored out into subroutines (i.e. abstractions), and programs are rewritten to use these subroutines. When a dataset of programs is given as input, this step can be decoupled from visual program inference [14, 39, 3, 2]. Alternatively, some methods have investigated how abstraction discovery (AD) phases can improve visual program inference performance [9, 7]. In an iterative procedure, an abstraction phase greedily rewrites a dataset of programs with abstractions, then a recognition model learns on rewritten programs to discover higher-order abstractions and solve more complex inference problems. Such methods are not yet able to scale to the complex 3D shape domains we study in this work, as they employ simplistic recognition models and rely on expensive enumerative search. Furthermore, we find that naively integrating rewriter outputs, as done in past AD approaches, can in fact be detrimental to the bootstrapping process. We instead propose SIRI, a non-deleterious procedure for integrating rewriters under bootstrapped learning paradigms, described in Sec. 3.2, which may prove similarly beneficial for AD approaches. ## 3 Method In this section, we explore how families of code rewriters can be employed to improve visual program inference. In Section 3.1, we formalize our task specification, objective function, and rewriter assumptions. We then present _Sparse Intermittent Rewrite Injection_ (SIRI), an unsupervised learning paradigm for visual program inference (VPI), in Section 3.2. SIRI employs a family of rewriters to improve reconstruction performance for VPI tasks while maintaining a parsimonious program representation. Finally, we describe how these operations can also be used in a test-time rewriting scheme that is especially well-suited to the outputs of SIRI (Sec. 3.3). We describe the family of rewriters we employ for shape-program domains in Section 4. Figure 2: Bootstrapped learning methods [15] store a record of the objective-maximizing program for each shape in a Best Program (BP) data-structure. Naive rewrite integration applies a family of rewriters to each entry in BP, and overwrites each entry when a rewrite is successful. Our method SIRI instead applies the rewriters to a subset of BP entries and overwrites entries only when their sources \(S_{i}\) match. ### Task Specification We define the visual program inference task as follows: given a target distribution \(S\) of visual inputs (e.g. shapes), we want to learn a model \(p_{\theta}(z|x)\), where \(x\in S\), which infers a program \(z\) whose execution \(E(z)\) reconstructs the input shape \(x\).
Following Occam's razor, we seek programs that are parsimonious. More formally, we seek a \(p_{\theta^{*}}\) that maximizes our objective function \(\mathcal{O}\): \[\theta^{*}=\arg\max_{\theta}\ \mathbb{E}_{x\sim S}\,\mathbb{E}_{z\sim p_{\theta}(z|x)}\big[\mathcal{O}(x,z)\big],\tag{1}\] \[\mathcal{O}(x,z)=\mathcal{R}\big(x,E(z)\big)-\alpha\cdot|z|,\tag{2}\] where \(\mathcal{R}\) is a reconstruction metric between the input \(x\) and the program execution \(E(z)\), \(|z|\) denotes the program length, and \(\alpha\) weights the parsimony penalty. This procedure can in theory be applied to programs produced by any source (e.g. networks that act as a differentiable language executor [24]). However, we find that there are unique benefits to applying this procedure to the predictions made by a \(p_{\theta}(z|x)\) network trained with SIRI. Some rewrites build a cache of partial results during bootstrapped learning that can be quickly and effectively applied at inference time (_CG_, Section 4.3). Furthermore, some rewrites are too expensive to run on very complex programs, consuming massive amounts of memory, but are well-suited to the parsimonious programs produced by SIRI. For instance, one rewriter (_PO_, Section 4.1) was not able to operate on the highly complex programs output by CSG-Stump [24] (cf. supplemental material). ## 4 Rewriting Visual Programs As our method relies on an input family of rewriters, _RWS_, we identify three rewrite operators that generalize across multiple shape-program domains. Figure 3 depicts these rewriters in action. In the rest of this section, we provide a high-level description and motivation for the different rewriters we use during SIRI and test-time rewriting: _Parameter Optimization_ (Section 4.1), _Code Pruning_ (Section 4.2), and _Code Grafting_ (Section 4.3). We provide the implementation details in the supplemental material. ### Parameter Optimization Visual languages often contain continuous (and differentiable) parameters such as the scale and position of primitives, and discrete parameters such as control flow (e.g. how to combine primitives). While keeping the discrete parameters fixed, the _Parameter Optimization_ (_PO_) rewriter aims to improve the continuous parameters of a given program using gradient-based optimization. Given a program \(z_{\phi}\) with continuous parameters \(\phi\) inferred for a shape \(x\in S\), _PO_ adjusts \(\phi\) to maximize the reconstruction accuracy \(\mathcal{R}\) between \(x\) and the program execution \(E(z_{\phi})\): \(\phi^{*}\sim\arg\max_{\phi}\mathcal{R}(x,E(z_{\phi}))\). 
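The following is a minimal, hypothetical sketch of the _PO_ idea, not the paper's implementation: it fixes the program structure to a union of spheres, executes it as a soft occupancy over sampled points (in the spirit of the signed-distance conversion described next), and performs gradient ascent on a soft IoU with respect to the continuous parameters \(\phi\). The primitive set, temperature, optimizer, and step counts are illustrative assumptions.

```python
# Minimal PO-style sketch: differentiate a soft-occupancy execution of a fixed-structure
# program (a union of spheres) w.r.t. its continuous parameters. Not the SIRI codebase;
# the primitive set and all hyper-parameters here are assumptions for illustration.
import torch

def soft_occupancy(centers, radii, points, temp=20.0):
    # Signed distance of each point to each sphere: |p - c| - r; union = min over primitives.
    d = torch.cdist(points, centers) - radii.unsqueeze(0)   # (P, K)
    sdf = d.min(dim=1).values                                # (P,)
    return torch.sigmoid(-temp * sdf)                        # inside -> ~1, outside -> ~0

def soft_iou(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    union = (pred + target - pred * target).sum()
    return inter / (union + eps)

def parameter_optimization(centers, radii, points, target_occ, steps=200, lr=1e-2):
    """Adjust continuous parameters phi = (centers, radii) to maximize reconstruction accuracy."""
    centers = centers.clone().requires_grad_(True)
    radii = radii.clone().requires_grad_(True)
    opt = torch.optim.Adam([centers, radii], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -soft_iou(soft_occupancy(centers, radii, points), target_occ)
        loss.backward()
        opt.step()
    return centers.detach(), radii.detach()

# Toy usage: fit two spheres to a target occupancy sampled on a coarse 3D grid.
axis = torch.linspace(-1, 1, 16)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1).reshape(-1, 3)
target = soft_occupancy(torch.tensor([[0.4, 0.0, 0.0], [-0.4, 0.0, 0.0]]),
                        torch.tensor([0.35, 0.35]), grid).detach()
opt_c, opt_r = parameter_optimization(torch.tensor([[0.1, 0.1, 0.0], [-0.2, 0.1, 0.0]]),
                                      torch.tensor([0.2, 0.2]), grid, target)
```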
To propagate gradients back from a reconstruction metric to the continuous parameters of \(z_{\phi}\), _PO_ requires that the program executor \(E\) is partially differentiable: one or more continuous parameters should have well-defined derivatives for a given program structure (though the program execution may itself be only piecewise continuous). We highlight that _PO_ is useful _even_ under such constraints, precisely because _PO_ is not the only rewriter we consider: other rewriters in _RWS_ can influence structural changes, along with SIRI's _Search_ phase. _PO_'s requirements on \(E\) differ significantly from the differentiable executors employed by neural relaxation architectures. These works [16, 24] often attempt to differentiate through both discrete and continuous decisions, which leads to noisy gradients and poor program quality. Although the design of executors is domain-dependent, we outline our procedure for converting the outputs of \(E\) into an equivalent signed-distance field representation compatible with our reconstruction metric \(\mathcal{R}\). For each language we consider, we map outputs from \(E\) into a tree-like representation, where each leaf represents a primitive (spheres, etc.) and each intermediate node represents a transformation (position, etc.) or a combinator (union, etc.). For CSG-like languages, we can directly map from program parameters \(\phi\) to this representation. It is also possible to convert the output of more complex program executors that produce collections of primitives into this format [13]. Then, we perform boolean combinations of the parameterized primitives, and apply the transformation operators, to obtain the program's implicit equivalent. With the program's implicit equivalent, we now uniformly sample points \(t\in\mathbb{R}^{n}\), and convert the signed distance at the points into soft-occupancy to yield a differentiable execution of the program. This framing should be extensible to other visual-programming domains of interest, where this mapping may either be explicitly extracted from the input program (e.g. SVG) or parsed from primitives created by a more complex executor [22]. Figure 3: SIRI uses _rewrites_ to improve VPI networks. We depict three rewrites in action that: optimize continuous parameter values (_Parameter Optimization_), remove extraneous code (_Code Pruning_), and substitute sub-expressions from a cache (_Code Grafting_). ### Code Pruning One drawback of bootstrapping techniques is their tendency to reinforce spurious patterns [17]. The _Code Pruning_ (_CP_) rewriter mitigates this problem by identifying and removing program fragments that negatively contribute to our objective \(\mathcal{O}\). Given a shape \(x\) and input \(z\), _CP_ rewrites \(z\) s.t. \(z^{R}\sim\operatorname*{arg\,max}_{z^{*}\in\Omega^{CP}(z)}\mathcal{O}(x,z^{*})\), where \(\Omega^{CP}(z)=\{z^{*}\,|\,z^{*}\subseteq z\}\) represents the set of all valid sub-programs of \(z\). A naive _CP_ rewriter that considers every valid sub-expression would be prohibitively slow. We instead implement a greedy version of _CP_ designed for declarative, functional languages. We describe our general approach below, and provide further details in the supplemental material. Our implementation of _CP_ employs two greedy searches, a top-down pass and a bottom-up pass, to approximate \(z^{R}\). At each pass, we identify and prune nodes which decrease the overall objective score \(\mathcal{O}\); a minimal sketch of this greedy pruning follows. 
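Below is a hypothetical sketch of such a greedy pruning pass over an expression tree. The `Node` structure, the `execute` callable, and the toy scores are stand-ins rather than SIRI's actual API; the objective mirrors Eq. 2 (reconstruction minus a length penalty), and the top-down re-rooting pass is omitted for brevity.

```python
# Greedy bottom-up Code Pruning (CP) sketch: drop child branches whose removal does not
# hurt the objective O(x, z) = R(x, E(z)) - alpha * |z|. Stand-in types, not SIRI's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    op: str                          # e.g. "union", "difference", or a primitive name
    params: tuple = ()
    children: List["Node"] = field(default_factory=list)

def program_len(node: Node) -> int:
    return 1 + sum(program_len(c) for c in node.children)

def objective(node: Node, target, execute, alpha=0.015) -> float:
    # `execute` is assumed to return a reconstruction score (e.g. IoU) against `target`.
    return execute(node, target) - alpha * program_len(node)

def _candidates(node: Node):
    # Yield (parent, child_index) pairs, deepest nodes first; children are visited in
    # reverse index order so a removal never invalidates indices visited afterwards.
    for i in range(len(node.children) - 1, -1, -1):
        yield from _candidates(node.children[i])
        yield node, i

def bottom_up_prune(root: Node, target, execute) -> Node:
    """Greedily remove branches that do not contribute to the objective.
    A real implementation must also check the pruned program stays well-formed."""
    best = objective(root, target, execute)
    for parent, idx in _candidates(root):
        removed = parent.children.pop(idx)
        score = objective(root, target, execute)
        if score >= best:
            best = score                          # pruning helped (or was neutral): keep it
        else:
            parent.children.insert(idx, removed)  # pruning hurt: restore the branch
    return root

# Toy usage with a stand-in executor that penalizes a spurious "box" branch.
prog = Node("union", children=[Node("sphere", (0.0, 0.0, 0.0, 0.5)),
                               Node("box", (2.0, 2.0, 2.0, 0.1))])
dummy_exec = lambda node, _t: 0.0 if not node.children else 1.0 - 0.4 * sum(c.op == "box" for c in node.children)
pruned = bottom_up_prune(prog, target=None, execute=dummy_exec)   # keeps only the sphere branch
```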
The top-down traversal relabels the highest objective scoring node as the root, pruning all but the tree starting at that node. The bottom-up traversal then checked each node's contribution to the final execution and prunes branches with negligible contribution. ### Code Grafting A key feature of symbolic representations is the ability to perform part-based reasoning, e.g. compose parts from different instances. The _Code Grafting_ (_CG_) rewriter exploits this feature to improve programs. Specifically, the _CG_ rewriter replaces sub-expressions of a particular program with a better sub-expression (in terms of \(\mathcal{O}\)). The primary challenge is specifying _what_ to replace a sub-expression with, as the space of potential replacements is enormous. To make the search tractable, _CG_ builds a program cache populated by sub-expressions discovered while running SIRI, and searches for replacement candidates within the cache. When the cache is large, searching for good replacement expressions can become expensive. Our _CG_ implementation makes this search tractable by focusing on _execution equivalence_, i.e. equivalence is measured by comparing their executions (stored as an \(n\)-dimensional occupancy grid). But finding cache entries which would improve reconstruction performance requires some notion of the _desired execution_, i.e. what sub-expression execution would make the program better match the target shape \(x\)? We develop a procedure for calculating such desired execution through masked function inversion; an example is depicted in Figure 4. We provide the high-level insight by walking through this figure; further details are in the supplemental material. In this example, \(T\) denotes the target shape (the desired final execution), which is a union of two subexpressions. Suppose we wish to replace subexpression \(A\). Given the current state of its sibling subexpressions (\(B\), in this case), we can invert the Union to produce the _desired-execution_\(A^{*}\). \(A^{*}\) can be broken into sub-regions: (black and white) areas where the optimal execution behavior is known and (gray) areas where the optimal execution behavior is unknown (for example, due to the non-invertibility of some operators). _CG_ uses such ternary _desired-executions_ to search for cache entries that are likely to improve reconstruction accuracy and substitutes the most suitable candidate. ## 5 Results We evaluate the efficacy of our rewriters over a collection of VPI domains for two tasks: improving bootstrapped learning (Section 5.2), and improving VPI with test-time rewrites (Section 5.3). First, we provide the details concerning our experiments in Section 5.1. ### Experimental Design **Domain-Specific Languages:** We consider three VPI domains: 2D Constructive Solid Geometry (CSG), 3D CSG, and ShapeAssembly [13]. Shapes are formed in CSG by declaring primitives such as cylinders, applying transformations, and composition via boolean operations. ShapeAssembly produces hierarchies of cuboid part proxies (which can themselves contain sub-programs) assembled through attachment operations. Please see the supplemental for the complete DSL grammars. To ease learning, past approaches have used simplified versions of these languages, e.g. restricting CSG to contain only primitive-level transformations or removing hierarchical sub-programs from ShapeAssembly [16, 15, 26]. For fair comparison, we match our DSLs to prior work. **Shape Datasets:** We evaluate 2D CSG on the CAD dataset introduced in CSGNet [26]. 
It contains front and side views of chairs, desks and lamps from the Trimble 3D warehouse. This dataset is divided into 10K training, 3K validation and 3K testing shapes. We evaluate 3D CSG and ShapeAssembly on the 3D CAD dataset released in [15] containing shapes from chair, table, couch, and bench categories of Figure 4: Given a target \(T\), we derive the _desired execution_, \(A^{*}\), for sub-expression \(A\), through operator inversion. Our _CG_ rewriter leverages _desired-executions_ to search for replacement candidates. ShapeNet dataset [5] in a voxel grid format. This dataset is split into 10k training, 1k validation and 1k testing shapes. **Model Architecture:** Our model \(p(z|x)\) synthesizes programs as a sequence of tokens, where each token specifies a command type or its parameters. Numeric parameters are normalized and discretized into 33 bins. Each \(p(z|x)\) uses a domain-specific feature extractor (e.g. a 2D or 3D CNN), and a decoder-only transformer module. For 2D languages, the feature extractor takes a \(64^{2}\) image as input; for 3D languages, it takes a \(32^{3}\) voxel grid. We use the same transformer architecture for all experiments, varying only the last layer output size to model the different number of commands in each language. **Metrics:** We measure reconstruction accuracy with two metrics: Intersection Over Union (IoU) and point cloud Chamfer-Distance (CD). We use \(64^{2}\) and \(32^{3}\) resolution occupancy grid for calculating 2D and 3D IoU respectively. We follow the same methodology as CSGNet [26] for measuring 2D CD; 3D CD is measured between \(2048\) points sampled on the ground-truth ShapeNet meshes and the meshes produced by executing the inferred programs. **Training details:** Following [26, 15, 8], we pretrain our models on a large corpus of synthetically generated programs until it converges. We generate the synthetic programs via the sampling procedure proposed in PLAD [15]. After pretraining, the model is finetuned on the target distribution \(S\), following the procedure outlined in Section 3.2. During each _Rewrite_ phase, we apply _PO_, _CP_ and _CG_ to \(50\%\), \(50\%\), \(15\%\) of programs respectively. _CG_ is applied to only \(15\%\) of data due to its higher computational cost. For our training objective \(\mathcal{O}\) (c.f. equation 2), we fix \(\alpha\) to \(0.015\). For 3D we set \(\mathcal{R}\) to IoU, and for 2D we set \(\mathcal{R}\) to CD. For test-time rewriting (TTR), we perform interleaved application of _each_ rewriter thrice, unless specified otherwise (i.e. for Table 4). ### Training with SIRI We first evaluate the benefit of intermittent rewriting for bootstrapped learning. We compare our method SIRI against 2 baselines, namely PLAD [15], and _Naive Rewrite Integration_ (NRI) which naively integrates the rewriters into PLAD (cf. Section 5.2). Note that prior work on integrating code rewriting [7, 9] follow this strategy. As shown in Table 1, SIRI outperforms both the baselines on all domains. While PLAD performs well on simple domains such as 2D CSG, it is less effective on harder domains such as ShapeAssembly. Naively integrating the rewriters (NRI) can in fact even be detrimental to bootstrapped learning, as can be seen for 2D & 3D CSG (w.r.t. IoU). Excessive use of the rewriters and the lack of training data diversity leads the model trained with NRI to overfit to a local minima, i.e. the _Search_ and _Rewrite_ phases become ineffective at generating useful better programs to learn from. 
SIRI resolves this issue by its frugal usage of the rewriters and by training \(p_{\theta}\) on a diverse set of programs obtained from both the _Search_ and _Rewrite_ phases. We see a similar trend on other visual languages, which we discuss \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{Chamfer Distance (\(\downarrow\))} & \multicolumn{3}{c}{IoU (\(\uparrow\))} \\ \hline & 2D CSG & 3D CSG & ShapeAssembly & 2D CSG & 3D CSG & ShapeAssembly \\ \hline PLAD [15] & \(0.24\) & \(1.75\) & \(1.98\) & \(90.8\) & \(74.65\) & \(63.1\) \\ NRI & \(0.36\) & \(1.25\) & \(1.53\) & \(88.4\) & \(74.43\) & \(66.89\) \\ \hline SIRI & \(\mathbf{0.22}\) & \(\mathbf{1.10}\) & \(\mathbf{1.44}\) & \(\mathbf{91.7}\) & \(\mathbf{76.77}\) & \(\mathbf{67.8}\) \\ \hline \hline \end{tabular} \end{table} Table 1: We report the Test-set performance across 3 VPI domains. Naively integrating the rewriters into PLAD (NRI) can degrade the model’s performance (IoU on 2D & 3D CSG). In contrast, SIRI consistently improves over PLAD on all the three VPI domains. Figure 5: We plot the objective \(\mathcal{O}\) (Y-axis) as a function of training time (X-axis), measured by iterations (left) and wall-clock time (including time taken for rewriting) (right) on 3D CSG domain. By amortizing the rewrite-cost, and keeping the training data diverse, SIRI achieves higher performance and does so faster. Figure 6: Due to data-independent rewriters, SIRI remains effective at training the network even with a fraction of the data. In comparison, since PLAD’s _Search_ phase is tied to the inference network’s performance, data scarcity degrades its performance. in the supplemental material. Finally, we note that with \(0.22\) CD on 2D CSG domain, SIRI outperforms UCSG-Net [16] (\(0.32\) CD), the previous state-of-the-art method for 2D CSG. We also evaluate how SIRI impacts the training convergence in Figure 5, plotting the Objective \(\mathcal{O}\) against iterations (left) and wall-clock time (right). Wall-clock time includes the time required to execute the rewriters. Though NRI starts with a high performance, it eventually converges to a lower value of \(\mathcal{O}\) despite expending a lot of time for rewriting. In contrast, SIRI is able to achieve a higher performance while also amortizing the cost of rewriting all the programs as the model generalizes the useful patterns present in the rewritten training programs. **Data Scarcity**: Often, large datasets of example shapes are not easily available. Thus, we probe the efficacy of SIRI and PLAD under data scarcity. We present our experiment in Figure 6, plotting the validation set CD for PLAD and SIRI. With 100% data, SIRI surpasses PLAD by \(0.29\) CD, whereas at 10% data, SIRI outperforms PLAD by a margin of \(1.33\) CD. Since PLAD relies solely on neurally-guided search to discover good programs, which is dependent on the dataset size, reduction in dataset size hurts its performance. In contrast, as SIRI employs rewriters such as _PO_ and _CP_ which are invariant to the training dataset size, it outperforms PLAD in low-data regimes. ### Test-Time Rewriting We now show how combining SIRI with additional rewriting at test-time allows performance that matches state-of-the-art methods specialized for 3D CSG reconstruction, while producing much more parsimonious programs. Specifically, we now compare SIRI against CSG-Stump [24], a state-of-the-art method with a neural architecture designed specifically for CSG reconstruction. 
We train CSG-StumpNet with the author-released code on our dataset. We compare against two versions: CSG-Stump 32 with 32 intersection and union nodes and CSG-Stump 256 with 256 intersection and union nodes. Since the authors originally trained their model independently for each class, we also compare against a class-specific (cs) version of CSG-Stump 256, where we use pretrained class-specific models released by the authors. We show our results in Table 2. We see that SIRI achieves \(0.1\) CD lower than CSG-Stump 256, despite being domain agnostic. More importantly, it achieves this with a fraction of primitives and operations. Secondly, when accompanied by test-time rewrites, SIRI achieves similar CD to CSG-Stump 256 (cs), which has models trained _individually_ for each class. SIRI achieves this while having an order of magnitude fewer primitives and operators (yielding more interpretable and editable programs). In Figure 1, we visualize the difference in inferred program size by rendering their executed shapes with colored primitives. As the programs inferred by SIRI are parsimonious, applying TTR to the inferred programs takes only \(14.6\) seconds per shape on an average. In contrast, the over parameterized programs inferred by CSG-Stump are not amenable to fast test-time-optimization (cf. supplemental). Instead, prior-works [40, 41] fine-tune the network itself for each test-shape, which, for CSG-Stump, requires \(\sim\) 30 minutes per shape [40]. Moreover, while network fine-tuning can increase reconstruction accuracy, the inferred programs remain incomprehensibly large. Test-time rewrites are beneficial for PLAD inferred programs as well. However, we find it best to both (i) train with rewriters and (ii) use them at inference time. We compare test-time rewrites on models trained with PLAD and SIRI and report the results in Table 3. For both 3D CSG and Sha \begin{table} \begin{tabular}{l c c c} & CD (\(\downarrow\)) & N. Prim. (\(\downarrow\)) & N. ops. (\(\downarrow\)) \\ \hline CSG-Stump 32 & \(1.90\) & \(19.03\) & \(5.95\) \\ CSG-Stump 256 & \(1.22\) & \(154.47\) & \(52.85\) \\ CSG-Stump 256 (cs) & \(0.78\) & \(191.38\) & \(72.52\) \\ \hline SIRI & \(1.101\) & \(3.90\) & \(2.90\) \\ SIRI + TTR & \(0.83\) & \(8.47\) & \(7.42\) \\ \end{tabular} \end{table} Table 2: SIRI outperforms CSG-Stump 256 with a fraction fewer primitives. Applying test-time-rewrites to SIRI makes its performance comparable to that of class-specific (cs) CSG-Stump while remaining relatively parsimonious. Figure 7: We present qualitative examples of our method and PLAD [15]. SIRI outperforms PLAD and test-time rewriting improves it further. peAssembly, using test-time rewrites on SIRI is more beneficial than using them on PLAD. Apart from yielding better initialization for the inferred programs, training with SIRI also equips the _CG_ rewriter with a program cache filled with useful sub-expressions; this cache bolsters _CG_'s efficacy at test-time rewriting. Note that SIRI + TTR programs have only a marginal increase in length over PLAD + TTR. As described in Section 3.3, we interleave the application of our three rewriters for test-time rewriting, allowing changes to both the structure and continuous parameters of the program. In Table 4, we evaluate the impact of each individual rewriter. Both _PO_ and _CG_ improve upon the program inferred by beam search while taking only a few seconds. Combining them further improves performance while keeping the programs relatively parsimonious. 
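As a concrete illustration of the interleaving described above, a minimal, hypothetical test-time rewriting driver is sketched below; the rewriter callables, the objective, and the accept-if-improved rule are stand-ins (the paper's actual implementation details are in its supplemental material).

```python
# Hypothetical TTR loop: interleave the rewriters for a fixed number of rounds and keep
# the objective-maximizing program. Rewriters and objective are placeholder callables.
def test_time_rewrite(z, x, rewriters, objective, rounds=3):
    best_z, best_score = z, objective(x, z)
    for _ in range(rounds):
        for rewrite in rewriters:          # e.g. [parameter_optimization, code_prune, code_graft]
            candidate = rewrite(best_z, x)
            score = objective(x, candidate)
            if score > best_score:         # only accept rewrites that improve O
                best_z, best_score = candidate, score
    return best_z
```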
## 6 Conclusion We introduced _Sparse Intermittent Rewrite Injection_ (SIRI), a paradigm for improving unsupervised training of visual-program inference models with code rewriting. We implemented a family of code rewriters that generalized across multiple 2D and 3D shape-program domains. With this family of code rewriters, SIRI learns better VPI networks compared with bootstrap learning methods that ignore rewriters, or use them in a naive fashion. Beyond this, we demonstrated that our rewriters can be employed in a test-time rewriting (TTR) scheme to improve predictions made by SIRI. We found that this SIRI + TTR paradigm is able to match or surpass the reconstruction performance of specially designed neural VPI architectures, while maintaining a much more parsimonious program representation. In future work, we would like to explore how additional code-rewriting operations could be effectively integrated into our family of rewrites for SIRI + TTR paradigms. While we find SIRI empirically effective for bootstrapped learning, it remains unclear how code rewriting families can best aid RL and end-to-end learning paradigms. Looking forward, we believe that principled use of code-rewriters is a promising way to guide the search of learning-based VPI models, merging domain-specific preferences with neural guidance, and would be a key component of VPI systems designed for complex, real-world domains. ## Acknowledgment We would like to thank the anonymous reviewers for their helpful suggestions. This work was funded in parts by NSF award #1941808 and a Brown University Presidential Fellowship. Daniel Ritchie is an advisor to Geopipe and owns equity in the company. Geopipe is a start-up that is developing 3D technology to build immersive virtual copies of the real world with applications in various fields, including games and architecture.
プログラムはコンパクトさと構造を持ち、視覚データのより魅力的な表現として機能します。私たちは、コードの書き換えをどのように視覚データからプログラムを推論するシステムの改善に活用できるかを探索します。まず、SIRI(Sparse Intermittent Rewrite Injection)という、教師なしブートストラップ学習のためのフレームワークを提案します。SIRIは、訓練プログラムのデータセット上でコードの書き換え操作をスパースに適用し、改善されたプログラムを訓練セットに注入します。私たちは、視覚プログラミングの領域に適用可能な書き換え器のファミリーを設計しました。それはパラメータ最適化、コードの刈り込み、コードのグラフティングです。2Dと3Dの3つの形状プログラミング言語で、SIRIと私たちの書き換え器ファミリーを用いることで、性能が向上しました。より良い再構築と高速な収束速度が得られました。ブートストラップ学習法を用いる場合に比べて、書き換えプログラムを用
2307.16411
Investigation on the higher twist TMD $h_3$ for proton in the light-front quark-diquark model
The higher twist T-even transverse momentum dependent distribution (TMD) $h_3(x, {\bf p_\perp^2})$ for the proton has been examined in the light-front quark-diquark model (LFQDM). By deciphering the unintegrated quark-quark correlator for semi-inclusive deep inelastic scattering (SIDIS), we have derived explicit equations of the TMD for both the scenarios when the diquark is a scalar or a vector. Average as well as average square transverse momenta have been computed for this TMD. Additionally, we have discussed its transverse momentum dependent parton distribution function (TMDPDF) $h_3(x)$.
Shubham Sharma, Harleen Dahiya
2023-07-31T05:35:54
http://arxiv.org/abs/2307.16411v1
# Investigation on the higher twist TMD \(h_{3}\) for proton in the light-front quark-diquark model ###### Abstract The higher twist T-even transverse momentum dependent distribution (TMD) \(h_{3}(x,{\bf p_{\perp}^{2}})\) for the proton has been examined in the light-front quark-diquark model (LFQDM). By deciphering the unintegrated quark-quark correlator for semi-inclusive deep inelastic scattering (SIDIS), we have derived explicit equations of the TMD for both the scenarios when the diquark is a scalar or a vector. Average as well as average square transverse momenta have been computed for this TMD. Additionally, we have discussed its transverse momentum dependent parton distribution function (TMDPDF) \(h_{3}(x)\). ## 1 Introduction The physics of hadrons includes their tomography in terms of partonic degrees of freedom. This is achieved theoretically by 3-dimensional functions like transverse momentum dependent parton distributions (TMDs) and generalized parton distributions (GPDs). TMDs encode momentum information of the parton along both longitudinal and transverse directions and are linked experimentally to semi-inclusive deep inelastic scattering (SIDIS). Higher twist TMDs have been an emerging topic of interest [1]. In the present work, we have studied the twist-4 TMD \(h_{3}(x,\mathbf{p}_{\perp}^{2})\). ## 2 Light-Front Quark-Diquark Model (LFQDM) We examine the present problem by using the LFQDM, where the spin-flavor \(SU(4)\) structure of the proton has been expressed as a composite of isoscalar-scalar diquark singlet \(|u\ S^{0}\rangle\), isoscalar-vector diquark \(|u\ A^{0}\rangle\) and isovector-vector diquark \(|d\ A^{1}\rangle\) states [2]. The general form of light-front wave functions (LFWFs) \(\varphi_{i}^{(\nu)}(x,\mathbf{p}_{\perp})\) is derived from the soft-wall AdS/QCD prediction [2]. The parameters of the model have been given in Ref. [2]. ## 3 Quark Correlator and Parameterization The unintegrated quark-quark correlator for SIDIS is defined as [3] \[\Phi^{\nu[\Gamma]}(x,\mathbf{p}_{\perp};S) = \frac{1}{2}\int\frac{dz^{-}d^{2}z_{T}}{2(2\pi)^{3}}e^{ip\cdot z}\langle P;S_{f}|\overline{\psi}^{\nu}(0)\Gamma\,\mathcal{W}_{[0,z]}\psi^{\nu}(z)|P;S_{i}\rangle\Bigg{|}_{z^{+}=0}. \tag{1}\] The momenta of the proton, quark and diquark are \(P\equiv(P^{+},\frac{M^{2}}{P^{+}},\mathbf{0}_{\perp})\), \(p\equiv(xP^{+},\frac{p^{2}+|\mathbf{p}_{\perp}|^{2}}{xP^{+}},\mathbf{p}_{\perp})\) and \(P_{X}\equiv\left((1-x)P^{+},P_{X}^{-},-\mathbf{p}_{\perp}\right)\) respectively. The value of the Wilson line \(\mathcal{W}_{[0,z]}\) is chosen to be 1. The quark-quark correlator has been parameterized for the Dirac matrix structure \(\Gamma=i\sigma^{i-}\gamma_{5}\) as [3] \[\Phi^{\nu[i\sigma^{i-}\gamma_{5}]} = \frac{M^{2}}{(P^{+})^{2}}[\mathbf{S}_{T}^{i}h_{3}+\lambda\frac{\mathbf{p}_{T}^{i}}{M}h_{3L}^{\perp}+\frac{(\mathbf{p}_{T}^{i}\mathbf{p}_{T}^{j}-\frac{1}{2}\mathbf{p}_{T}^{2}g_{T}^{ij})\mathbf{S}_{Tj}}{M^{2}}h_{3T}^{\perp}-\frac{\epsilon_{T}^{ij}\mathbf{p}_{Tj}}{M}h_{3}^{\perp}]. \tag{2}\] ## 4 Result and Discussion After solving Eqs. (1) and (2) for the TMD \(h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})\), we get \[x^{2}\ h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2}) = \frac{1}{16\pi^{3}}\Bigg{(}C_{S}^{2}N_{s}^{2}-\frac{1}{3}C_{A}^{2}|N_{0}^{\nu}|^{2}\Bigg{)}\Bigg{[}\frac{m}{M}|\varphi_{1}^{\nu}|^{2}+\frac{\mathbf{p}_{\perp}^{2}}{M^{2}x}|\varphi_{2}^{\nu}|^{2}\Bigg{]}. \tag{3}\] In Fig. 1, the TMD \(x^{2}h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})\) has been plotted for both the \(u\) and \(d\) quarks. 
It has been observed that the magnitude of the TMD \(x^{2}h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})\) is maximum at \(x\sim 0.1\), \(p_{\perp}^{2}\sim 0.08\ \mathrm{GeV}^{2}\), and it decreases as we move away from this point. The probability of obtaining the proton helicity combination corresponding to \(x^{2}h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})\) is largest when the quark carries 10 to 15% of the proton's longitudinal momentum. The average transverse momentum \(\langle p_{\perp}\rangle\) and the average square transverse momentum \(\langle p_{\perp}^{2}\rangle\) of the TMD \(h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})\) for the \(u(d)\) quark are computed to be 0.26 GeV (0.27 GeV) and 0.07 \(\mathrm{GeV}^{2}\) (0.08 \(\mathrm{GeV}^{2}\)) respectively. The trend of the TMDPDF \(x^{2}h_{3}^{\nu}(x)\) is identical to that of the TMD plotted at \(\mathbf{p}_{\perp}^{2}=0.2\ \mathrm{GeV}^{2}\).
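For reference, the transverse-momentum averages quoted above are assumed to follow the standard TMD definitions (notation ours; the text does not spell them out):

```latex
% Assumed (standard) definitions of the quoted moments of h_3 and of the TMDPDF.
\begin{align}
\langle p_{\perp}\rangle &=
  \frac{\int dx\int d^{2}p_{\perp}\,|\mathbf{p}_{\perp}|\,h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})}
       {\int dx\int d^{2}p_{\perp}\,h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})}, \qquad
\langle p_{\perp}^{2}\rangle =
  \frac{\int dx\int d^{2}p_{\perp}\,\mathbf{p}_{\perp}^{2}\,h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})}
       {\int dx\int d^{2}p_{\perp}\,h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2})}, \\
h_{3}^{\nu}(x) &= \int d^{2}p_{\perp}\,h_{3}^{\nu}(x,\mathbf{p}_{\perp}^{2}).
\end{align}
```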
プロトンに対する高ツイストTMD $h_3(x, {\bf p_\perp^2})$ の解析を、ライトフロント・クォーク-ディクォークモデル (LFQDM) で行いました。半包括的深非弾性散乱 (SIDIS) に対する非積分クォーク-クォーク相関関数を解読することで、ディクォークがスカラーまたはベクトルの場合それぞれについて、このTMDの明示的な式を導出しました。このTMDの平均横運動量および平均二乗横運動量を計算しました。さらに、このTMDに対応する横運動量依存パートン分布関数 (TMDPDF) $h_3(x)$ についても議論しました。
2306.17426
Leveraging Watch-time Feedback for Short-Video Recommendations: A Causal Labeling Framework
With the proliferation of short video applications, the significance of short video recommendations has vastly increased. Unlike other recommendation scenarios, short video recommendation systems heavily rely on feedback from watch time. Existing approaches simply treat watch time as a direct label, failing to effectively harness its extensive semantics and introduce bias, thereby limiting the potential for modeling user interests based on watch time. To overcome this challenge, we propose a framework named Debiased Multiple-semantics-extracting Labeling(DML). DML constructs labels that encompass various semantics by utilizing quantiles derived from the distribution of watch time, prioritizing relative order rather than absolute label values. This approach facilitates easier model learning while aligning with the ranking objective of recommendations. Furthermore, we introduce a method inspired by causal adjustment to refine label definitions, thereby directly mitigating bias at the label level. We substantiate the effectiveness of our DML framework through both online and offline experiments. Extensive results demonstrate that our DML could effectively leverage watch time to discover users' real interests, enhancing their engagement in our application.
Yang Zhang, Yimeng Bai, Jianxin Chang, Xiaoxue Zang, Song Lu, Jing Lu, Fuli Feng, Yanan Niu, Yang Song
2023-06-30T06:40:11
http://arxiv.org/abs/2306.17426v2
# Leveraging Watch-time Feedback for Short-Video Recommendations: A Causal Labeling Framework ###### Abstract. With the proliferation of short video applications, the significance of short video recommendations has vastly increased. Unlike other recommendation scenarios, short video recommendation systems heavily rely on feedback from watch time. Existing approaches simply treat watch time as a direct label, failing to effectively harness its extensive semantics and introduce bias, thereby limiting the potential for modeling user interests based on watch time. To overcome this challenge, we propose a framework named Debiased Multiple-semantics-extracting Labeling (DML). DML constructs labels that encompass various semantics by utilizing quantiles derived from the distribution of watch time, prioritizing relative order rather than absolute label values. This approach facilitates easier model learning while aligning with the ranking objective of recommendations. Furthermore, we introduce a method inspired by causal adjustment to refine label definitions, thereby directly mitigating bias at the label level. We substantiate the effectiveness of our DML framework through both online and offline experiments. Extensive results demonstrate that our DML could effectively leverage watch time to discover users' real interests, enhancing their engagement in our application. Recommender system; Debiasing; Causal Recommendation + Footnote †: Corresponding author. 
For example, completing a video playback indicates a relatively strong user preference, which is not adequately emphasized by direct labeling methods. 
Additionally, using watch time as a direct label is susceptible to various biases, particularly duration bias, as it is directly affected by factors beyond user-item matching (_e.g._, video duration) without reflecting user preference (Han et al., 2017; Wang et al., 2018). This necessitates debiasing techniques during training (Wang et al., 2018; Wang et al., 2018). However, current debiasing techniques often require modifying the model architecture or training methods (Chen et al., 2018), limiting their applicability to readily available recommender systems. This study proposes a novel approach that utilizes watch time to generate explicit labels that capture different semantic aspects, enabling more accurate modeling of user preferences. The approach involves two types of labels: Watch-time Percentile Rank (WPR) labels and binary playback progress-related labels. The WPR label represents watch time but is discretized based on the percentile rank in the watch-time distribution, following the approach outlined by Roscoe (Rosenberg et al., 2019). This discretization ensures label balance and robustness against outliers. The binary playback progress-related labels are created by binarizing the playback progress using a threshold based on quantiles. By adjusting the quantile threshold, we can create labels that capture different semantics, such as whether a video has been effectively watched or completely watched, among others. Importantly, most of these labels are defined from a distribution perspective and emphasize relative ranking orders. This alignment with recommendation objectives facilitates effective learning. However, these labels can potentially inherit the bias of watch time, derived from the impact of factors beyond user-item matching, such as the duration of the video. To address the bias issue, we propose a causal adjustment-inspired method (Kraus et al., 2019) to refine the labels and mitigate the impact of these factors. The method involves grouping videos based on biased factors and independently applying our labeling process to each group. Within each group, we assume that the influence of biased factors is negligible, as biased factors have an equal impact on all videos within the group. Additionally, our distribution-level definition of labels ensures that similar label distributions are generated for each group, regardless of the presence of biased factors. This allows us to disregard the impact of biased factors across groups. By mitigating bias during the labeling process, we eliminate the need for debiasing during model training, making these methods applicable to existing models. Considering the inclusion of debiasing in our labeling framework, we refer to it as _Debiased Multi-semantics-extracting Labeling_ (DML). The main contributions of this work are summarized as follows: * We propose the incorporation of multiple label categories reflecting the distribution of watch time, aimed at augmenting the utility of watch-time data. In addition, we introduce a causality-based methodology that refines these categories while simultaneously addressing potential bias. * We introduce a novel framework for relabeling and debiasing that transforms unprocessed organic user feedback into a range of training labels while directly countering label bias issues. This framework aims to maximize the exploitation of information-dense feedback generated by users. * We conduct extensive offline and online evaluations, effectively demonstrating the superiority of our method over baselines. 
## 2. Methodology In this section, we first provide an overview of our Debiased Multi-semantic-extracting Labeling framework, and then elaborate on the proposed labeling and debiasing method. ### Workflow Overview Within the context of short-video applications, the primary form of user feedback is the watch time for each video. This feedback is integral to the development of recommender models. Instead of using this feedback in its original form as a data label, we transform it into a spectrum of labels that underscore its multifaceted semantic dimensions. These labels are subsequently integrated into a multi-task learning architecture to precisely map user preferences. Furthermore, we directly implement debiasing during the labeling phase, thus eliminating the requirement for further debiasing during model training. This refinement enhances the practicability of our approach. The comprehensive system workflow, as illustrated in Figure 1, is divided into three principal components. \(\bullet\)**Data pre-processing.** This section is dedicated to discussing the user log pre-processing phase. Our focus is on refining data quality by reducing noise and applying feature engineering, including statistical feature generation. These initial steps are crucial for data preparation before advanced analyses or model generation. \(\bullet\)**Labeling.** This section emphasizes the transformation of watch time into discrete watch-time labels and binary labels related to playback progression, underscoring unique semantics. The discrete watch-time labels, representing watch time, are discretized and balanced, and can be utilized as labels for regression tasks. In contrast, the binary labels related to playback progression indicate whether the user's video consumption has surpassed a specific threshold. Importantly, we incorporate debiasing directly into label generation at the label level, leveraging a causal methodology. This approach eliminates the necessity for debiasing during training, which often involves modifying model or training architectures. \(\bullet\)**Learning.** Once we have obtained the debiased labels, we employ the Multi-gate Mixture-of-Experts (MMoE) architecture, a widely used architecture in recommendation, to construct our recommender model (Rosenberg et al., 2019). Each label serves as an individual learning task, and we optimize the model to fit each task independently. By leveraging the diverse information captured by the different labels, we enhance the modeling of user interests. During the serving phase, we generate multiple ranking lists using the predictions from Figure 1. Overview of the system workflow for applying our DML framework, involving three main components: data pre-processing, labeling, and multi-task learning. individual tasks and the fused predictions. These diverse ranking lists serve as alternative recommendation routes. The data pre-processing and learning components resemble existing work. Hence, the core novelty lies in our labeling component. We now proceed to describe the methodology for acquiring the two distinct types of debiased labels within our labeling framework. ### Debiased Multi-Semantic-Extracting Labeling Framework In this section, we explain the proposed discrete watch time label (Watch-time Percentile Rank) and provide details on binary labels. #### 2.2.1. Watch-time Percentile Rank (WPR) Watch time has the potential to serve as a natural indicator. 
However, it often exhibits a substantial range and follows a characteristic long-tail distribution. In our scenario, the majority of videos exhibit a watch time of less than 120 seconds. However, there exist certain outliers that might extend beyond an hour, attributable to factors such as repetitive playback or association with particular video genres. To tackle this challenge, we propose the transformation of watch time into a label called "_Watch-time Percentile Rank_" (WPR). This new label provides a balanced and discrete representation, effectively mitigating the impact of outliers. \(\bullet\)_Definition of WPR_. The concept of utilizing quantiles as a robust measure against outliers, as demonstrated by Aravkin et al. (2018), serves as an inspiration for our approach in transforming watch time. We propose the use of special quantiles known as percentiles to convert watch time into a discrete and range-limited label called "Watch-time Percentile Rank" (WPR). The WPR is defined as the percentile rank (Srivastava et al., 2017) of a specific watch time within the overall population. It signifies the percentage of all watch times that are equal to or lower than the given watch time in its frequency distribution. As a result, the WPR label does not solely focus on the absolute value of watch time, but rather on its relative rank within the entire population. To generate labels for training examples, we follow these four steps: * **Step 1**: We rank all training examples based on their watch time feedback in ascending order. * **Step 2**: We choose the number of groups \(N\), and set the ratio of each group relative to the entire dataset as \(q_{n}\) (\(n\in[1,N]\)), subject to the constraint that \(q_{1}+\cdots+q_{N}=1\). * **Step 3**: We split the ranked data into \(N\) groups with the order kept, such that the ratio of data falling in the \(n\)-th group is equal to \(q_{n}\), _i.e.,_\(\frac{|\mathcal{D}_{n}|}{|\mathcal{D}|}=q_{n}\). Here \(\mathcal{D}_{n}\) denotes the data that falls in the \(n\)-th group and \(\mathcal{D}\) denotes all data. * **Step 4**: For each training example \((x,y)\) in \(\mathcal{D}_{n}\), we take \(y_{wpr}=\sum_{n^{\prime}=1}^{n}q_{n^{\prime}}\) as the WPR label1. Footnote 1: The original percentile rank would be multiplied by 100, _i.e.,_\(y_{wpr}\times 100\). Here we omit “\(\times 100\)”. Let us define the label generation process using the function \(g_{wpr}(\cdot)\). For each \((x,y)\in\mathcal{D}\), the transformed label is computed as \[y_{wpr}=g_{wpr}(x,y;\mathcal{D};\{q_{1},\ldots,q_{N}\}), \tag{1}\] where \(y\) denotes the original watch time. Notably, \(y_{wpr}\) is constrained within the range of \([0,1]\) and can only take on one of \(N\) possible prefix sum values, determined by the set \(\{q_{1},\ldots,q_{N}\}\). In order to achieve a balanced distribution of labels, one approach is to set \(q_{1}=q_{2}=\cdots=q_{N}=1/N\). This configuration is referred to as the "naive WPR" label. However, using this naive method can lead to densely populated labels in areas associated with short watch times. For example, in the Kuaishou scenario with \(N=300\), there are 41 unique label values within the range of 0-3 seconds, which is the same number as in the range of 40-90 seconds. Treating each possible label equally during training can result in the model exerting the same effort to differentiate between watch time differences of 0-3 seconds and 40-90 seconds, which is not desirable. 
To address this issue, we adopt a progressive finer-grained partitioning strategy, where the intervals between labels for short watch times are set larger, _i.e.,_ we enforce \(q_{1}\geq q_{2}\geq\cdots\geq q_{N}\). The specific values of \(q_{n}\) are determined based on empirical experience2. Footnote 2: In our scenarios, we select \(\{q_{1},\ldots,q_{N}\}\), making the watch time (\(y\)) and the WPR (\(y_{wpr}\)) satisfy: \(1/y_{wpr}=a[\ln(y)]^{2}+b\ln(y)+c\), where \(a,b\) and \(c\) are hyper-parameters and determined through empirical experience. \(\bullet\)_Label Debiasing_. As discussed by previous work (Han et al., 2018; Han et al., 2018), the watch-time feedback is not only affected by user-item matching but also directly by the video duration. We could abstract the causal relations with the causal graph in Figure 2, with \(M\to Y\) and \(V_{d}\to Y\) to represent the influence of user-item matching \(M\) and the direct influence of video duration \(V_{d}\) on \(Y\). Here we assume the matching of a video to \(U\) is determined by video features \(V_{r}\)(Han et al., 2018; Han et al., 2018), represented by \((U,V_{r})\to M\); and we take \(V_{d}\leftrightarrow V_{r}\) to represent that the item features could potentially affect each other or have a common cause, _e.g.,_ the video creator (Han et al., 2018). According to the graph, \(V_{d}\) is a confounder between \(V_{r}\) and \(Y\), and its confounding effects would result in bias. Meanwhile, its direct effects are entangled with the effects of user preference, which would mislead the learning of user preference, bringing bias. If the WPR label is created directly, it undoubtedly inherits such biases. To mitigate such biases, the key lies in cutting off the edge \(V_{d}\to Y\) to model the causal effect of \(M\), which could be estimated with the causal adjustment (Han et al., 2018; Han et al., 2018) as follows: \[\sum_{v_{d}}P(Y|U,V_{r},v_{d})P(v_{d}), \tag{2}\] where the summation enumerates all possible video durations \(v_{d}\). The adjustment works with two mechanisms (Han et al., 2018; Han et al., 2018). Firstly, for sub-populations with the same video duration \(V_{d}=v_{d}\), the influence of \(v_{d}\) is assumed to be the same. Within each sub-population, relative user preference can be directly obtained through prediction result comparison. Secondly, all possible values of \(V_{d}\) are enumerated with the same \(P(V_{d})\) for each sub-population in a counterfactual manner, which ensures that the influence of \(v_{d}\) is the same across all sub-populations. This enables the relative user preference to also be obtained via direct comparison across sub-populations. Figure 2. Causal graph to describe the generation process of watch time feedback. \(Y\) is not only affected by \(M\), but also directly by \(V_{d}\) without reflecting user preference. We try to simulate the adjustment process when generating the WPR label to achieve debiasing. To achieve this, we first split the interactions into different groups according to the video duration. Then, we simulate the first mechanism of the adjustment by independently running the WPR generation process for each group. As for the second mechanism of the adjustment, it is hard to forcibly change the video duration when generating the label to perform the enumeration. However, for each video duration group, the generated WPR label obeys the same distribution. 
We thus could roughly think that the influence of the video duration is the same over different groups, achieving a similar result to the second mechanism of the adjustment. Formally, let \(\mathcal{D}^{v_{d}}\) denote the group data with the video duration of \(v_{d}\); for each training example \((x,y)\) in \(\mathcal{D}^{v_{d}}\), we generate the debiased WPR label as follows: \[y^{d}_{wpr}=g_{wpr}\left(x,y;\mathcal{D}^{v_{d}};\{q_{1},\dots,q_{N}\}\right), \tag{3}\] where \(x\) could also be represented by \((u,v_{r},v_{d})\). Compared to the original generation process, the only difference is that we perform the process in \(\mathcal{D}^{v_{d}}\) instead of \(\mathcal{D}\). #### 2.2.2. Binary Labels We next consider creating binary labels to emphasize whether the watching of videos reaches a specific degree, such as whether a video has been effectively watched. To create such labels, we binarize the watch time with a specific percentile threshold. By varying the threshold, we could create different labels with different semantics. Here, we mainly introduce two labels: **Effective View (EV).** This label is used to reflect whether a video has been effectively viewed. Specifically, if the watch time on a video exceeds that of 50% of training examples, we think the video is effectively viewed. Formally, for a training example \((x,y)\in\mathcal{D}\), we define the effective view label \(y_{ev}\) as follows: \[y_{ev}=\mathbb{I}(y\geq t_{50}(\mathcal{D})), \tag{4}\] where \(\mathbb{I}(\cdot)\) denotes the indicator function, and \(t_{50}(\mathcal{D})\) denotes the 50th percentile (_i.e.,_ median) of watch time over \(\mathcal{D}\). **Long View (LV).** Similarly, we define the long view label to reflect whether the watch time of a training example has exceeded that of 75% of training examples. Formally, for a training example \((x,y)\) in \(\mathcal{D}\), the long view label \(y_{lv}\) is defined as: \[y_{lv}=\mathbb{I}(y\geq t_{75}(\mathcal{D})), \tag{5}\] where \(t_{75}(\mathcal{D})\) denotes the 75th percentile of watch time for \(\mathcal{D}\). \(\bullet\)_Label Debiasing._ It is evident that the EV and LV labels also suffer from duration bias, _e.g.,_ longer videos are more likely to obtain positive labels. We next present how to mitigate the bias for these labels. **Duration Bias.** As the EV and LV are generated based on watch time, the source of duration bias should be the same as that for the WPR label. We directly apply the same debiasing strategy, _i.e.,_ independently running the labeling process for groups split by the video duration. Taking the EV label as an example, for a training example \((x,y)\), we generate the debiased label as follows: \[y^{d}_{ev}=\mathbb{I}\left(y\geq t_{50}(\mathcal{D}^{v_{d}})\right), \tag{6}\] where \(v_{d}\) denotes the video duration of the sample, \(\mathcal{D}^{v_{d}}\) denotes all data with video duration of \(v_{d}\) in \(\mathcal{D}\), and \(t_{50}(\mathcal{D}^{v_{d}})\) denotes the median of watch time in \(\mathcal{D}^{v_{d}}\). **Other Biases.** According to previous work (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019), other biased factors, such as video popularity, could also play a similar causal role to the video duration, bringing bias. For the WPR label, we believe that duration bias plays the dominant role, and we thus ignore the biases brought by these biased factors. However, the EV and LV labels could be more susceptible to biases. 
For example, the EV and LV label of some videos/users could easily always be negative3 even after mitigating duration bias, especially for unpopular videos/inactive users. This would result in over-suppressing unpopular videos and unsatisfying inactive users. Footnote 3: We usually do not suffer from the issue for the WPR label, since we take a more fine-grained labeling strategy for it. To address this issue, a similar debiasing strategy to duration bias can be adopted, but a more fine-grained group split is needed based on the combination of all possible biased factors (Wang et al., 2019). However, such biased factors may be difficult to detect fully, and splitting based on their combination could result in very sparse groups. To overcome this challenge, we propose to use all available features of a user or video as a substitute for such factors and separately perform debiasing on the user side and item (_i.e.,_ video) side. Formally, for a training example \((x,y)\), we generate debiased labels from two sides as follows (with the EV label as an example): \[\text{item side}:y^{p}_{ev}=\mathbb{I}(y\geq t_{50}(\mathcal{D}^{p})),\] \[\text{user side}:y^{u}_{ev}=\mathbb{I}(y\geq t_{50}(\mathcal{D}^{u})), \tag{7}\] where \(v\) denotes video features containing \(v_{d}\) and \(v_{r}\), _i.e.,_\(v=[v_{d},v_{r}]\), \(\mathcal{D}^{a}\) (\(\mathcal{D}^{u}\)) denotes all samples with the video (user) features of \(v\) (\(u\)), and \(t_{50}(\mathcal{D}^{a})\) (\(t_{50}(\mathcal{D}^{u})\)) denotes its watch time median. ## 3. Experiments In this section, we conduct both offline and online experiments on real industrial data to evaluate our proposed method, answering the following three research questions: **RQ1:** How is the performance of the proposed DML compared with existing methods? **RQ2:** How do the design choices in DML affect its efficacy? **RQ3:** Can the proposed DML improve real-time video consumption on the Kuaishou video recommendation platform? ### Experimental Settings \(\bullet\)_Data Information._ We conduct experiments on industrial data from Kuaishou, one of the top short-video sharing platforms in China. As shown in Table 1, the size of daily active users on Kuaishou is around 346 million. Every day 45 million short videos are posted and these videos are played 46 billion times in total. On average, each user watches 133.7 short videos per day. To utilize rich behavior information, we collect full user historical behaviors from older logs back months ago. On average, each user watched 14,500 \begin{table} \begin{tabular}{l l l} \hline \hline Data & Field & Size \\ \hline \multirow{4}{*}{Daily Log Info} & Users & 345.5 million \\ & Videos & 45.1 million \\ & Samples & 46.2 billion \\ & Average User Actions & 133.7 / day \\ \hline \multirow{4}{*}{Historical Behaviors} & Average User Behaviors & 14.5 thousand \\ & Max User Behaviors & 100 thousand \\ \cline{1-1} \cline{2-3} & & \\ \hline \hline \end{tabular} \end{table} Table 1. Statistics of the industrial dataset constructed on the daily collected user logs from Kuaishou. videos in the past six months. We cut off the maximum user behavior sequence length to 100,000, which is about the annual total number of views for heavy users. Specifically, each data sample is associated with an important feature "video duration" denoting the total length of the video, and the feedback of "watch time" denoting the total seconds that the user played this video. \(\bullet\)_Baselines_. 
To verify the effectiveness of our DML, we compare it with the following methods that focus on leveraging watch time: * **TR [(19)].** This refers to the traditional regression method using watch time as the label directly and learning a regression model by minimizing mean squared error (MSE). * **WLR [(9)].** This refers to Youtube's Weighted Logistic Regression, which learns a logistic regression model by fitting all interactions, reweighted by watch times, and uses the learned odds to estimate watch time during prediction. * **OR [(22)].** This refers to an approach that uses ordinal regression [(22)] to fit watch time, which places emphasis on the relative order of watch times during the learning process. * **D2Q [(45)].** This is a SOTA method in watch time prediction. It uses backdoor adjustment to address duration bias and involves fitting duration-dependent quantiles of watch time using MSE. To ensure a fair comparison, we implement all methods within Kuaishou's multi-task learning framework, developed based on the MMoE algorithm [(26)] and including other tasks such as click prediction and complete watching prediction. For all methods, we employ the AdaGrad optimizer [(25)] in the embedding layer with a learning rate of 0.05. The DNN parameters are updated using the Adam optimizer [(18)] with a learning rate of 5e-6, and the batch size is set to 8192. These choices are based on empirical observations. ### Offline Evaluation (RQ1 & RQ2) **Offline Setting.** We use samples collected over a 23-hour period as the training data, with samples collected during the subsequent hour as the testing data. We evaluate watch-time prediction performance using two types of evaluation metrics: ranking metrics (AUC and GAUC, which is AUC averaged over users) and accuracy metrics (MAE, RMSE, and MAPE [(10)], which refers to Mean Absolute Percentage Error). We would convert the prediction results back into watch time when computing these metrics if quantile-based labels are employed, following [(45)]. #### 3.2.1. Performance Comparison (RQ1) The comparison results are summarized in Table 2, where we draw the following observations: * Our DML outperforms all baselines on all metrics, highlighting its effectiveness in unlocking the potential of watch time for modeling real user interest. This success can be attributed to its ability to extract diverse semantics contained in watch time and perform label debiasing. * D2Q outperforms other baselines in most of the metrics. This can be mainly attributed to its additional considerations for debiasing, which highlights the importance of addressing bias issues when leveraging watch time. However, compared to DML, although D2Q also conducts debiasing, it consistently performs worse. This is because D2Q fails to consider the diverse semantics of watch time that DML could capture. * TR exhibits the worst performance among all compared methods, likely due to its direct utilization of watch time as the sole label. Although OR utilizes watch time similarly, it outperforms TR, which may be attributed to its emphasis on the relative ranking order of watch time for learning user preference. #### 3.2.2. Ablation Studies (RQ2) We next conduct experiments to study the influence of design choices in the WPR labels and the necessity of binary labels (taking the EV label as an example). **Studies on the WPR Label.** There are two critical components when generating the WPR label: debiasing and progressive percentile splitting. 
Our debiasing relies on two main designs: 1) grouping based on duration and 2) using the percentile-based label to keep similar label distributions across different duration groups. To examine these designs, we conducted two variants of DML: DML without duration-based grouping (w/o DG), and DML with finishing playing rate (w/o WPR), which uses the playing rate label instead of the percentile-based label WPR. To verify the progressive percentile splitting, we introduce two additional variants: DML with equal-frequency WPR (w/ EF-WPR), _i.e._, making \(q_{k}=1/N\) in Equation (1), and DML with equal-width WPR (w/ EW-WPR), which makes each group of watch time span an equal width4. Figure 3 summarizes the results of DML and its variants on GAUC and MAE metrics. Footnote 4: The EW-WPR also cannot bring similar label distributions across duration groups. From the figure, we have two main observations. Firstly, we observe distinct declines in performance upon removing the duration-based grouping (w/o DG) or replacing the WPR with the playing rate (w/o WPR). These findings suggest that debiasing is crucial for capturing real user preference from watch time and that both debiasing designs are necessary. Secondly, the variants with equal-frequency WPR and equal-width WPR \begin{table} \begin{tabular}{c c c c c c} \hline \hline Methods & AUC\(\uparrow\) & GAUC\(\uparrow\) & MAE\(\downarrow\) & MAPE\(\downarrow\) & RMSE\(\downarrow\) \\ \hline TR & 0.6597 & 0.6397 & 24.1743 & 3.6892 & 45.4747 \\ WLR & 0.6711 & 0.6551 & 23.5743 & 3.1528 & 43.3776 \\ OR & 0.6727 & 0.6474 & 22.8930 & 3.5017 & 44.3891 \\ D2Q & 0.6732 & 0.6581 & 22.6728 & 2.7342 & 45.5293 \\ DML & **0.6763** & **0.6617** & **21.7657** & **2.6039** & **42.7735** \\ \hline \hline \end{tabular} \end{table} Table 2. Offline results regarding watch time prediction, where the best results are highlighted in bold and sub-optimal results are underlined. Figure 4. Ablation studies for the binary labels. Figure 3. Ablation studies for the WPR label. the original WPR. The equal-frequency WPR pursues a balanced label distribution but places excessive attention on shorter watch times, while DML w/ EW-WPR pursues equal treatment of different watch times without maintaining balance. The original WPR could strike a balance between the two variants, achieving better results. **Studies on the Binary Label.** We used the effective view (EV) label as an example to study the influence of binary labels. To better utilize the EV label, we primarily focus on resolving duration bias (\(y^{d}_{\text{ee}}\)), as well as other biases from the item side (\(y^{p}_{\text{ee}}\)) and user side (\(y^{p}_{\text{ee}}\)). To study the influence of these debiasing designs, we conduct experiments by removing the corresponding debiased labels, i.e., removing \(y^{d}_{\text{ee}}\), \(y^{p}_{\text{ee}}\), and \(y^{p}_{\text{ee}}\), respectively. The results are presented in Figure 4. It is evident that removing bias from each perspective is helpful for improving performance, with mitigating duration bias having the most significant influence on the effectiveness of DML. ### Online Evaluation (RQ3) **Online Setting.** We conduct an online A/B test to further evaluate the proposed method. Specifically, we run DML, WLR and D2Q, trained with offline data, for 5 consecutive days on Kuaishou's two recommendation channels: Featured-Video Tab and Slide Tab. We then report the average performance over the 5 days. 
Here, we use WLR as the control group, and implement DML by gradually adding different labels to study their influence, where each implementation is denoted by "DML(labels)". For example, DML (\(y^{d}_{\text{wpp}}\)) means only using the debiased WPR label. We utilize two evaluation metrics: Watch Time, measuring total user video consumption within a segment, and App Usage, offering insights into how frequently users access and engage with the Kuaishou App. **Performance Comparison.** The performance of the evaluated methods, as tested concurrently on the Kuaishou App, is presented in Table 3, where we draw three observations. Firstly, the improvement in performance upon utilizing the WPR label over WLR and D2Q demonstrates the effectiveness of utilizing our percentile rank-based labels. Secondly, the enhancement of both Watch Time and App Usage metrics when employing the debiased WPR label verifies the efficacy of the debiasing approach. Lastly, the incorporation of multiple EV or LV labels, addressing biases originating from video duration, user-side, and item-side factors, resulted in a rise in both metrics. This substantiates that these labels can capture distinct information and address various forms of biases. ## 4. Related Work \(\bullet\)**Watch-time Feedback Utilization.** Watch-time feedback is a crucial signal for user preferences in video recommendation (Wang et al., 2018). Leveraging this feedback to build effective recommenders has gained significant attention (Kraus et al., 2017; Chen et al., 2018; Li et al., 2018; Li et al., 2019; Li et al., 2019; Wang et al., 2018; Wang et al., 2018). Many of these approaches focus on creating a watch time prediction task to leverage this feedback (Kraus et al., 2017; Chen et al., 2018; Li et al., 2019; Li et al., 2019). For instance, the pioneering work WLR (Li et al., 2019) use the feedback to adjust clicks' weights during training, implicitly predicting watch time. Others directly use the feedback as labels for training prediction models (Kraus et al., 2017; Li et al., 2019; Li et al., 2019), potentially introducing bias towards longer videos. To address biases, DVR (Wang et al., 2018) introduces the Watch Time Gain metric, while D2Q (Wang et al., 2018) uses quantile-based labels based on backdoor adjustment (Wang et al., 2018). Unlike D2Q, our method adopts a progressive label-splitting strategy to form percentile-based labels (Section 2.2.1). Apart from watch time prediction tasks, some studies convert watch time into binary labels (Kraus et al., 2017; Li et al., 2019; Li et al., 2019), focusing on complete video watching, overlooking other aspects. In contrast, our approach captures diverse watch time semantics simultaneously, using multi-task learning to enhance recommendations mutually. Among these works, only DCR (Li et al., 2019) employs causal estimation to tackle biases but involves model modifications. Unlike DCR, our method performs debiasing during label generation without modifying the model. \(\bullet\)**Recommendation Debiasing.** Recommender systems grapple with biases (Li et al., 2019), including position bias (Kraus et al., 2017; Li et al., 2019; Li et al., 2019), selection bias (Li et al., 2019; Li et al., 2019; Li et al., 2019), and popularity bias (Wang et al., 2018; Wang et al., 2018)_etc_. Three key research avenues have emerged to address these biases. 
The first approach centers on reweighting (Kraus et al., 2017; Li et al., 2019; Li et al., 2019), with inverse propensity scores (Li et al., 2019) as a representative technique. This method modifies the training distribution via reweighting to achieve debiasing. However, estimating weights can pose challenges (Li et al., 2019). The second approach employs unbiased data for model learning (Kraus et al., 2017; Chen et al., 2018; Li et al., 2019; Li et al., 2019), though obtaining such data is often costly. A third avenue targets biases from a causal standpoint, categorized into intervention (Kraus et al., 2017; Li et al., 2019; Li et al., 2019; Wang et al., 2018) and counterfactual methods (Kraus et al., 2017; Li et al., 2019). The intervention method incorporates causal adjustments (Kraus et al., 2017; Li et al., 2019) like backdoor adjustment or other deconfounding strategies (Kraus et al., 2019; Li et al., 2019), such as representation balancing, to tackle bias from confounding (Wang et al., 2018). The counterfactual method employs counterfactual reasoning to debias, typically by comparing factual and counterfactual scenarios (Kraus et al., 2017; Li et al., 2019). These methods generally perform debiasing during model learning, potentially altering model structures or training processes. In contrast, our causal method operates in full pre-processing, debiasing during label generation or refinement. ## 5. Conclusion This study delves into leveraging watch time for improved and unbiased modeling of user preferences. We introduce an innovative causal labeling framework, DML, which employs watch time to generate various labels encompassing diverse semantic aspects while incorporating debiasing techniques. Both online and offline experiments are executed on real-world data, offering valuable insights into our approach's efficacy. Results highlight DML's potential in enhancing online video consumption, with its deployment already underway in Kuaishou. However, the current label definition and debiasing process rely on prior knowledge and human expertise. In the future, we plan to develop automated methods to streamline this and align better with online recommendations. ## Acknowledgments This work is supported by the National Natural Science Foundation of China (62272437), and the CCCD Key Lab of Ministry of Culture and Tourism. \begin{table} \begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Business Scenarios} & \multicolumn{2}{c}{Featured-Video Tab} & Slide Tab \\ \cline{2-4} & WT & AU & WT & AU \\ \hline WLR & - & - & - & - \\ D2Q & +0.273\% & +0.114\% & +0.358\% & +0.135\% \\ DML(\(y_{\text{wpp}}\)) & +0.694\% & +0.207\% & +0.648\% & +0.228\% \\ DML(\(y^{d}_{\text{ee}}\)) & +1.048\% & +0.332\% & +1.080\% & +0.412\% \\ DML(\(y^{d}_{\text{ee}}\),\(y^{p}_{\text{ee}}\),\(y^{p}_{\text{ee}}\),\(y^{p}_{\text{ee}}\)) & +2.230\% & +0.773\% & +2.057\% & +0.709\% \\ DML(\(y^{d}_{\text{ee}}\),\(y^{p}_{\text{ee}}\),\(y^{p}_{\text{ee}}\),\(y^{p}_{\text{ee}}\)) & +1.549\% & +0.566\% & +1.284\% & +0.555\% \\ \hline \hline \end{tabular} \end{table} Table 3. Online A/B test results. “WT” denotes the metric Watch Time, and “AU” denotes the metric App Usage.
With the proliferation of short-video applications, the significance of short-video recommendation has increased substantially. Unlike other recommendation scenarios, short-video recommender systems rely heavily on feedback in the form of watch time. Existing methods treat watch time directly as a label, which prevents them from effectively exploiting its rich semantics and introduces bias, thereby limiting their ability to model user interest from watch time. To overcome this challenge, we propose a framework called Debiased Multiple-semantics-extracting Labeling (DML). DML constructs labels covering a variety of semantics by using quantiles derived from the watch-time distribution. Emphasizing relative order rather than absolute label values facilitates easier model learning. In addition, by introducing a method based on causal adjustment that refines the label definitions, we directly mitigate bias at the label level. The DML framework
2309.08383
Dynamical Analysis of an Allelopathic Phytoplankton Model with Fear Effect
This paper is the first to propose an allelopathic phytoplankton competition ODE model influenced by a fear effect based on natural biological phenomena. It is shown that the interplay of this fear effect and the allelopathic term cause rich dynamics in the proposed competition model, such as global stability, transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation. We also consider the spatially explicit version of the model and prove analogous results. Numerical simulations verify the feasibility of the theoretical analysis. The results demonstrate that the primary cause of the extinction of non-toxic species is the fear of toxic species compared to toxins. Allelopathy only affects the density of non-toxic species. The discussion provides guidance for the conservation of species and the maintenance of biodiversity.
Shangming Chen, Fengde Chen, Vaibhava Srivastava, Rana D. Parshad
2023-09-15T13:16:36
http://arxiv.org/abs/2309.08383v1
# Dynamical Analysis of an Allelopathic Phytoplankton Model with Fear Effect

###### Abstract

This paper is the first to propose an allelopathic phytoplankton competition ODE model influenced by a fear effect based on natural biological phenomena. It is shown that the interplay of this fear effect and the allelopathic term causes rich dynamics in the proposed competition model, such as global stability, transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation. We also consider the spatially explicit version of the model and prove analogous results. Numerical simulations verify the feasibility of the theoretical analysis. The results demonstrate that the primary cause of the extinction of non-toxic species is the fear of toxic species rather than the toxins themselves. Allelopathy only affects the density of non-toxic species. The discussion provides guidance for the conservation of species and the maintenance of biodiversity.

_Keywords_: Allelopathy; Competition; Global Stability; Transcritical Bifurcation; Pitchfork Bifurcation; Saddle-node Bifurcation; Reaction-diffusion system.

+ Footnote †: Author for correspondence

## 1 Introduction

Phytoplankton are at the base of aquatic food webs and of global importance for ecosystem functioning and services (Winder & Sommer, 2012). Moreover, phytoplankton also contribute significantly to economic growth, benefiting the biotechnology, pharmaceutical, and nutraceutical sectors [Pradhan & Ki, 2022]. Hence, the investigation of phytoplankton species density is of considerable scientific importance. A distinctive phenomenon among phytoplankton species occurs when secondary metabolites generated by one phytoplankton species have an inhibitory influence on the development or physiological functioning of another [Legrand _et al._, 2003]. This behavior, in which phytoplankton compete with their peers by releasing toxic compounds, is often called allelopathy. Numerous studies have shown that allelopathy plays a crucial role in the competitive dynamics of phytoplankton. For example, Mulderij _et al._ [Mulderij _et al._, 2006] investigated the allelopathic potential of exudates from the aquatic macrophyte _Stratiotes aloides_ on the growth of phytoplankton. The study results show that _Stratiotes aloides_ exerts an allelopathic effect on phytoplankton, inhibiting the growth of other algae by releasing toxins. Maynard-Smith [Maynard-Smith, 1974] added an allelopathic term to the classical two-species Lotka-Volterra competition model in order to account for the harmful impacts exerted by one species on the other: \[\left\{\begin{aligned} &\frac{\mathrm{d}N_{1}(t)}{\mathrm{d}t}=N_{1}(t) \left[\alpha_{1}-\beta_{1}N_{1}(t)-v_{1}N_{2}(t)-\gamma_{1}N_{1}(t)N_{2}(t) \right],\\ &\frac{\mathrm{d}N_{2}(t)}{\mathrm{d}t}=N_{2}(t)\left[\alpha_{2} -\beta_{2}N_{2}(t)-v_{2}N_{1}(t)-\gamma_{2}N_{1}(t)N_{2}(t)\right],\end{aligned}\right. \tag{1}\] where \(N_{i}(t)\) (\(i=1,2\), the same below) is the density of the \(i\)-th competing phytoplankton species, \(\alpha_{i}\) represents the rate of daily cell proliferation, \(\beta_{i}\) denotes the intraspecific competition rate of the \(i\)-th species, \(v_{i}\) stands for the rate of interspecific competition, and \(\gamma_{i}\) represents the toxicity release rate from the other species to the \(i\)-th species. The initial conditions satisfy \(N_{i}(0)>0\). Based on the work of Maynard-Smith, many scholars have considered the situation where only one species releases toxins.
Chen _et al._ [Chen _et al._, 2013] proposed a discrete system for toxin release from single species: \[\left\{\begin{aligned} x_{1}(n+1)&=x_{1}(n) \mathrm{exp}\left[r_{1}(n)-a_{11}(n)x_{1}(n)-a_{12}(n)x_{2}(n)-b_{1}(n)x_{1}(n )x_{2}(n)\right],\\ x_{2}(n+1)&=x_{2}(n)\mathrm{exp}\left[r_{2}(n)-a_{2 1}(n)x_{1}(n)-a_{22}(n)x_{2}(n)\right].\end{aligned}\right. \tag{2}\] The authors proved the extinction and global stability conditions for system (2). It was found that the extinction of system (2) is not affected at low rates of toxin release, meaning that the toxic species cannot extinguish non-toxic species. Further studies on single toxic species were conducted in [Chen _et al._, 2016, 2023]. However, in reality, a non-toxic species can go extinct even if it is only affected by lower concentrations of toxins. In other words, what factors other than degradation by actual toxins might affect the density of competing phytoplankton species, without additional external factors interfering? Since the effect of allelopathy is based on the classical Lotka-Volterra competition model, we will consider competitive fear. In 2016, Wang _et al._ [Wang _et al._, 2016] considered the fear effect for the first time based on the classical two-species Lotka-Volterra predator-prey model: \[\left\{\begin{aligned} &\frac{dx}{dt}=rxf(k,y)-dx-ax^{2}-g(x)y,\\ &\frac{dy}{dt}=-my+cg(x)y,\end{aligned}\right. \tag{3}\] where \(a\) represents the mortality rate due to intraspecific competition of the prey, \(g(x)\) is the functional predation rate of the predator, and \(f(k,y)=\dfrac{1}{1+ky}\) represents the anti-predation response of the prey due to the fear of the predator, i.e., the fear effect function. The researchers found that under conditions of Hopf bifurcation, an increase in fear level may shift the direction of Hopf bifurcation from supercritical to subcritical when the birth rate of prey increases accordingly. Numerical simulations also suggest that animals' anti-predator defenses increase as the predator attack rate increases. Further research on the fear effect of the predator-prey model can be seen in [Lai _et al._, 2020; Liu _et al._, 2022]. By studying the fear effect of the predator-prey model, scholars generally agree that the non-consumptive effect of fear on the density of bait species is more significant than depredation on them. In connection with natural biological phenomena, prey perceives the risk of predation and respond with a range of anti-predatory responses, such as changes in habitat selection and foraging behavior [Polis _et al._, 1989; Peckarsky _et al._, 2008]. These changes in various forms, may ultimately affect the overall reproductive rate of the prey population. The effect of fear on predator-prey systems has been extensively studied, but fear has been considered far less in competition systems. However, there is strong evidence that fear exists in purely competitive systems without predation effects or where predation effects are negligible [Chesson & Kuang, 2008; Wiens_et al._, 2014]. The Barred Owl (_Strix varia_) is a species of Owl native to eastern North America. During the last century, they have expanded their range westward and have been recognized as an invasion of the western North American ecosystem--their range overlaps with the Spotted Owl (_Strix occidentalis_). The Spotted Owl is native to northwestern and western North America, which has led to intense competition between two species [Long & Wolfe, 2019]. 
The Barred Owl has a strong negative impact on the Spotted Owl, and field observations have reported that barred owls frequently attack spotted owls [Van Lanen _et al._, 2011]. Evidence also shows that barred owls actively and unilaterally drive spotted owls out of shared habitat [Wiens_et al._, 2014]. Such evidence motivates us to consider the fear effect in a purely competitive two-species model, in which one competitor causes fear to the other. Thus, Srivastava _et al._[Srivastava _et al._, 2023] considered the classical two-group Lotka-Volterra competition model with only one competitor causing fear to the other competitor: \[\left\{\begin{aligned} \frac{du}{dt}&=a_{1}u-b_{1}u^{2}-c_{1}uv,\\ \frac{dv}{dt}&=\frac{a_{2}v}{1+ku}-b_{2}v^{2}-c_{2} uv.\end{aligned}\right. \tag{4}\] Their study found that the fear effect leads to exciting dynamics such as saddle-node bifurcation and transcritical bifurcation in system (4). This is not found in the classical Lotka-Volterra competition model. In extension to this work, Chen _et al._[Chen _et al._, 2023] also proved several interesting dynamics for a two-species competitive ODE and PDE systems where an Allee and the fear effect are both present. Inspired by the above works, we aim to investigate how the fear parameter affects competitive allebot-pathic planktonic systems by introducing a fear effect term, where the non-toxic species is "fearful" of the toxic population. Thus we propose the following model: \[\left\{\begin{aligned} \frac{\mathrm{d}x_{1}}{\mathrm{d}\tau}& =x_{1}\left(r_{1}-\alpha_{1}x_{1}-\beta_{1}x_{2}\right),\\ \frac{\mathrm{d}x_{2}}{\mathrm{d}\tau}&=x_{2}\left( \frac{r_{2}}{1+\eta x_{1}}-\alpha_{2}x_{2}-\beta_{2}x_{1}-\xi x_{1}x_{2} \right),\end{aligned}\right. \tag{5}\] where \(\eta\) is the fear effect parameter and \(\xi\) represents the toxic release rate. In the current manuscript we perform a complete dynamical analysis of system (5) with the following innovations: * System (5) has at most two positive equilibria, while the global stability of positive equilibria is influenced by the fear effect parameter \(\eta\) and the interspecific competition rate \(\beta_{1}\). * Changing the values of the fear effect \(\eta\) and the interspecific competition rate \(\beta_{1}\) will cause system (5) to experience a transcritical bifurcation at the boundary. At the same time, the toxic release rate \(\xi\) will transform the transcritical bifurcation into a pitchfork bifurcation. * The toxic release rate \(\xi\) causes system (5) to undergo a saddle-node bifurcation in the quadrant. * The toxic release rate \(\xi\) only affects the non-toxic species density, while the fear effect \(\eta\) can lead to the extinction of non-toxic species. * In the spatially explicit system or the PDE case, we analogously see that attraction to boundary equilibrium or an interior equilibrium are both possible depending on parametric restrictions and initial conditions, see theorems 14 & 15. Furthermore strong competition type dynamics are also possible, again depending on parametric restrictions and initial conditions, see theorem 16. The rest of this paper is organized as follows: The conditions for the system's permanence are laid forth in Section 2, which also demonstrates the solution's positivity and boundness. We examine the existence and types of all equilibria in Section 3 and Section 4. Also, the global stability of positive equilibria is studied in Section 5. In Section 6, we analyze the bifurcation of the system around the equilibria. 
Numerical simulations are performed in Section 7 to verify the theoretical analysis's feasibility, showing how fear effect and toxin release rate can affect species density. We end this paper with a brief conclusion. ## 2 Preliminaries In order to reduce the parameters of system (5), the following dimensionless quantities are applied to the non-dimensionalize model system (5) \[t=r_{2}\tau,\quad\frac{x_{1}}{k_{1}}=x,\quad\frac{x_{2}}{k_{2}}=y,\quad\eta k_{ 1}=k,\quad\frac{\xi k_{1}k_{2}}{r_{2}}=m,\quad\frac{\beta_{2}k_{1}}{r_{2}}=a, \quad\frac{r_{1}}{r_{2}}=b,\quad\frac{\beta_{1}k_{2}}{r_{1}}=c,\] then system (5) becomes the following system: \[\left\{\begin{aligned} &\frac{\mathrm{d}x}{\mathrm{d}t}=bx\left(1-x-cy \right)=xf(x,y)\equiv F(x,y),\\ &\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy \right)=yg(x,y)\equiv G(x,y),\end{aligned}\right. \tag{6}\] all parameters in system (6) are positive. Based on biological considerations, the initial condition of system (6) satisfies \[x(0)>0,y(0)>0. \tag{7}\] ### Positivity and boundedness of the solutions **Theorem 1**: _All solutions of system (6) are positive._ Since \[x(t)=x(0)\mathrm{exp}\left[\int_{0}^{t}f(x(s),y(s))\mathrm{d}s\ \right]>0,\] and \[y(t)=y(0)\mathrm{exp}\left[\int_{0}^{t}g(x(s),y(s))\mathrm{d}s\ \right]>0.\] So all solutions of system (6) with initial condition (7) are positive. This completes the proof. **Lemma 1**: _[_Chen_, 2005_]_ _If \(a,b>0\) and \(x(0)>0\),_ * \(\limsup_{t\to+\infty}x(t)\leq\frac{a}{b}\) _when_ \(x^{{}^{\prime}}(t)\leq x(t)(a-bx(t))\)_,_ * \(\liminf_{t\to+\infty}x(t)\geq\frac{a}{b}\) _when_ \(x^{{}^{\prime}}(t)\geq x(t)(a-bx(t))\)_._ **Theorem 2**: _The solutions of system (6) are bounded._ _Proof._ According to the first equation of system (6), \[\frac{\mathrm{d}x}{\mathrm{d}t}=bx\left(1-x-cy\right)\leq x(b-bx),\] by applying Lemma 1 to the above inequality, we have \[\limsup_{t\rightarrow+\infty}x(t)\leq\frac{b}{b}=1. \tag{8}\] Similarly, according to the second equation of system (6), we have \[\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy\right)\leq y(1- y),\] so \[\limsup_{t\rightarrow+\infty}y(t)\leq 1. \tag{9}\] This completes the proof. ### Permanence of the system **Definition 2.1**.: System (6) is considered to be permanent if there are two positive constants, denoted as \(m\) and \(M\), which are not dependent on the solutions of system (6), such that each positive solution \((x(t,x_{0},y_{0}),y(t,x_{0},y_{0}))\) of system (6) with the initial condition \((x_{0},y_{0})\in Int(R_{+}^{2})\) satisfies \[m\leq\liminf_{t\rightarrow+\infty}x(t,x_{0},y_{0})\leq\limsup_{t\rightarrow+ \infty}x(t,x_{0},y_{0})\leq M,\] \[m\leq\liminf_{t\rightarrow+\infty}y(t,x_{0},y_{0})\leq\limsup_{t\rightarrow+ \infty}y(t,x_{0},y_{0})\leq M.\] **Theorem 3**.: _System (6) is permanent if \(0<k<k^{*}\) and \(0<c<1\)._ _Proof._ From (8) and (9), for \(\varepsilon>0\) small enough without loss of generality, there is \(T>0\) such that, for \(t>T\), we have \[x(t)\leq 1+\varepsilon,\quad y(t)\leq 1+\varepsilon.\] According to the first equation of system (6), \[\frac{\mathrm{d}x}{\mathrm{d}t}=x\left[(b-bcy)-bx\right]\geq x\left[(b-bc(1+ \varepsilon))-bx\right],\] by applying Lemma 1 to above differential inequality, we have \[\liminf_{t\rightarrow+\infty}x(t)\geq 1-c(1+\varepsilon).\] Setting \(\varepsilon\to 0\) in above inequality leads to \[\liminf_{t\rightarrow+\infty}x(t)\geq 1-c. 
\tag{10}\] Similarly, according to the second equation of system (6), \[\frac{\mathrm{d}y}{\mathrm{d}t}=y\left(\frac{1}{1+kx}-y-ax-mxy\right)\geq y \left[(\frac{1}{1+k(1+\varepsilon)}-a(1+\varepsilon))-(1+m(1+\varepsilon)) y\right],\] by applying Lemma 1 to above differential inequality, we have \[\liminf_{t\rightarrow+\infty}y(t)\geq\frac{\frac{1}{1+k(1+\varepsilon)}-a(1+ \varepsilon)}{1+m(1+\varepsilon)}.\] Setting \(\varepsilon\to 0\) in above inequality leads to \[\liminf_{t\rightarrow+\infty}y(t)\geq\frac{\frac{1}{1+k}-a}{1+m}. \tag{11}\] In summary, we select \(M=1\), \(m=\min\left\{1-c,\frac{\frac{1}{1+k}-a}{1+m}\right\}\), which obviously independent of the solution of system (6). Let \(\frac{1}{a}-1\triangleq k^{*}\). Then, (8), (9), (10) and (11) show that system (6) is permanent under the assumption of Theorem 3. This completes the proof. ## 3 Boundary Equilibria and Their Types It is obvious that system(6) includes two boundary equilibria \(E_{1}(1,0)\), \(E_{2}(0,1)\), as well as a constant equilibrium point \(E_{0}(0,0)\). In the following, we will examine the types of them. The Jacobian matrix of system (6) is given by \[J(E)=\begin{bmatrix}-b(2x+cy-1)&-bcx\\ -y\left[\frac{k}{(1+kx)^{2}}+a+my\right]\frac{1}{1+kx}-(2my+a)x-2y\end{bmatrix} \triangleq\begin{bmatrix}B_{1}\ B_{2}\\ B_{3}\ B_{4}\end{bmatrix}. \tag{12}\] From this, we can obtain \[J(E_{0})=\begin{bmatrix}b\ 0\\ 0\ 1\end{bmatrix}, \tag{13}\] \[J(E_{1})=\begin{bmatrix}-b&-bc\\ 0&\frac{1}{1+k}-a\end{bmatrix}, \tag{14}\] \[J(E_{2})=\begin{bmatrix}b(-c+1)&0\\ -a-k-m\ -1\end{bmatrix}. \tag{15}\] Then we get the following theorem. **Theorem 4**: _The types of boundary equilibria are illustrated in the following:_ 1. \(E_{0}\) _is always a source._ 2. \(E_{1}\) _is a hyperbolic stable node when_ \(k>k^{*}\)_._ 3. _When_ \(k=k^{*}\)_,_ 1. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the upper half-plane if_ \(m>m^{*}\) _(Fig._ 1_(a))._ 2. \(E_{1}\) _is an attracting saddle-node, and the parabolic sector is on the lower half-plane if_ \(0<m<m^{*}\) _(Fig._ 1_(b))._ 3. \(E_{1}\) _is a nonhyperbolic saddle if_ \(m=m^{*}\) _(Fig._ 1_(c))._ 4. \(E_{1}\) _is a hyperbolic saddle when_ \(0<k<k^{*}\)_._ 5. \(E_{2}\) _is a hyperbolic stable node when_ \(c>1\)_._ 6. _When_ \(c=1\)_,_ 1. \(E_{2}\) _is an attracting saddle-node, and the parabolic sector is on the right half-plane if_ \(0<m<m^{**}\) _(Fig._ 2_(a))._ 2. \(E_{2}\) _is an attracting saddle-node, and the parabolic sector is on the left half-plane if_ \(m>m^{**}\) _(Fig._ 2_(b))._ 3. \(E_{2}\) _is a degenerate stable node if_ \(m=m^{**}\) _(Fig._ 2_(c))._ _._ 3. \(E_{2}\) _is a hyperbolic saddle if_ \(0<c<1\)_._ Due to \(\lambda_{1}^{E_{0}}=b>0\), \(\lambda_{2}^{E_{0}}=1>0\), so \(E_{0}\) is always a source. For \(E_{1}\), \(\lambda_{1}^{E_{1}}=-b<0\). When \(\lambda_{2}^{E_{1}}<0\), i.e., \(k>k^{*}\), \(E_{1}\) is a hyperbolic stable node. When \(\lambda_{2}^{E_{1}}>0\), i.e., \(0<k<k^{*}\), \(E_{1}\) is a hyperbolic saddle. When \(\lambda_{2}^{E_{1}}=0\), i.e., \(k=k^{*}\), \(E_{1}\) is a degenerate equilibrium point. We then have the following debate. The equilibrium point \(E_{1}\) is translated to the origin by applying the transformation \((X,Y)=(x-1,y)\). 
We perform a Taylor expansion around the origin, then system (6) becomes \[\left\{\begin{aligned} \frac{\mathrm{d}X}{\mathrm{d}t}& =-bX-bcY-bcXY-bX^{2},\\ \frac{\mathrm{d}Y}{\mathrm{d}t}&=-\left(1+m\right)Y ^{2}+a\left(-2+a\right)XY+a\left(-1+a\right)^{2}X^{2}Y-mXY^{2}+P_{1}(X,Y),\end{aligned}\right.\] where \(P_{i}(X,Y)\) are power series in \((X,Y)\) with terms \(X^{I}Y^{J}\) satisfying \(I+J\geq 4\) (the same below). Figure 1: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. The value of the toxin release rate \(m\) affects the solution orbit near the boundary equilibrium point \(E_{1}\). In the next step, we make the following transformations to the above system \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}-bc&-b\\ b&0\end{bmatrix}\begin{bmatrix}X_{1}\\ Y_{1}\end{bmatrix},\] and letting \(\tau=-bt\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\begin{cases}\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{20}X_{1}^{2}+a_{11}X_{1}Y _{1}+a_{30}X_{1}^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2},\\ \dfrac{\mathrm{d}Y_{1}}{\mathrm{d}t}=Y_{1}+b_{20}X_{1}^{2}+b_{11}X_{1}Y_{1}+b_ {02}Y_{1}^{2}+b_{30}X_{1}^{3}+b_{21}X_{1}^{2}Y_{1}+b_{12}X_{1}Y_{1}^{2}+P_{2}(X _{1},Y_{1}),\end{cases} \tag{16}\] where \[\begin{split} a_{20}&=\left(a^{2}c-2ac+m+1\right),\quad a_{11 }=a(-2+a),\quad a_{30}=-bc\left(a^{3}c-2a^{2}c+ac+m\right),\\ a_{21}&=-b\left(2a^{3}c-4a^{2}c+2ac+m\right),\quad a_{12}=-a\left(-1+a \right)^{2}b,\quad b_{20}=-\left(a^{2}c-2ac+m+1\right)c,\\ b_{11}&=c\left(a^{2}-2a+b\right),\quad b_{02}=-b,\quad b_{30}=bc^{2}\left(a ^{3}c-2a^{2}c+ac+m\right),\quad b_{12}=a\left(-1+a\right)^{2}bc,\\ b_{21}&=b\left(2a^{3}c-4a^{2}c+2ac+m\right)c.\end{split}\] Therefore, according to Theorem 7.1 in Chapter 2 of [22], if \(a_{02}>0\), i.e., \(m>-1+\left(-a^{2}+2a\right)c\triangleq m^{*}\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the upper half-plane (Fig. 1(a)). If \(a_{02}<0\), i.e., \(0<m<m^{*}\), \(E_{1}\) is an attracting saddle-node, and the parabolic sector is on the lower half-plane (Fig. 1(b)). If \(a_{02}=0\), i.e., \(m=m^{*}\), system (16) becomes \[\begin{cases}\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{11}X_{1}Y_{1}+a_{30}X_{1 }^{3}+a_{21}X_{1}^{2}Y_{1}+a_{12}X_{1}Y_{1}^{2},\\ \dfrac{\mathrm{d}Y_{1}}{\mathrm{d}t}=Y_{1}+b_{11}X_{1}Y_{1}+b_{02}Y_{1}^{2}+b_ {30}X_{1}^{3}+b_{21}X_{1}^{2}Y_{1}+b_{12}X_{1}Y_{1}^{2}+P_{2}(X_{1},Y_{1}). \end{cases} \tag{17}\] By the existence theorem of the implicit function, it follows that \(Y_{1}=\phi(X_{1})\) can be solved from the second equation of system (17) in a sufficiently small domain at the origin \((0,0)\) and satisfies \(\phi(0)=\phi^{{}^{\prime}}(0)=0\). Substituting \[Y_{1}=\phi(X_{1})=-b_{30}X_{1}^{3}+\cdots\cdots.\] into the first equation of system (17), we get \[\dfrac{\mathrm{d}X_{1}}{\mathrm{d}t}=a_{30}X_{1}^{3}+\cdots\cdots.\] where \[a_{30}=-bc\left(a^{3}c-3a^{2}c+3ac-1\right).\] From \(m^{*}=-1+\left(-a^{2}+2a\right)c>0\), we get \[-bc\left(a^{3}c-3a^{2}c+3ac-1\right)<-abc^{2}(a-1)^{2}<0.\] According to Theorem 7.1 again, \(E_{1}\) is a nonhyperbolic saddle since \(a_{03}<0\) (Fig. 1(c)). For \(E_{2}\), \(\lambda_{2}^{E_{2}}=-1<0\). When \(\lambda_{1}^{E_{2}}<0\), i.e., \(c>1\), \(E_{2}\) is a hyperbolic stable node. When \(\lambda_{1}^{E_{2}}>0\), i.e., \(0<c<1\), \(E_{2}\) is a hyperbolic saddle. When \(\lambda_{1}^{E_{2}}=0\), i.e., \(c=1\), \(E_{2}\) is a degenerate equilibrium point. 
Then we conduct the following discussion. We move equilibrium \(E_{2}\) to the origin by transforming \((X_{2},Y_{2})=(x,y-1)\) and make Taylor's expansion around the origin, then system (6) becomes \[\begin{cases}\dfrac{\mathrm{d}X_{2}}{\mathrm{d}t}=-bX_{2}^{2}-bX_{2}Y_{2},\\ \dfrac{\mathrm{d}Y_{2}}{\mathrm{d}t}=-\left(a+k+m\right)X_{2}-Y_{2}+k^{2}X_{2 }^{2}-\left(2m+a+k\right)X_{2}Y_{2}-Y_{2}^{2}-k^{3}X_{2}^{3}+k^{2}X_{2}^{2}Y_{ 2}-mX_{2}Y_{2}^{2}+P_{3}(X_{2},Y_{2}).\end{cases}\] In the next step, we make the following transformations to the above system \[\begin{bmatrix}X_{2}\\ Y_{2}\end{bmatrix}=\begin{bmatrix}-\dfrac{1}{a+k+m}\;0\\ 1\end{bmatrix}\begin{bmatrix}X_{3}\\ Y_{3}\end{bmatrix},\] and letting \(\tau=-t\), for which we will retain \(t\) to denote \(\tau\) for notational simplicity, we get \[\left\{\begin{aligned} &\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=-\dfrac{b \left(-1+a+k+m\right)}{\left(a+k+m\right)^{2}}{X_{3}}^{2}-\dfrac{b}{a+k+m}X_{3} Y_{3},\\ &\frac{\mathrm{d}Y_{3}}{\mathrm{d}t}=Y_{3}+c_{20}X_{3}^{2}+c_{11}X _{3}Y_{3}+Y_{3}^{2}+c_{30}X_{3}^{3}+c_{21}X_{3}^{2}Y_{3}+c_{12}X_{3}Y_{3}^{2}+ P_{4}(X_{3},Y_{3}),\end{aligned}\right. \tag{18}\] where \[c_{20} =\frac{ma+k^{2}+mk+m^{2}}{\left(a+k+m\right)^{2}},\quad c_{11}= \frac{a+k}{a+k+m},\quad c_{21}=-\frac{2ma+k^{2}+2mk+2m^{2}}{\left(a+k+m\right)^ {2}},\quad c_{12}=-\frac{m}{a+k+m},\] \[c_{30} =-\frac{a^{2}m+k^{2}a+2akm+2a\,m^{2}+2k^{3}+2k^{2}m+2k\,m^{2}+m^{ 3}}{\left(a+k+m\right)^{3}}.\] Figure 2: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. The value of the toxin release rate \(m\) affects the solution orbit near the boundary equilibrium point \(E_{2}\). We define \(m^{**}=1-a-k\). Hence by Theorem 7.1, if \(0<m<m^{**}\), \(E_{2}\) is an attracting saddle-node, and the parabolic sector is on the right half-plane (Fig. 2(a)). If \(m>m^{**}\), \(E_{2}\) is an attracting saddle-node, and the parabolic sector is on the left half-plane (Fig. 2(b)). If \(m=m^{**}\), system (18) becomes \[\left\{\begin{aligned} &\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=-bX_{3}Y_{3},\\ &\frac{\mathrm{d}Y_{3}}{\mathrm{d}t}=Y_{3}+c_{20}X_{3}^{2}+c_{11 }X_{3}Y_{3}+Y_{3}^{2}+c_{30}X_{3}^{3}+c_{21}X_{3}^{2}Y_{3}+c_{12}X_{3}Y_{3}^{2} +P_{4}(X_{3},Y_{3}).\end{aligned}\right. \tag{19}\] By using the second equation of system (19), we obtain the implicit function \[Y_{3}=-c_{20}X_{3}^{2}+(c_{11}c_{20}-c_{30})X_{3}^{3}+\cdots\cdots\] and \[\frac{\mathrm{d}X_{3}}{\mathrm{d}t}=bc_{20}X_{3}^{3}+\cdots\cdots\,,\] where \[bc_{20}=\frac{b(ma+k^{2}+mk+m^{2})}{\left(a+k+m\right)^{2}}>0.\] According to Theorem 7.1 again, \(E_{2}\) is a degenerate stable node due to the negative time transformations (Fig. 2(c)). Remark 1: The biological significance of the parameters \(k\) and \(c\) are the fear effect of non-toxic (\(y\)) species and the interspecific competition rate of toxic (\(x\)) species, respectively. By analyzing the type of boundary equilibria, non-toxic and toxic species will become extinct when \(k>k^{*}\) and \(c>1\), respectively. Figure 3: Schematic representation of the biological significance of parameters \(k\) and \(c\). Range of parameters: \(a\in(0,1)\), \(k\in(1,2)\), \(c\in(0,2)\). ## 4 Positive Equilibria and Their Types The intersections of two isoclines \(f(x,y)=0\), \(g(x,y)=0\) in the first quadrant is the point of positive equilibria. 
Denote the positive equilibria of system (6) as \(E_{i*}(x_{i},y_{i})\) (i=1, 2, 3), from \(f(x,y)=g(x,y)\), we obtain \[u(x)=A_{1}x^{3}+A_{2}x^{2}+A_{3}x+A_{4}, \tag{20}\] \[v(x)=u^{\prime}(x)=3A_{1}x^{2}+2A_{2}x+A_{3}, \tag{21}\] where \[A_{1}=km>0,\] \[A_{2}=(-ac-m+1)k+m=(A_{3}+k)k+m,\] \[A_{3}=-ac-k-m+1,\] \[A_{4}=c-1.\] Denote the discriminant of (21) as \(\Delta=4A_{2}^{2}-12A_{1}A_{3}\). When \(\Delta>0\), (21) has two real roots, which can be expressed as follows: \[x_{v1}=\frac{(ac+m-1)k-m-\sqrt{\Delta}}{3km},\quad x_{v2}=\frac{(ac+m-1)k-m+ \sqrt{\Delta}}{3km}.\] Let \(u(x)=0\), we have \[m=\frac{ackx^{2}+acx-kx^{2}+kx-c-x+1}{(-1+x)\left(kx+1\right)x}. \tag{22}\] Substituting (22) into \(\det(J(E))\) and \(v(x)\), we get \[\det(J(E))=-\frac{x\left(-1+x\right)b}{(kx+1)}v(x). \tag{23}\] The positive of system (6) is \((x_{i},y_{i})\) where \( y_{i}=\frac{1-x_{i}}{c}\). Let \( m_{1}\triangleq 1-ac-k\) and \( m_{2}\triangleq\frac{2ack+ac-k-1}{1+k}\). From Theorem 1 and 2, we know that \( 0<x(t)<1\) and \( 0<y(t)<1\). By a simple analysis, we can obtain the following theorem. **Theorem 5**: _The existence of positive equilibria for system (6) is shown below:_ 1. \(m=m_{1}\) _(Fig._ 4_(a))_ 1. _For_ \(0<c<1\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when_ \(0<k<k^{*}\)_._ 2. \(m>m_{1}\)__ 1. _For_ \(c>1\) _(Fig._ 4_(b)),_ 1. _If_ \(u(x_{v2})=0\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{3*}\) _when_ \(m>m_{2}\)_._ 2. _If_ \(u(x_{v2})<0\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(m>m_{2}\) _and_ \(k\geq k^{*}\)_._ 2. _System (_6_) has two positive equilibria_ \(E_{1*}\) _and_ \(E_{2*}\) _when_ \(m>m_{2}\) _and_ \(0<k<k^{*}\)_._ 3. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(m=m_{2}\)_._ 3. _System (_6_) has a unique positive equilibrium_ \(E_{1*}\) _when_ \(0<m<m_{2}\) _and_ \(k>k^{*}\)_._ 2. _For_ \(0<c\leq 1\) _(Fig._ 4_(c)),_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when_ \(0<k<k^{*}\) 3. \(0<m<m_{1}\) _(Fig._ 4_(d))_ 1. _For_ \(0<c<1\)_,_ 1. _System (_6_) has a unique positive equilibrium_ \(E_{2*}\) _when and_ \(0<k<k^{*}\)_._ Next, we analyze the types of positive equilibria. Since \(-\dfrac{x\left(-1+x\right)b}{\left(kx+1\right)c}>0\), we can easily determine the sign of \(\det(J(E_{*}))\) by (23). We conclude that \(\det(J(E_{1*}))<0\), \(\det(J(E_{2*}))>0\), \(\det(J(E_{3*}))=0\). Therefore \(E_{1*}\) is a saddle point. For \(E_{2*}\), we have \[\det(J(E_{2*}))=B_{1}B_{4}-B_{2}B_{3}>0.\] The signs of \(B_{2}\), \(B_{3}\), and \(B_{4}\) have been determined, and we can thus know that \(B_{1}<0\). Finally, we can determine that \(\operatorname{tr}(J(E_{2*}))=B_{1}+B_{4}<0\) by the above analysis. From \(\det(J(E_{2*}))>0\), \(\operatorname{tr}(J(E_{2*}))<0\), we know that \(E_{2*}\) is a stable node. Since \(\det(J(E_{3*}))=0\), the positive equilibrium point \(E_{3*}\) is clearly a degenerate equilibrium point. Next, we analyze the specific type of degenerate equilibrium point \(E_{3*}\). First, it is clear from Theorem 4.1 and Fig. 4(b) that if \(E_{3*}\) exists then the parametric condition needs to satisfy \(u(E)=v(E)=0\), where \(E=x_{v2}\). From this, we get \[a=\dfrac{3kmx^{2}-2kmx+2kx+2mx-k-m+1}{k^{2}mx^{4}+2kmx^{3}+k^{2}x^{2}+mx^{2}+2 kx+1}\triangleq a^{*},\] \[c=\dfrac{k^{2}mx^{4}+2kmx^{3}+k^{2}x^{2}+mx^{2}+2kx+1}{2kx+1}\triangleq c^{*}.\] Figure 4: The number of positive real roots of \(u(x)\). 
We move equilibrium \(E_{3*}\) to the origin by transforming \((X,Y)=(x-E,y-\dfrac{1-E}{c})\), make Taylor's expansion around the origin, and substitute \(a=a^{*}\), \(c=c^{*}\), then system (6) becomes \[\left\{\begin{aligned} \dfrac{\mathrm{d}X}{\mathrm{d}t}& =e_{10}X+e_{01}Y+e_{20}X^{2}+e_{11}XY,\\ \dfrac{\mathrm{d}Y}{\mathrm{d}t}&=d_{10}X+d_{01}Y+d_ {20}X^{2}+d_{02}Y^{2}+d_{11}XY+P_{5}(X,Y),\end{aligned}\right. \tag{24}\] where \[e_{10} =-bE,\quad e_{01}=-\dfrac{bE\left(Ek+1\right)^{2}\left(E^{2}m+1 \right)}{2Ek+1},\quad e_{20}=-b,\quad e_{11}=-\dfrac{b\left(Ek+1\right)^{2} \left(E^{2}m+1\right)}{2Ek+1},\] \[d_{10} =\dfrac{\left(Em+1\right)\left(2Ek+1\right)^{2}\left(-1+E\right) }{\left(Ek+1\right)^{4}\left(E^{2}m+1\right)^{2}},\quad d_{01}=\dfrac{\left(-1 +E\right)\left(Em+1\right)\left(2Ek+1\right)}{\left(Ek+1\right)^{2}\left(E^{2} m+1\right)},\] \[d_{20} =-\dfrac{\left(-1+E\right)\left(2Ek+1\right)k^{2}}{\left(Ek+1 \right)^{5}\left(E^{2}m+1\right)},\quad d_{02}=-Em-1,\quad d_{11}=-\dfrac{ \left(m+1\right)\left(2Ek+1\right)}{\left(Ek+1\right)^{2}\left(E^{2}m+1\right)}.\] We make the following transformations to system (24) \[\begin{bmatrix}X\\ Y\end{bmatrix}=\begin{bmatrix}e_{01}&e_{10}\\ -e_{10}&d_{10}\end{bmatrix}\begin{bmatrix}X_{4}\\ Y_{4}\end{bmatrix},\] and letting \(\tau=Lt\), where \[L= -\dfrac{E^{5}b\,k^{2}m+2E^{4}bkm+E^{3}b\,k^{2}+E^{3}bm-2E^{3}km+2E ^{2}bk+2E^{2}km}{\left(Ek+1\right)^{2}\left(E^{2}m+1\right)}\] \[+\dfrac{+2E^{2}k+E^{2}m-bE-2Ek-Em+E-1}{\left(Ek+1\right)^{2} \left(E^{2}m+1\right)},\] \begin{table} \begin{tabular}{c c c c c c} \hline \(m\sim m_{1}\) & \(c\) & \(u(E)\) & \(m\sim m_{2}\) & \(k\) & Positive Equilibria \\ \hline \(m=m_{1}\) & \(0<c<1\) & / & / & \(0<k<k^{*}\) & \(E_{2*}\) \\ \hline \multirow{4}{*}{\(m>m_{1}\)} & \multirow{4}{*}{\(c>1\)} & \multirow{4}{*}{\(u(E)<0\)} & \multirow{4}{*}{\(m>m_{2}\)} & \multirow{4}{*}{\(k\geq k^{*}\)} & \multirow{4}{*}{\(E_{1*}\)} \\ \cline{3-3} & & & & \(0<k<k^{*}\) & & \(E_{1*}\) \\ \cline{3-3} & & & & \(m=m_{2}\) & / & \(E_{1*}\) \\ \cline{3-3} & & & \(0<m<m_{2}\) & \(k>k^{*}\) & \(E_{1*}\) \\ \cline{3-3} & & & & \(0<c\leq 1\) & / & \(0<k<k^{*}\) & \(E_{2*}\) \\ \hline \multirow{4}{*}{\(0<m<m_{1}\)} & \multirow{4}{*}{\(0<c<1\)} & \multirow{4}{*}{\(/\)} & \multirow{4}{*}{\(/\)} & \multirow{4}{*}{\(0<k<k^{*}\)} & \multirow{4}{*}{\(E_{2*}\)} \\ \cline{3-3} & & & & & \(0<c<1\) & / & \(0<k<k^{*}\) \\ \cline{1-1} \cline{3-3} & & & & & \(0<c<1\) & / & \(0<k<k^{*}\) \\ \hline \end{tabular} _Note:_\(E_{1*}\) is a saddle, \(E_{2*}\) is a stable node, and \(E_{3*}\) is a saddle-node, where \(m_{1}=1-ac-k\), \(m_{2}=\frac{2ac+ac-k-1}{1+k}\), \(k^{*}=\frac{1}{a}-1\). \end{table} Table 1: Positive Equilibria of System (6). for which we will retain \(t\) to denote \(\tau\) for notational simplicity. We get \[\left\{\begin{aligned} \frac{\mathrm{d}X}{\mathrm{d}t}& =g_{20}X^{2}+g_{02}Y^{2}+g_{11}XY,\\ \frac{\mathrm{d}Y}{\mathrm{d}t}&=Y+f_{20}X^{2}+f_{0 2}Y^{2}+f_{11}XY+P_{6}(X,Y),\end{aligned}\right. \tag{25}\] where \[g_{20}=\frac{\left(E^{2}m+1\right)^{2}\left(Ek+1\right)^{3}\left(-1+E\right) \left(3E^{2}k^{2}m+3Ekm+k^{2}+m\right)E^{2}b^{2}}{H^{2}\left(2Ek+1\right)},\] and \[H= E^{5}bk^{2}m+2E^{4}bkm+E^{3}bk^{2}+E^{3}bm-2E^{3}km+2E^{2}bk\] \[+2E^{2}km-2E^{2}k-E^{2}m+bE+2Ek+Em-E+1,\] please see Appendix A for the rest of the parameters. We note that \(g_{20}<0\). Hence by Theorem 7.1 in Chapter 2 in [22], \(E_{3*}\) is a saddle-node. In summary, together with Theorem 5, we obtain Table 1. 
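As a quick numerical cross-check of Theorem 5 and of the classification summarised in Table 1, the positive equilibria can be computed as roots of the cubic \(u(x)\) in \((0,1)\) and then classified through the determinant and trace of the Jacobian (12). The sketch below (Python/NumPy assumed) does this for one illustrative parameter set with \(0<c<1\) and \(0<k<k^{*}\); the parameter values and helper names are our own choices, not taken from the paper.

```python
import numpy as np

def positive_equilibria(a, c, k, m):
    """Positive equilibria of system (6): real roots of u(x) in (0,1), with y = (1-x)/c."""
    A1, A2 = k * m, (-a * c - m + 1) * k + m
    A3, A4 = -a * c - k - m + 1, c - 1
    roots = np.roots([A1, A2, A3, A4])
    xs = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0]
    return [(x, (1.0 - x) / c) for x in xs]

def jacobian(x, y, a, b, c, k, m):
    """Jacobian matrix (12) of system (6) evaluated at (x, y)."""
    B1 = -b * (2.0 * x + c * y - 1.0)
    B2 = -b * c * x
    B3 = -y * (k / (1.0 + k * x) ** 2 + a + m * y)
    B4 = 1.0 / (1.0 + k * x) - (2.0 * m * y + a) * x - 2.0 * y
    return np.array([[B1, B2], [B3, B4]])

a, b, c, k, m = 0.8, 0.5, 0.5, 0.2, 0.5   # illustrative: 0 < c < 1 and k < k* = 1/a - 1
for x, y in positive_equilibria(a, c, k, m):
    J = jacobian(x, y, a, b, c, k, m)
    det, tr = np.linalg.det(J), np.trace(J)
    kind = "saddle" if det < 0 else ("stable node/focus" if tr < 0 else "unstable")
    print(f"E* = ({x:.4f}, {y:.4f}), det = {det:.4f}, tr = {tr:.4f} -> {kind}")
```

For this choice the scan returns a single interior equilibrium with positive determinant and negative trace, consistent with the stable node \(E_{2*}\) of Table 1.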
## 5 Global Stability of Positive Equilibria **Lemma 2**: _Bendixson-Dulac Criteria [14]: If in a single connected domain \(O\), there exists a function \(B(x,y)\in C^{1}(O)\), such that_ \[\frac{\partial(BF)}{\partial x}+\frac{\partial(BG)}{\partial y}\geq 0(\leq 0), \quad\forall(x,y)\in O,\] _and is not constant to zero in any subregion of O. Then system (8) does not have closed trajectories that all lie within O and singular closed trajectories with finitely many singular points. The function \(B(x,y)\) is often called the Dulac function._ **Theorem 6**: _System (6) cannot have any limit cycle in the interior of the positive quadrant \(R_{+}^{2}\)._ We use the _Bendixson-Dulac_ criteria [14] to prove Theorem 6. Construct a Dulac function \(B(x,y)=\dfrac{1}{xy}\). Then it is clear that \(B(x,y)\) is positive and so is smooth in a connected domain: \[\mathrm{Int}(R_{+}^{2})=\left\{(x,y)\in R^{2}\mid x>0,y>0\right\}.\] Let \[\Delta(x,y)=\frac{\partial(BF)}{\partial x}+\frac{\partial(BG)}{\partial y}=- \frac{b}{y}+\frac{-mx-1}{x}<0.\] Thus, \(\Delta(x,y)\) is neither identically zero nor changing sign in the interior of the positive quadrant of the \(xy\)-plane. Using the _Bendixson-Dulac_ criteria [14], system (6) has no closed trajectory, so there is no periodic solution in the first quadrant. The proof of Theorem 6 is finished. From Theorem 5 and Table 1, when system (6) satisfies \(0<c<1\), \(0<k<k^{*}\), the boundary equilibria are all unstable, and there is a unique stable positive equilibrium \(E_{2*}\) in system (6). Since Theorem 6 has proved that system (6) cannot have any limit cycle in the interior of the positive quadrant, we can obtain the following theorem. **Theorem 7**: _The locally stable positive equilibria \(E_{2*}\) is globally stable when \(0<c<1\), \(0<k<k^{*}\)._ ## 6 Bifurcation Analysis ### Transcritical bifurcation In proving Theorem 5, we found an interesting phenomenon: when \(u(1)=0\), i.e., \(k=k^{*}\), the positive equilibrium point \(E_{2*}\) will merge with the boundary equilibrium point \(E_{1}\). Also, the stability of the boundary equilibrium point \(E_{1}\) will change when the parameter \(k\) is in different intervals \((0,\frac{1}{\alpha}-1)\) and \((\frac{1}{\alpha}-1,+\infty)\), respectively. Moreover, we find a similar phenomenon for the boundary equilibrium point \(E_{2}\). From this, we conjecture that system (7) experiences transcritical bifurcations around \(E_{1}\) and \(E_{2}\). We proceed to a rigorous proof below. **Theorem 8**: _System (6) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(k_{TR}=k^{*}\) when \(u(E)<0\) and \(m\neq-a^{2}c+2ac-1\) (Fig. 5)._ From Theorem 4, we know that the eigenvalues of \(J(E_{1})\) are \(\lambda_{1}^{E_{1}}=-b\), \(\lambda_{2}^{E_{1}}=0\) if \(k=k_{TR}=k^{*}\). Now, let \(\mathbf{V_{1}}=(v_{1},v_{2})^{T}\) and \(\mathbf{W_{1}}=(w_{1},w_{2})^{T}\) be the eigenvectors of \(J(E_{1})\) and \(J^{T}(E_{1})\) corresponding to Figure 5: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. System (6) undergoes a transcritical bifurcation around \(E_{1}\). \(\lambda_{1}^{E_{1}}=0\), respectively. By calculating, we obtain \[\mathbf{V_{1}}=\begin{bmatrix}v_{1}\\ v_{2}\end{bmatrix}=\begin{bmatrix}-c\\ 1\end{bmatrix},\mathbf{W_{1}}=\begin{bmatrix}w_{1}\\ w_{2}\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix}. 
\tag{26}\] We assume that \[Q(x,y)=\begin{bmatrix}F(x,y)\\ G(x,y)\end{bmatrix}=\begin{bmatrix}bx\left(-cy-x+1\right)\\ y\left(\dfrac{1}{kx+1}-y-ax-mxy\right)\end{bmatrix}.\] Furthermore, \[Q_{k}(E_{1};k_{TR})=\begin{bmatrix}\dfrac{\partial F}{\partial k}\\ \dfrac{\partial G}{\partial k}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\] \[DQ_{k}(E_{1};k_{TR})\mathbf{V_{1}}=\left[\begin{bmatrix}0\\ \dfrac{2yak}{\left(kx+1\right)^{3}}-\dfrac{y}{\left(kx+1\right)^{2}}-\dfrac{x }{\left(kx+1\right)^{2}}\end{bmatrix}\right]\Bigg{|}_{(E_{1};k_{TR})}\begin{bmatrix} -c\\ 1\end{bmatrix}=\begin{bmatrix}0\\ -a^{2}\end{bmatrix},\] \[D^{2}Q(E_{1};k_{TR})(\mathbf{V_{1}},\mathbf{V_{1}})=\begin{bmatrix}\dfrac{ \partial^{2}F}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}F}{\partial x \partial y}v_{1}v_{2}+\dfrac{\partial^{2}F}{\partial y^{2}}v_{2}^{2}\\ \dfrac{\partial^{2}G}{\partial x^{2}}v_{1}^{2}+2\dfrac{\partial^{2}G}{\partial x \partial y}v_{1}v_{2}+\dfrac{\partial^{2}G}{\partial y^{2}}v_{2}^{2}\end{bmatrix} \Bigg{|}_{(E_{1};k_{TR})}=\begin{bmatrix}0\\ \left(-2a^{2}+4a\right)c-2m-2\end{bmatrix}.\] Thus, we have \[\mathbf{W_{1}}^{T}Q_{k}(E_{1};k_{TR})=0,\] \[\mathbf{W_{1}}^{T}\left[DQ_{k}(E_{1};k_{TR})\mathbf{V_{1}}\right]=-a^{2}\neq 0,\] \[\mathbf{W_{1}}^{T}\left[D^{2}Q(E_{1};c_{TR})(\mathbf{V_{1}},\mathbf{V_{1}}) \right]=\left(-2a^{2}+4a\right)c-2m-2\neq 0.\] Based on _Sotomayor's Theorem_[20], all the transversality conditions for system (6) to experience a transcritical bifurcation are satisfied. Consequently, system (6) undergoes a transcritical bifurcation around \(E_{1}\) at the bifurcation parameter threshold \(k_{TR}=k^{*}\). **Theorem 9**.: _System (6) undergoes a transcritical bifurcation around \(E_{2}\) at the bifurcation parameter threshold \(c_{TR}=1\) when \(u(E)<0\) and \(m\neq 1-a-k\) (Fig. 6)._ _Proof._ From Theorem 4, we know that the eigenvalues of \(J(E_{1})\) are \(\lambda_{1}^{E_{2}}=-1\), \(\lambda_{2}^{E_{2}}=0\) if \(c=c_{TR}=1\). Now, let \(\mathbf{V_{2}}=(v_{3},v_{4})^{T}\) and \(\mathbf{W_{2}}=(w_{3},w_{4})^{T}\) be the eigenvectors of \(J(E_{2})\) and \(J^{T}(E_{2})\) corresponding to \(\lambda_{2}^{E_{2}}=0\), respectively. By calculating, we obtain \[\mathbf{V_{1}}=\begin{bmatrix}v_{3}\\ v_{4}\end{bmatrix}=\begin{bmatrix}-\dfrac{1}{a+k+m}\\ 1\end{bmatrix},\mathbf{W_{1}}=\begin{bmatrix}w_{3}\\ w_{4}\end{bmatrix}=\begin{bmatrix}1\\ 0\end{bmatrix}. \tag{27}\] Furthermore, \[Q_{c}(E_{2};c_{TR})=\begin{bmatrix}\dfrac{\partial F}{\partial c}\\ \dfrac{\partial G}{\partial c}\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\] \[DQ_{c}(E_{2};c_{TR})\mathbf{V_{2}}=\left[\begin{array}{c}-by\ -bx\\ 0\ Based on _Sotomayor's Theorem_[Perko, 2013], all the transversality conditions for system (6) to experience a transcritical bifurcation are satisfied, so system (6) undergoes a transcritical bifurcation around \(E_{2}\) at the bifurcation parameter threshold \(c_{TR}=1\). ### Pitchfork bifurcation According to Theorem 9, the third transversality condition about transcritical bifurcation on \(E_{2}\) will equal \(0\) when \(m=1-a-k\), i.e., \(m=m^{**}\). Also by Theorem 4, \(E_{2}\) is a degenerate stable node when \(m=m^{**}\). We select \(a=0.2\), \(b=0.2\), \(k=0.2\), \(m=0.6\), \(c=1\pm 0.1\). By numerical simulation, we find that the number of equilibria near \(E_{2}\) undergoes a \(1-1-3\) transformation. From this we conclude that system (6) will experience a pitchfork bifurcation around \(E_{2}\) when \(c=c_{TR}\) and \(m=m^{**}\) (Fig. 7). 
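Before turning to the interior fold, we note that the bifurcation structure can also be explored numerically by scanning one parameter and counting the interior equilibria, i.e., the roots of \(u(x)\) in \((0,1)\). A change in the count near \(c=1\) (or \(k=k^{*}\)) reflects the boundary transcritical/pitchfork bifurcations discussed above, while a jump from two equilibria to none signals the saddle-node analysed next. The following sketch is a crude illustration with parameter values of our own choosing.

```python
import numpy as np

def n_interior_equilibria(a, c, k, m):
    """Count the real roots of u(x) in (0,1); each gives a coexistence state (x, (1-x)/c)."""
    coeffs = [k * m, (-a * c - m + 1) * k + m, -a * c - k - m + 1, c - 1]
    roots = np.roots(coeffs)
    return sum(1 for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0)

# Crude scan in the interspecific competition rate c (a, k, m fixed, illustrative values).
a, k, m = 0.3, 2.0, 0.1            # 0 < k < k* = 1/a - 1
for c in np.linspace(0.90, 1.70, 9):
    print(f"c = {c:.2f} -> {n_interior_equilibria(a, c, k, m)} interior equilibria")
```

For this particular choice the printed count rises from one to two as \(c\) crosses \(c_{TR}=1\) and drops to zero at larger \(c\), where the two interior equilibria collide and disappear.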
### Saddle-node bifurcation Under the condition \(m>m_{1}\), \(c>1\), and \(0<k<k^{*}\), we note that when \(u(E)>0\), \(u(E)=0\), and \(u(E)<0\), system (6) has \(0\), \(1\), and \(2\) positive equilibria, respectively. Therefore we consider system (6) undergoing a saddle-node bifurcation around the positive equilibrium point \(E_{3*}\). We selected the toxic release rate \(m\) as Figure 7: System (6) undergoes a pitchforkbifurcation around \(E_{2}\) when \(c=c_{TR}\) and \(m=1-a-k\). the bifurcation parameter. By calculating \(u(E)=v(E)=0\), we obtain the bifurcation parameter threshold \(m=\frac{-E^{2}k^{2}+2Eck-2Ek+c-1}{E^{2}(E^{2}k^{2}+2Ek+1)}\triangleq m_{SN}\), and \(a=\frac{E^{4}k^{2}-2E^{3}k^{2}+2E^{3}k+3E^{2}ck+E^{2}k^{2}-4E^{2}k-2Eck+E^{2}+2Ec +2Ek-2E-c+1}{c\,E^{2}(E^{2}k^{2}+2Ek+1)}\triangleq a_{1}\). Next, we use _Sotomayor's Theorem_[2013] to verify that the transversality conditions for saddle-node bifurcation are satisfied. **Theorem 10**.: _System (6) undergoes a saddle-node bifurcation around \(E_{3*}\) at the bifurcation parameter threshold \(m=m_{SN}\) when \(m>m_{1}\), \(0<k<k^{*}\), \(c>1\), and \(c\neq\frac{E^{3}k^{3}+3E^{2}k^{2}+3Ek+1}{3E^{2}k^{2}+3Ek+1}\) (Fig. 8, Fig. 9)._ _Proof_. According to (12), we know that the Jacobi matrix of the positive equilibrium point \(E_{3*}\) can be expressed in the following form by substituting \(m=m_{SN}\), \(a=a_{1}\), and one of the eigenvalues is \(\lambda=0\). \[J(E_{3*})=\left[\begin{array}{c}-bE\\ \frac{(E^{3}k^{2}+(-k^{2}+2k)E^{2}+(1+(2c-2)k)E+c-1)(-1+E)}{E(Ek+1)^{2}c^{2}} \end{array}\frac{\left(E^{3}k^{2}+\left(-k^{2}+2k\right)E^{2}+(1+(2c-2)k)E+c-1 \right)(-1+E)}{cE(Ek+1)^{2}}\end{array}\right].\] Now, let \(\mathbf{V_{3}}=(v_{5},v_{6})^{T}\) and \(\mathbf{W_{3}}=(w_{5},w_{6})^{T}\) be the eigenvectors of \(J(E_{3*})\) and \(J^{T}(E_{3*})\) corresponding to Figure 8: Red, green, pink, and orange points indicate stable node, saddle, saddle-node, and unstable node (source), respectively. System (6) undergoes a saddle-node bifurcation around \(E_{3*}\). \(\lambda=0\), respectively. By calculating, we obtain \[\mathbf{V_{3}}=\begin{bmatrix}v_{5}\\ v_{6}\end{bmatrix}=\begin{bmatrix}-c\\ 1\end{bmatrix},\mathbf{W_{3}}=\begin{bmatrix}w_{5}\\ w_{6}\end{bmatrix}=\begin{bmatrix}\frac{\left(E^{3}k^{2}+\left(-k^{2}+2k\right) E^{2}+\left(1+\left(2c-2\right)k\right)E+c-1\right)\left(-1+E\right)}{E^{2}b \,c^{2}\left(Ek+1\right)^{2}}\\ 1\end{bmatrix}. 
\tag{28}\]

Furthermore, \[Q_{m}(E_{3*};m_{SN})=\begin{bmatrix}\frac{\partial F}{\partial m}\\ \frac{\partial G}{\partial m}\end{bmatrix}=\begin{bmatrix}0\\ -\frac{\left(1-E\right)^{2}E}{c^{2}}\end{bmatrix},\] \[D^{2}Q(E_{3*};m_{SN})(\mathbf{V_{3}},\mathbf{V_{3}})=\begin{bmatrix}\frac{\partial^{2}F}{\partial x^{2}}v_{5}^{2}+2\frac{\partial^{2}F}{\partial x\partial y}v_{5}v_{6}+\frac{\partial^{2}F}{\partial y^{2}}v_{6}^{2}\\ \frac{\partial^{2}G}{\partial x^{2}}v_{5}^{2}+2\frac{\partial^{2}G}{\partial x\partial y}v_{5}v_{6}+\frac{\partial^{2}G}{\partial y^{2}}v_{6}^{2}\end{bmatrix}=\begin{bmatrix}0\\ \frac{2\left(-1+E\right)\left(E^{3}k^{3}-3k^{2}\left(c-1\right)E^{2}-3k\left(c-1\right)E-c+1\right)}{\left(Ek+1\right)^{3}E^{2}}\end{bmatrix}.\] Thus, we have \[\mathbf{W_{3}}^{T}Q_{m}(E_{3*};m_{SN})=-\frac{\left(1-E\right)^{2}E}{c^{2}}\neq 0,\] \[\mathbf{W_{3}}^{T}\left[D^{2}Q(E_{3*};m_{SN})(\mathbf{V_{3}},\mathbf{V_{3}})\right]=\frac{2\left(-1+E\right)\left(E^{3}k^{3}-3k^{2}\left(c-1\right)E^{2}-3k\left(c-1\right)E-c+1\right)}{\left(Ek+1\right)^{3}E^{2}}\neq 0.\] According to _Sotomayor's Theorem_ [Perko, 2013], all the transversality conditions for system (6) to experience a saddle-node bifurcation are satisfied, so system (6) undergoes a saddle-node bifurcation around \(E_{3*}\) at the bifurcation parameter threshold \(m=m_{SN}\). \(\blacksquare\)

_Remark 2_.: This section discusses all possible bifurcations of system (6). Through the above analysis, we find that varying the value of the fear effect \(k\) for the non-toxic species or the interspecific competition rate \(c\) for the toxic species causes system (6) to undergo a transcritical bifurcation on the boundary. When a particular value is taken for the toxin release rate \(m\), this may also cause system (6) to experience a pitchfork bifurcation on the boundary. In addition, the parameter \(m\) will also lead system (6) to undergo a saddle-node bifurcation in the first quadrant. Thus we can conclude that the fear effect and the toxic release rate can cause complex dynamics in the classical Lotka-Volterra competition model.

## 7 Effect of Toxic Release Rate and Fear

Through the studies in Section 6, we learned that the toxic release rate \(m\) and the fear effect \(k\) produce rich bifurcations in system (6). Then returning to the biological significance, how exactly do \(m\) and \(k\) affect the species? Observing Table 1, we note that whenever \(k\) falls in the interval \((0,k^{*})\), there must be a stable positive equilibrium point \(E_{2*}\) in system (6). Therefore we conclude that, regardless of the value of the toxic release rate, the only factor that can affect the survival of the non-toxic species is the competition fear. Next, we use numerical simulation to verify this through time-course plots of solutions.

**Example 7.1**.: For \(m>m_{1}\) and \(0<c<1\), we select \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.4\), \(m=0.5\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 10). When \(m>m_{1}\), a fear effect satisfying \(k>k^{*}\) leads to the extinction of the non-toxic species (\(y\)), whereas the species survives when \(0<k<k^{*}\).

**Example 7.2**.: For \(m>m_{1}\) and \(c>1\), we select \(a=0.3\), \(b=0.5\), \(c=1.1\), \(k_{1}=1.1\), \(k_{2}=4\), \(m=0.15\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 11).
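The time-course plots referred to in Examples 7.1 and 7.2 can be reproduced with any standard ODE integrator. The sketch below is our own illustration for Example 7.1 (the initial condition is an arbitrary choice on our part); swapping in the parameter values of Example 7.2 reproduces that setting.

```python
# Minimal sketch (our own illustration): time courses of system (6) for Example 7.1,
# a=0.8, b=0.5, c=0.5, m=0.5, comparing fear levels k1=0.2 and k2=0.4 (k* = 1/a - 1 = 0.25).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, m = 0.8, 0.5, 0.5, 0.5

def rhs(t, z, k):
    x, y = z
    dx = b * x * (1 - x - c * y)
    dy = y * (1.0 / (k * x + 1) - y - a * x - m * x * y)
    return [dx, dy]

t_span, t_eval = (0, 200), np.linspace(0, 200, 2000)
fig, axes = plt.subplots(1, 2, figsize=(9, 3))
for ax, k in zip(axes, (0.2, 0.4)):
    sol = solve_ivp(rhs, t_span, [0.5, 0.5], args=(k,), t_eval=t_eval)
    ax.plot(sol.t, sol.y[0], label="x")
    ax.plot(sol.t, sol.y[1], label="y")
    ax.set(title=f"k = {k}", xlabel="t")
    ax.legend()
plt.tight_layout()
plt.show()
```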
We note that although the toxic species is subject to an interspecific competition rate \(c>1\), it still survives by releasing toxins and by inducing fear in its competitor.

**Example 7.3**.: For \(0<m<m_{1}\), we select \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.3\), \(m=0.1\). Through numerical simulations, we obtain time-course plots of solutions (Fig. 12).

_Remark 3_.: Comparing Examples 7.1-7.3, we find that the non-toxic species can survive regardless of the level of the toxic release rate. When the fear effect on the non-toxic species is too large, however, the non-toxic species goes extinct. The numerical simulations effectively verify the correctness of the above analysis.

Figure 10: \(a=0.8\), \(b=0.5\), \(c=0.5\), \(k_{1}=0.2\), \(k_{2}=0.4\), \(m=0.5\). (a) The non-toxic species survives when \(0<k_{1}<k^{*}\). (b) The non-toxic species becomes extinct when \(k_{2}>k^{*}\).

## 8 The PDE Case

We will now cover several preliminary concepts that will pave the way for proving the global existence of solutions to (32). To achieve this objective, it is sufficient to establish a uniform estimate on the \(\mathbb{L}^{p}\) norms of the right-hand side of (32), where \(p\) exceeds \(n/2\), with \(n\) the spatial dimension. By doing so, we can then apply classical theory, as outlined in [1], to guarantee global existence. In this context, the standard norms in the spaces \(\mathbb{L}^{p}(\Omega)\), \(\mathbb{L}^{\infty}(\Omega)\), and \(\mathbb{C}(\overline{\Omega})\) are denoted as follows:

\[\|u\|_{p}^{p}=\int_{\Omega}|u(x)|^{p}\,dx,\qquad\|u\|_{\infty}=\operatorname*{ess\,sup}_{x\in\Omega}|u(x)|,\qquad\|u\|_{\mathbb{C}(\overline{\Omega})}=\max_{x\in\overline{\Omega}}|u(x)|.\]
**Theorem 11**.: _The system (30) admits a unique, classical solution \((u,v)\) on \([0,T_{\max}]\times\Omega.\) If \(T_{\max}<\infty\) then_ \[\lim_{t\nearrow T_{\max}}\Big{\{}\left\|u(t,.)\right\|_{\infty}+\left\|v(t,.) \right\|_{\infty}\Big{\}}=\infty, \tag{31}\] _where \(T_{\max}\) denotes the eventual blow-up time in \(\mathbb{L}^{\infty}(\Omega).\)_ The next result follows from the application of standard theory [Kishimoto & Weinberger, 1985]. **Theorem 12**.: _Consider the reaction-diffusion system (30). For spatially homogenous initial data \(u_{0}\equiv c,v_{0}\equiv d\), with \(c,d>0\), then the dynamics of (30) and its resulting kinetic (ODE) system, when \(d_{1}=d_{2}=0\) in (30), are equivalent._ Our current aim is to explore the scenario where the fear function exhibits spatial heterogeneity. This perspective finds motivation in various ecological and sociological contexts. For instance, it is quite common for prey to exhibit higher fear levels in proximity to a predator's lair but lower fear levels in regions of refuge, as mentioned in [Zhang _et al._, 2019]. Additionally, areas with high population density may lead to reduced fear due to group defense mechanisms, as discussed in [Sasmal & Takeuchi, 2020]. Given these considerations, it is plausible to assume that the fear coefficient \(k\) is not a constant but varies across the spatial domain \(\Omega\), i.e., \(k=k(x).\) The specific form of \(k(x)\) may differ depending on the particular application, aligning with the concept of the Landscape of Fear (LOF) [Brown _et al._, 1999]. Consequently, we now consider the following spatially explicit version of (6), featuring a heterogeneous fear function \(k(x)\), which results in the following reaction-diffusion system: \[\left\{\begin{aligned} & u_{t}=d_{1}\Delta u+bu\left(1-u-cv\right), \quad x\in\Omega,\\ & v_{t}=d_{2}\Delta v+v\left(\frac{1}{1+k(x)u}-v-au-muv\right), \quad x\in\Omega,\\ &\frac{\partial u}{\partial\nu}=\frac{\partial v}{\partial\nu}= 0,\quad\text{on}\quad\partial\Omega.\\ & u(x,0)=u_{0}(x)\equiv c>0,\quad v(x,0)=v_{0}(x)\equiv d>0,\\ \end{aligned}\right. \tag{32}\] Furthermore, we impose the following restrictions on the fear function \(k(x)\), \[\begin{aligned} &(i)\quad k(x)\in C^{1}(\Omega),\\ &(ii)\quad k(x)\geq 0,\\ &(iii)\quad\text{If }k(x)\equiv 0\text{ on }\Omega_{1}\subset\Omega,\text{ then }|\Omega_{1}|=0.\\ &(iv)\quad\text{If }k(x)\equiv 0\text{ on }\cup_{i=1}^{n}\Omega_{i}\subset\Omega,\text{ then }\Sigma_{i=1}^{n}|\Omega_{i}|=0.\end{aligned} \tag{33}\] _Remark 4_.: If \(k(x)\equiv 0\) on \(\Omega_{1}\subset\Omega\), with \(|\Omega_{1}|>\delta>0\), or \(q(x)\equiv 0\) on \(\cup_{i=1}^{n}\Omega_{i}\subset\Omega\), with \(\Sigma_{i=1}^{n}|\Omega_{i}|>\delta>0\), that is, on non-trivial parts of the domain, the analysis is notoriously difficult, as one now is dealing with a _degenerate_ problem. See [Du, 2002A,B] for results on this problem. This case is not in the scope of the current manuscript. Since the nonlinear right hand side of (32) is continuously differentiable on \(\mathbb{R}^{+}\times\)\(\mathbb{R}^{+}\), then for any initial data in \(\mathbb{C}\left(\overline{\Omega}\right)\) or \(\mathbb{L}^{p}(\Omega),\ p\in(1,+\infty)\), it is standard to estimate the \(\mathbb{L}^{p}-\)norms of the solutions and thus deduce global existence. The standard theory will apply even in the case of a bonafide fear function \(k(x)\) because due to our assumptions on the form of \(k\), standard comparison arguments will apply [Gilbarg & Trudinger, 1977]. 
Thus applying the classical methods above, via Theorem 11, and Lemmas 3-4, we can state the following lemmas: **Lemma 5**.: _Consider the reaction-diffusion system (32), for \(k(x)\) such that the assumptions via (33) hold. Then, the solutions to (32) are non-negative as long as they initiate from positive initial conditions._ **Lemma 6**.: _Consider the reaction-diffusion system (32). For \(k(x)\) such that the assumptions via (33) hold. The solutions to (32) are classical. That is for \((u_{0},v_{0})\in\mathbb{L}^{\infty}(\Omega)\), \((u,v)\in C^{1}(0,T;C^{2}(\Omega))\), \(\forall T\)._ Our goal in this section is to investigate the dynamics of (32). Herein, we will use the comparison technique and compare it to the ODE cases of classical competition or the constant fear function case, where the dynamics are well known. _Remark 5_.: This section's analysis primarily focuses on the choice of spatially homogenous (flat) initial data. Let's define some PDE systems, \[\begin{split}&\overline{u}_{t}=d_{1}\overline{u}_{xx}+b\overline{ u}\left(1-\overline{u}-c\overline{v}\right),\\ &\overline{v}_{t}=d_{2}\overline{v}_{xx}+\overline{v}\left(1-v- au-muv\right),\end{split} \tag{34}\] \[\begin{split}&\widehat{u}_{t}=d_{1}\widehat{u}_{xx}+b\widehat{u} \left(1-\widehat{u}-c\widehat{v}\right),\\ &\widehat{v}_{t}=d_{2}\widehat{v}_{xx}+\widehat{v}\left(\frac{1 }{1+\widehat{\mathbf{k}}\widehat{u}}-\widehat{v}-a\widehat{u}-m\widehat{u} \widehat{v}\right),\end{split} \tag{35}\] \[\begin{split}&\widetilde{u}_{t}=d_{1}\widetilde{u}_{xx}+bu\left(1 -\widetilde{u}-c\widehat{v}\right),\\ &\widetilde{v}_{t}=d_{2}\widetilde{v}_{xx}+\widehat{v}\left(\frac {1}{1+\widehat{\mathbf{k}}\widehat{u}}-\widehat{v}-a\widehat{u}-m\widehat{u} \widehat{v}\right),\end{split} \tag{36}\] \[\begin{split}&\tilde{u}_{t}=d_{1}\tilde{u}_{xx}+b\tilde{u}\left(1 -\tilde{u}-c\widehat{v}\right),\\ &\tilde{v}_{t}=d_{2}\tilde{v}_{xx}+\widehat{v}\left(\frac{1}{1+ \widehat{\mathbf{k}}}-\tilde{v}-a\tilde{u}-m\tilde{u}\widehat{v}\right),\end{split} \tag{37}\] where \[\widehat{\mathbf{k}}=\min_{x\in\Omega}k(x),\qquad\widetilde{\mathbf{k}}=\max_ {x\in\Omega}k(x). \tag{38}\] We assume Neumann boundary conditions for all of the reaction diffusion systems (34)-(37). Also, we prescribe spatially homogenous (flat) initial conditions in each system: \(u(x,0)=u_{0}(x)\equiv c>\ 0,\quad v(x,0)=v_{0}(x)\equiv d>0\). **Theorem 13**.: _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\), as well as the reaction-diffusion systems (35)-(36). Then the following point-wise comparison holds,_ \[\widetilde{v}\leq v\leq\widehat{v}.\] _Proof._ From the positivity of the solutions to reaction-diffusion systems (35)-(37) and via comparison of (32) to the logistic equation to get upper bound for second species, i.e., \(v\leq 1\). Hence, we have \[\frac{1}{1+\widetilde{\mathbf{k}}}\leq\frac{1}{1+\widetilde{\mathbf{k}}}\; \widetilde{u}\leq\frac{1}{1+k(x)u}\leq\frac{1}{1+\widetilde{\mathbf{k}}}\; \widehat{u}\leq 1,\quad x\in\Omega.\] Hence, the result follows from the standard comparison theory [Gilbarg & Trudinger, 1977]. 
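The behaviour of (32) under a heterogeneous fear function can also be explored directly by numerical integration. The following method-of-lines sketch is our own minimal illustration: the fear function \(k(x)=1+\sin^{2}x\), the parameter values, and the flat initial data are placeholder choices consistent with (33), not values taken from the analysis above, and this is not the MATLAB pdepe implementation used later in the paper.

```python
# Minimal method-of-lines sketch (our own illustration) for the reaction-diffusion
# system (32) on Omega = [0, pi] with homogeneous Neumann boundary conditions.
# The fear function k(x) and all parameter values below are placeholder choices.
import numpy as np
from scipy.integrate import solve_ivp

n = 200                                   # number of grid points
x = np.linspace(0, np.pi, n)
h = x[1] - x[0]
d1, d2, a, b, c, m = 1.0, 1.0, 0.3, 0.2, 1.1, 0.15
kx = 1.0 + np.sin(x) ** 2                 # heterogeneous fear function k(x)

def laplacian(w):
    lap = np.empty_like(w)
    lap[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / h**2
    lap[0] = 2 * (w[1] - w[0]) / h**2     # reflecting (Neumann) endpoints
    lap[-1] = 2 * (w[-2] - w[-1]) / h**2
    return lap

def rhs(t, z):
    u, v = z[:n], z[n:]
    du = d1 * laplacian(u) + b * u * (1 - u - c * v)
    dv = d2 * laplacian(v) + v * (1.0 / (1.0 + kx * u) - v - a * u - m * u * v)
    return np.concatenate([du, dv])

u0 = np.full(n, 0.4)                      # flat initial data (our choice)
v0 = np.full(n, 0.8)
sol = solve_ivp(rhs, (0, 200), np.concatenate([u0, v0]), method="BDF", t_eval=[200])
u_final, v_final = sol.y[:n, -1], sol.y[n:, -1]
print("u range:", u_final.min(), u_final.max(), "| v range:", v_final.min(), v_final.max())
```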
### Attraction to boundary or interior equilibrium

**Theorem 14**: _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction_ \[\widehat{\mathbf{k}}>\frac{1}{a}-1, \tag{39}\] _there exists some flat initial data such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\)._

_Proof._ Consider the reaction-diffusion system given by equation (35). Since the parameter \(\widehat{\mathbf{k}}\) satisfies the specified condition, we can apply Theorem 4. This allows us to select initial values \([u_{0},v_{0}]\) where \(v_{0}\) is significantly smaller than \(u_{0}\) pointwise, resulting in the convergence of \((\widehat{u},\widehat{v})\) towards \((1,0)\). Furthermore, for the reaction-diffusion system given by equation (36), due to the inequality \(\widetilde{\mathbf{k}}>\widehat{\mathbf{k}}\), the parameter \(\widetilde{\mathbf{k}}\) also adheres to the imposed conditions. Consequently, Theorem 4 is applicable again, leading to the conclusion that for the same initial values \([u_{0},v_{0}]\) with \(v_{0}\) much smaller than \(u_{0}\) pointwise, the system \((\widetilde{u},\widetilde{v})\) converges to \((1,0)\). Moreover, employing Theorem 13, we establish the relation \(\widetilde{v}\leq v\leq\widehat{v}\). This implies: \[\lim_{t\to\infty}(\widetilde{u},\widetilde{v})\leq\lim_{t\to\infty}(u,v)\leq\lim_{t\to\infty}(\widehat{u},\widehat{v}),\] and consequently: \[(1,0)\leq\lim_{t\to\infty}(u,v)\leq(1,0).\] By employing a squeezing argument, as \(t\) tends towards infinity, for initial data \([u_{0},v_{0}]\), we can deduce the uniform convergence of solutions of (32). This leads to the assertion that \[(u,v)\to(1,0)\] as \(t\) approaches infinity.

**Theorem 15**: _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\), and \(c>1\), there exists some flat initial data such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._

_Proof._ Consider the reaction-diffusion system (35). Since \(c>1\) satisfies the parametric restriction, from Theorem 4, we can pick some initial data \([u_{1},v_{1}]\) (\(u_{1}\ll v_{1}\) pointwise) such that \[(\widehat{u},\widehat{v})\to(0,1).\] Similarly, considering the reaction-diffusion system (36), from Theorem 4, for the same set of initial data \([u_{1},v_{1}]\) (\(u_{1}\ll v_{1}\) pointwise), we have \[(\widetilde{u},\widetilde{v})\to(0,1).\] Moreover, on using Theorem 13 we have \[\widetilde{v}\leq v\leq\widehat{v},\] which entails \[\lim_{t\to\infty}(\widetilde{u},\widetilde{v})\leq\lim_{t\to\infty}(u,v)\leq\lim_{t\to\infty}(\widehat{u},\widehat{v}),\] and subsequently \[(0,1)\leq\lim_{t\to\infty}(u,v)\leq(0,1).\] Now using a squeezing argument, in the limit that \(t\to\infty\), for initial data \([u_{1},v_{1}]\) (\(u_{1}\ll v_{1}\) pointwise), we have uniform convergence of solutions of (32), i.e., \[(u,v)\to(0,1)\] as \(t\to\infty\).

Figure 14: Numerical simulation of (32) for the case of competition exclusion in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.4,b=0.2,c=2.1\) and \(m=0.4\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.4,2]\) (b) \([u_{0},v_{0}]=[0.4,1.2]\).

Figure 13: Numerical simulation of (32) for the case of competition exclusion in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.3,b=0.2,c=1.1\) and \(m=0.15\).
The initial data are chosen (a) \([u_{0},v_{0}]=[2,0.4]\) (b) \([u_{0},v_{0}]=[1.2,0.4]\). _Remark 6._ We see that via theorems 14 & 15, that attraction to boundary equilibrium is possible for certain initial data. For other (positive) initial data, depending on parametric restrictions, one could have attraction to an interior state as well. ### A case of strong competition **Theorem 16.** _For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction_ \[m>1-ac-{\bf k},\quad c>1,\quad u(E)<0,\quad m>\frac{2ac{\bf k}+ac-{\bf k}+1}{1+{ \bf k}},\quad{\bf k}\geq\frac{1}{a}-1,\] _for \({\bf k}=\widehat{{\bf k}},\widetilde{{\bf k}}\), and \(u\) is a cubic polynomial given by (20). Then there exists sufficiently small initial data \([u_{0}(x),v_{0}(x)]\)\((v_{0}(x)<<u_{0}(x)\) pointwise), such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((1,0)\) as \(t\to\infty\), while there exits also sufficiently large initial data \([u_{1}(x),v_{1}(x)]\)\((u_{1}(x)<<v_{1}(x)\) pointwise) for which the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\)._ _Proof._ Consider the reaction-diffusion system (35). Since the \(\widehat{{\bf k}}\) satisfies the parametric restriction, from Theorem 5, there exists a interior saddle equilibrium \(E_{1*}\) to the kinetic (ODE) system (35). On making use of the stable manifold theorem [20], i.e., \(\exists\;\;W^{1}_{s}(E_{1*})\in{\cal C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen right to \(W^{1}_{s}(E_{1*})\) the solution \((\widetilde{u},\widetilde{v})\to(1,0)\) and for initial data chosen left to \(W^{1}_{s}(E_{1*})\), \((\widetilde{u},\widetilde{v})\to(0,1)\). Moreover, notice that \(\frac{1}{1+\widetilde{{\bf k}}u}\leq\frac{1}{1+\widetilde{{\bf k}}u}\), we have that for the kinetic (ODE) system (36), we still remain in the strong competition case, and via standard theory again, \(\exists\;\;W_{s}(E_{1**})\in{\cal C}^{1}\) separatrix, such that for initial data \((\widetilde{u}_{0},\widetilde{v}_{0})\) chosen left to \(W_{s}(E_{1**})\) the solution \((\widetilde{u},\widetilde{v})\to(0,1)\) and for initial data chosen right to \(W_{s}(E_{1**})\), \((\widetilde{u},\widetilde{v})\to(1,0)\). Here \(E_{1**}\) is the interior saddle equilibrium to the kinetic (ODE) system for (36). Now since \(\frac{1}{1+\widetilde{{\bf k}}u}\leq\frac{1}{1+\widetilde{{\bf k}}u}\), the \(v\) component of \(E_{1**}\) is more than the \(v\) component of \(E_{1*}\). Now using the \({\cal C}^{1}\) property of the separatricies \(W^{1}_{s}(E_{1*}),W_{s}(E_{1**})\), we have the existence of a wedge \(\mathbb{V}\) emanating from \(E_{1*}\), s.t within \(\mathbb{V}\) we have \(W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\). Note via Lemma 13 we have \(\widetilde{v}\leq v\leq\widehat{v}\). Let us consider positive initial data \((u_{0},v_{0})\) chosen small enough, within \(\mathbb{V}\) s.t. \((u_{0},v_{0})<W^{1}_{s}(E_{1*})\leq W_{s}(E_{1**})\), we will have \[\Big{\{}(1,0)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(1,0)\Big{\}}.\] On the other hand, for sufficiently large initial data \((u_{1},v_{1})\) via an analogous construction we will have \[\Big{\{}(0,1)\Big{\}}\leq\Big{\{}(u,v)\Big{\}}\leq\Big{\{}(0,1)\Big{\}}.\] This proves the theorem. 
### The weak competition case

Theorems 5 and 7, along with the numerical simulations (Fig. 18), motivate the following conjecture:

Conjecture 1. For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction \[m>1-ac-{\bf k},\quad 0<c<1,\quad 0<{\bf k}<\frac{1}{a}-1,\] for \({\bf k}=\widehat{{\bf k}},\widetilde{{\bf k}}\). Then for any positive set of initial data \([u_{0}(x),v_{0}(x)]\), the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((u^{*},v^{*})\) as \(t\to\infty\).

### The case of multiple interiors

The numerical simulations in Fig. 19 motivate the following conjecture:

Conjecture 2. For the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) that satisfies the parametric restriction \[m>1-ac-\mathbf{k},\quad m>\frac{2ac\mathbf{k}+ac-\mathbf{k}+1}{1+\mathbf{k}},\quad u(E)<0,\quad c>1,\quad 0<\mathbf{k}<\frac{1}{a}-1,\] for \(\mathbf{k}=\widehat{\mathbf{k}},\widetilde{\mathbf{k}}\), and \(u\) is a cubic polynomial given by (20). Then there exists sufficiently small initial data \([u_{0}(x),v_{0}(x)]\) (\(v_{0}(x)\ll u_{0}(x)\) pointwise), such that the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((u^{*},v^{*})\) as \(t\to\infty\), while there also exists sufficiently large initial data \([u_{1}(x),v_{1}(x)]\) (\(u_{1}(x)\ll v_{1}(x)\) pointwise) for which the solution \((u,v)\) to (32) converges uniformly to the spatially homogeneous state \((0,1)\) as \(t\to\infty\).

Figure 15: Numerical simulation of (32) for the case of strong competition in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=1.1\) and \(m=0.15\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.01,1.5]\) (b) \([u_{0},v_{0}]=[1.5,0.5]\).

Figure 16: Phase plots showing various dynamics under strong competition parametric restrictions. The parameters are chosen as \(a=0.2,b=0.2,c=1.1\) and \(m=0.15\).

## 9 Numerical Simulations

The MATLAB R2021b software was employed to conduct PDE simulations of the reaction-diffusion system (32) modeling Allelopathic Phytoplankton. These simulations considered spatially heterogeneous fear functions, denoted as \(k(x)\). The solution was obtained using the pdepe function, which solves initial-boundary value problems in a single spatial dimension. The computational task was performed on an 8-core CPU within an Apple M1 Pro-based workstation, taking approximately \(5-7\) seconds to complete when applied to the spatial domain interval \([0,\pi]\), which was divided into 1000 sub-intervals. Our theoretical findings and conjectures, specific to the spatially explicit context, were substantiated through a time series analysis conducted over an extended duration. Simulations were executed with parameters conforming to the constraints established by the theorems. In the spatially explicit setting, we used the standard comparison theory to derive point-wise constraints on the fear function \(k(x)\). This analysis observed competitive exclusion, strong competition, and multiple equilibria-type dynamics within the reaction-diffusion system featuring a spatially heterogeneous fear function. The outcomes of Theorems 14, 15, and 16, and Conjectures 1 and 2, provided clear evidence of these phenomena. To further validate our numerical results, we utilized Figures 13, 14, 15, 18, and 19. Theoretical results were rigorously validated through numerical experiments employing various heterogeneous fear functions. Each figure caption includes details regarding the parameters used for these simulations and their relevance to specific theorems. It is important to note that all parameter choices remained within the range \([0,5]\), consistent with the model and its comparison to the logistic equation, indicating that any species' population cannot exceed unity.

Figure 17: Numerical simulation for the reaction-diffusion system (32) of Allelopathic Phytoplankton with a fear function \(k(x)\) in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=1.1\) and \(m=0.15\). Equilibria: \(E_{1*}=(0.029,0.882)\), \(E_{1**}=(0.022,0.888)\), \(E_{2}=(0,1)\), \(E_{1}=(1,0)\) and \(E_{0}=(0,0)\). \(W_{s}^{1}(E_{1*})\) (\(k=4\)) and \(W_{s}(E_{1**})\) (\(k=5\)) are two separatrices passing through \(E_{1*}\) and \(E_{1**}\), respectively. The \(C^{1}\) property of the separatrices, \(W_{s}^{1}(E_{1*})\) and \(W_{s}(E_{1**})\), shows a wedge \(\mathbb{V}\) emanating from \(E_{1*}\), such that within \(\mathbb{V}\) we have \(W_{s}^{1}(E_{1*})\leq W_{s}(E_{1**})\). The \(u\)-nullcline is in red for \(k=4\) and \(k=5\). For \(k=4\), the \(v\)-nullcline is in orange. For \(k=5\), the \(v\)-nullcline is in magenta.

Figure 18: Numerical simulation of (32) for the case of weak competition in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.2,b=0.2,c=0.9\) and \(m=1.6\). The initial data are chosen (a) \([u_{0},v_{0}]=[4,4]\) (b) \([u_{0},v_{0}]=[0.1,0.1]\).

## 10 Summary and Conclusion

In this paper, we are the first to propose an allelopathic phytoplankton competition model influenced by the fear effect, where the parameters \(k\) and \(m\) denote the fear effect and the toxic release rate, respectively. Our study shows that \(k\) and \(m\) perturb the classical Lotka-Volterra competition model to cause rich dynamics. Meanwhile, \(k\) and \(m\) can significantly impact species density biologically. First, we give the conditions for persistence for system (6). When the persistence condition is satisfied, the two species will coexist. System (6) has three boundary equilibria. To study the positive equilibria, we construct a cubic function (20). By analyzing the graph of this function as well as the graph of its derivative, we find that there are at most two positive equilibria of system (6) and give the existence conditions for the corresponding cases. The next step is to analyze the stability of the equilibria. We investigate the Jacobian matrices corresponding to the boundary equilibria \(E_{0}\), \(E_{1}\), and \(E_{2}\), respectively. By analyzing the traces and determinants of these matrices, it is found that \(E_{0}\) is always a source. The fear effect \(k\) and the interspecific competition rate \(c\) to which the toxic species is subjected will affect \(E_{1}\) and \(E_{2}\), respectively. Furthermore, when the toxin release rate \(m\) reaches a certain threshold, either \(E_{1}\) or \(E_{2}\) will turn into a degenerate equilibrium point. For the positive equilibria, we have used (23) to study the relationship between the determinant of the Jacobian matrix and the slope of the tangent line, and further obtain that \(E_{1\star}\) is a saddle point and \(E_{2\star}\) is a stable node. At the point \(E_{3\star}\), we obtained that its determinant equals 0, so we translated this point to the origin and performed a Taylor expansion. Finally, we used Theorem 3.2 in Chapter 2 to prove that \(E_{3\star}\) is a saddle-node. In particular, we prove that system (6) has no closed orbit.
Combined with the persistence condition, the locally stable positive equilibrium \(E_{2\star}\) is also globally stable in system (6). In addition, by varying the fear effect \(k\) or the interspecific competition rate \(c\) to which the toxic species is subjected, system (6) will experience a transcritical bifurcation around \(E_{1}\) or \(E_{2}\). If the toxic release rate \(m=1-a-k\), the transcritical bifurcation experienced around \(E_{2}\) will turn into a pitchfork bifurcation. When the toxic release rate \(m\) is used directly as a bifurcation parameter, it results in a saddle-node bifurcation of system (6) in the first quadrant.

Figure 19: Numerical simulation of (32) for the case of bi-stability in \(\Omega=[0,\pi]\). The parameters are chosen as \(d_{1}=1,d_{2}=1,a=0.3,b=0.2,c=1.1\) and \(m=0.5\). The initial data are chosen (a) \([u_{0},v_{0}]=[0.05,2]\) (b) \([u_{0},v_{0}]=[2,2]\).

In essence, these results are seen in the spatially explicit case as well. For a large fear coefficient, extinction (for certain initial data) is seen for the non-toxic, fearful species (see Theorem 14). Depending on the interplay of the other parameters, one sees a strong-competition-type setting (see Theorem 16). Future work will explore a spatially heterogeneous toxic release rate \(m\) (perhaps even one that causes degeneracy), as well as different forms of this rate, including the non-smooth case (Parshad, 2021; Antwi-Fordjour, 2020). We will also explore global stability of the interior equilibrium in the PDE case, as well as the existence of non-constant steady states.

To summarize all of the above analysis, the two species can coexist only if the fear effect \(k\) is within the interval \((0,k^{*})\). As for the toxic release rate \(m\), it does not directly change the survival of the non-toxic species but only affects the species' density. We can conclude that, in the allelopathic phytoplankton competition model, the real cause of the extinction of the non-toxic species is the fear induced by the toxic species rather than the toxins themselves. This article thus provides some guidance for the conservation of species diversity.
## Appendix A \[g_{11}= \frac{\left(E^{2}m+1\right)Q_{1}b}{Q_{2}^{2}},\quad g_{02}=\frac{ \left(2Ek+1\right)\left(-1+E\right)Q_{3}}{Q_{2}^{2}\left(Ek+1\right)^{4} \left(E^{2}m+1\right)^{2}},\quad f_{20}=-\frac{Q_{4}}{Q_{2}^{2}\left(2Ek+1 \right)^{2}},\] \[f_{11}= -\frac{bE\left(Ek+1\right)^{2}\left(E^{2}m+1\right)Q_{5}}{Q_{2}^ {2}\left(2Ek+1\right)},\quad f_{02}=-\frac{Q_{6}}{Q_{2}^{2}\left(Ek+1\right)^{ 2}\left(E^{2}m+1\right)},\] \[Q_{1}= 4E^{6}bk^{3}m-6E^{5}bk^{3}m+7E^{5}b^{2}m-12E^{4}bk^{2}m+4E^{4} bkm+4E^{4}k^{2}m-2E^{3}bk^{3}-3E^{3}bk^{2}-8E^{3}bkm\] \[-4E^{3}k^{2}m+E^{3}bm+4E^{3}k^{2}+4E^{3}km-2E^{2}bk^{2}-4E^{2}bk-2 E^{2}bm-4E^{2}k^{2}-4E^{2}km+4E^{2}k\] \[+E^{2}m-bE-4Ek-Em+E-1,\] \[Q_{2}= E^{5}bk^{2}m+2E^{4}bkm+E^{3}bk^{2}+E^{3}bm-2E^{3}km+2E^{2}bk+2E^{2 }km-2E^{2}k-E^{2}m+bE\] \[+2Ek+Em-E+1,\] \[Q_{3}= 3E^{11}b^{2}k^{5}m^{3}+2E^{10}b^{2}k^{5}m^{2}+12E^{10}b^{2}k^{4} m^{3}+7E^{9}b^{2}k^{5}m^{2}+9E^{9}b^{2}k^{4}m^{2}+19E^{9}b^{2}k^{3}m^{3}-4E^{9} bk^{4}m^{3}\] \[+4E^{8}b^{2}k^{5}m+27E^{8}b^{2}k^{4}m^{2}+16E^{8}b^{2}k^{3}m^{2}+ 15E^{8}b^{2}k^{2}m^{3}-12E^{8}bk^{4}m^{2}-12E^{8}bk^{3}m^{3}+5E^{7}b^{2}k^{5}m\] \[+18E^{7}b^{2}k^{4}m+41E^{7}b^{2}k^{3}m^{2}+14E^{7}b^{2}k^{2}m^{2} +6E^{7}b^{2}km^{3}-8E^{7}bk^{4}m-36E^{7}bk^{3}m^{2}-13E^{7}bk^{2}m^{3}\] \[+8E^{7}k^{3}m^{3}+2E^{6}b^{2}k^{5}+18E^{6}b^{2}k^{4}m+32E^{6}b^{2 }k^{3}m+31E^{6}b^{2}k^{2}m^{2}-8E^{6}b\,k^{4}m-8E^{6}k^{3}m^{3}+E^{5}b^{2}k^{5}\] \[+6E^{6}b^{2}k^{2}m^{2}+E^{6}b^{2}m^{3}-24E^{6}b\,k^{3}m-39E^{6}b \,k^{2}m^{2}-6E^{6}b\,k^{3}m^{3}+24E^{6}k^{3}m^{2}+12E^{6}k^{2}m^{3}+9E^{5}b^{2 }k^{4}\] \[+25E^{5}b^{2}k^{3}m+4E^{5}b^{4}m+28E^{5}b^{2}k^{2}m+12E^{5}b^{2}k \,m^{2}-8E^{5}b\,k^{4}-24E^{5}b\,k^{3}m-24E^{5}k^{3}m^{2}\] \[-12E^{5}k^{2}m^{3}+3E^{4}b^{2}k^{4}+E^{5}b^{2}m^{2}-26E^{5}b\,k^{ 2}m-18E^{5}bk\,m^{2}-E^{5}b\,m^{3}+24E^{5}k^{3}m+36E^{5}k^{2}m^{2}\] \[+6E^{5}k\,m^{3}+16E^{4}b^{2}k^{3}+17E^{4}b^{2}k^{2}m+4E^{4}b\,k^{4 }+12E^{4}b\,k^{3}m+12E^{4}b^{2}km+2E^{4}b^{2}m^{2}-24E^{4}b\,k^{3}\] \[-26E^{4}b\,k^{2}m-24E^{4}k^{3}m-36E^{4}k^{2}m^{2}-6E^{4}b\,m^{3}+3 E^{3}b^{2}k^{3}-12E^{4}bkm-3E^{4}b\,m^{2}+8E^{4}k^{3}\] \[+36E^{4}k^{2}m+18E^{4}k\,m^{2}+E^{4}m^{3}+14E^{3}b^{2}k^{2}+6E^{3} b^{2}km+12E^{3}b^{3}+13E^{3}b^{2}m+2E^{3}b^{2}m\] \[-26E^{3}b\,k^{2}-12E^{3}bkm-8E^{3}k^{3}-36E^{3}k^{2}m-18E^{3}k\,m^{ 2}-E^{3}m^{3}+k^{2}b^{2}E^{2}-2E^{3}bm+12E^{3}k^{2}\] \[+18E^{3}km+3E^{3}m^{2}+6E^{2}b^{2}k+E^{2}b^{2}m+13E^{2}b\,k^{2}+6E^ {2}bkm-12E^{2}bk-2E^{2}bm-12E^{2}k^{2}\] \[-18E^{2}km-3E^{2}m^{2}+6E^{2}k+3E^{2}m+E\,b^{2}+6Ebk+Ebm-2bE-6Ek-3 Em+E+b-1,\] \[Q_{4}= \left(E^{2}m+1\right)^{3}\left(Ek+1\right)^{5}\left(-1+E\right) \left(3E^{2}k^{2}m+3Ekm+k^{2}+m\right)E^{2}b^{2},\] \[Q_{5}= E^{6}b^{2}k^{4}m^{2}+4E^{8}b^{2}k^{3}m^{2}+2E^{7}b^{2}k^{4}m+6E^{7}b^ {2}k^{2}m^{2}+8E^{6}b^{2}k^{3}m-2E^{6}b\,k^{3}m^{2}+4E^{6}b^{2}km^{2}\] \[-4E^{6}b\,k^{3}m-3E^{6}b\,k^{2}m^{2}+E^{5}b^{2}k^{4}+12E^{5}b^{2}k^ {2}m+4E^{5}b\,k^{3}m-2E^{5}b\,k^{2}m^{2}+E^{5}b^{2}m^{2}-10E^{5}b\,k^{2}m\] \[-4E^{5}bk\,m^{2}+8E^{5}k^{2}m^{2}+4E^{4}b^{2}k^{3}-4E^{4}b\,k^{3}m+8E^{ 4}b^{2}km-4E^{4}b\,k^{3}+4E^{4}b\,k^{2}m-12E^{4}k^{2}m^{2}\] \[-8E^{4}bkm-E^{4}b\,m^{2}+12E^{4}k^{2}m+8E^{4}k\,m^{2}+6E^{3}b^{2}k^{ 2}+4E^{3}b\,k^{3}-4E^{3}b\,k^{2}m+4E^{3}k^{2}m^{2}+2E^{3}b^{2}m\] \[-10E^{3}b\,k^{2}-16E^{3}k^{2}m-12E^{3}k\,m^{2}-2E^{2}b\,k^{3}-2E^{ 3}bm+4E^{3}k^{2}+12E^{3}km+2E^{3}m^{2}+4E^{2}b^{2}k\] \[+7E^{2}b\,k^{2}+4E^{2}k^{2}m+4E^{2}k\,m^{2}-8E^{2}bk-4E^{2}k^{2}-16E ^{2}km-3E^{2}m^{2}-2Eb\,k^{2}+4E^{2}k+3E^{2}m\] \[+b^{2}E+4Ebk+4Ekm+E\,m^{2}-2Eb-4Ek-4Em+E+b+m-1,\] \[Q6= 
E^{14}b^{3}k^{m}3+6E^{13}b^{3}k^{5}m^{3}+3E^{12}b^{3}k^{6}m^{2}+15 E^{12}b^{3}k^{4}m^{3}-E^{12}b^{2}k^{5}m^{3}+18E^{11}b^{3}k^{5}m^{2}+E^{11}b^{2}k ^{5}m^{3}\] \[+20E^{11}b^{3}k^{3}m^{3}-2E^{11}b^{2}k^{5}m^{2}-6E^{11}b^{2}k^{4}m^ {3}+3E^{10}b^{3}k^{6}m+45E^{10}b^{3}k^{4}m^{2}+E^{10}b^{2}k^{5}m^{2}\] \[+6E^{10}b^{2}k^{4}m^{3}+15E^{10}b^{3}k^{2}m^{3}-9E^{10}b^{2}k^{4}m^ {2}-13E^{10}b^{2}k^{3}m^{3}+18E^{9}b^{3}k^{5}m+E^{9}b^{2}k^{5}m^{2}+60E^{9}b^{3} k^{3}m^{2}\] \[-4E^{9}b^{2}k^{5}m+13E^{9}b^{2}k^{3}m^{3}-4E^{9}b\,k^{4}m^{3}+E^{8 }b^{3}k^{6}+6E^{9}b^{3}k^{3}m^{3}-16E^{9}b^{2}k^{3}m^{2}-13E^{9}b^{2}k^{2}m^{3}\] \[-4E^{9}b\,k^{4}m^{2}+45E^{8}b^{3}k^{4}m+5E^{8}b^{2}k^{5}m+9E^{8}b^ {2}k^{4}m^{2}+4E^{8}b\,k^{4}m^{3}+45E^{8}b^{3}k^{2}m^{2}-18E^{8}b^{2}k^{4}m\] \[-7E^{8}b^{2}k^{3}m^{2}+13E^{8}b^{2}k^{2}m^{3}-12E^{8}b\,k^{3}m^{3} +6E^{7}b^{3}k^{5}-E^{7}b^{2}k^{5}m+E^{8}b^{3}m^{3}-14E^{8}b^{2}k^{2}m^{2}\] \[-6E^{8}b^{2}k^{3}m^{4}-4E^{8}b\,k^{4}m-12E^{8}b\,k^{3}m^{2}+8E^{8}b ^{3}m^{3}+60E^{7}b^{3}k^{3}m-22E^{7}b^{2}k^{5}+18E^{7}b^{2}k^{4}m\] \[+23E^{7}b^{2}k^{3}m^{2}+12E^{7}b\,k^{3}m^{3}+18E^{7}b^{3}k^{m}-23E ^{7}b^{2}k^{3}m-11E^{7}b^{2}k^{2}m^{2}+6E^{7}b^{2}k\,m^{3}-13E^{7}b\,k^{2}m^{3}\] \[-16E^{7}k^{3}m^{3}+15E^{6}b^{3}k^{4}+3E^{6}b^{2}k^{5}+4E^{6}b^{4}m ^{2}-6E^{7}b^{2}k^{2}m^{2}-E^{7}b^{2}m^{3}-12E^{7}b\,k^{3}m-13E^{7}b\,k^{2}m^{2}\] \[+24E^{7}k^{3}m^{2}+12E^{7}k^{2}m^{3}+45E^{6}b^{3}k^{2}m-9E^{6}b^{2 }k^{4}+25E^{6}b^{2}k^{3}m+25E^{6}b^{2}k^{2}m^{2}+13E^{6}b\,k^{2}m^{3}\] \[+8E^{6}k^{3}m^{3}-E^{5}b^{2}k^{5}+3E^{6}b^{3}m^{2}-28E^{6}b^{2}k^{ 2}m-6E^{6}b^{2}k^{2}m^{2}+E^{6}b^{2}m^{3}-4E^{6}b\,k^{4}-6E^{6}b\,k^{3}m^{3}\] \[-48E^{6}k^{3}m^{2}-24E^{6}k^{2}m^{3}+20E^{5}b^{3}k^{3}+12E^{5}b^{2 }k^{4}+7E^{5}b^{2}k^{3}m+4E^{5}b\,k^{4}m+12E^{5}b\,k^{3}m^{2}-E^{6}b^{2}m^{2}\] \[-13E^{6}b\,k^{2}m-6E^{6}b\,m^{2}+24E^{6}k^{3}m+36E^{6}k^{2}m^{2}+6 E^{6}k\,m^{3}+18E^{5}b^{3}km-16E^{5}b^{2}k^{3}+17E^{5}b^{2}k^{2}m\] \[+12E^{5}b^{2}k^{2}m^{2}+4E^{5}b\,k^{4}+6E^{5}b\,km^{3}+24E^{5}k^{3} m^{2}+12E^{5}k^{2}m^{3}-3E^{4}b^{2}k^{4}-12E^{5}b^{2}km-E^{5}b^{2}m^{2}\] \[-12E^{5}b\,k^{3}-E^{5}b\,m^{3}-48E^{5}k^{3}m-72E^{5}k^{2}m^{2}-12E ^{5}k\,m^{3}+15E^{4}b^{3}k^{2}+19E^{4}b^{2}k^{3}+11E^{4}b^{2}k^{2}m\] \[+12E^{4}b\,k^{3}m+13E^{4}b\,k^{2}m^{2}-6E^{5}bkm-E^{5}b\,m^{2}+8E^{ 5}k^{3}+36E^{5}k^{2}m+18E^{5}k\,m^{2}+E^{5}m^{3}+3E^{4}b^{3}m\] \[-14E^{4}b^{2}k^{2}+6E^{4}b^{2}km+2E^{4}b^{2}m^{2}+12E^{4}b\,k^{3}+E ^{4}b\,m^{3}+24E^{4}k^{3}m+36E^{4}k^{2}m^{2}+6E^{4}k\,m^{3}\] \[-3E^{3}b^{2}k^{3}-2E^{4}b^{2}m-13E^{4}b\,k^{2}-16E^{4}k^{3}-72E^{ 4}k^{2}m-36E^{4}k\,m^{2}-2E^{4}m^{3}+6E^{3}b^{3}k+15E^{3}b^{2}k^{2}\] \[+6E^{3}b^{2}km+13E^{3}b\,k^{2}m+6E^{3}bk\,m^{2}-E^{4}bm+12E^{4}k^{ 2}+18E^{4}km+3E^{4}m^{2}-6E^{3}b^{2}k+E^{3}b^{2}m\] \[+13E^{3}b\,k^{2}+8E^{3}k^{3}+36E^{3}k^{2}m+18E^{3}k\,m^{2}+E^{3}m^{ 3}-E^{2}b^{2}k^{2}-6E^{3}bk-24E^{3}k^{2}-36E^{3}km-6E^{3}m^{2}\] \[+E^{2}b^{3}+6E^{2}b^{2}k+E^{2}b^{2}m+6E^{2}bkm+E^{2}b\,m^{2}+6E^{3}k+3E ^{3}m-E^{2}b^{2}+6E^{2}bk+12E^{2}k^{2}+18E^{2}km\]
This paper is the first to propose an allelopathic phytoplankton competition model incorporating a fear effect grounded in natural biological phenomena. The fear-effect and allelopathy terms reveal rich interactions in the competition model, which exhibits features such as global stability, transcritical bifurcation, pitchfork bifurcation, and saddle-node bifurcation. A spatially explicit version of the model is also examined and yields similar results. Numerical simulations verify the validity of the theoretical analysis. The results suggest that the main cause of the suppression of the non-toxic species by the toxic species is the fear the non-toxic species experiences towards the toxic species; allelopathy only affects the population density of the non-toxic species. This discussion provides guidance for species conservation and the maintenance of biodiversity.
2309.09923
An HST survey of 33 T8 to Y1 brown dwarfs: NIR photometry and multiplicity of the coldest isolated objects
We present results from a Hubble Space Telescope imaging search for low-mass binary and planetary companions to 33 nearby brown dwarfs with spectral types of T8-Y1. Our survey provides new photometric information for these faint systems, from which we obtained model-derived luminosities, masses and temperatures. Despite achieving a deep sensitivity to faint companions beyond 0.2-0.5'', down to mass ratios of 0.4-0.7 outside ~5 au, we find no companions to our substellar primaries. From our derived survey completeness, we place an upper limit of f < 4.9% at the 1-sigma level (< 13.0% at the 2-sigma level) on the binary frequency of these objects over the separation range 1-1000 au and for mass ratios above q = 0.4. Our results confirm that companions are extremely rare around the lowest-mass and coldest isolated brown dwarfs, continuing the marginal trend of decreasing binary fraction with primary mass observed throughout the stellar and substellar regimes. These findings support the idea that if a significant population of binaries exist around such low-mass objects, it should lie primarily below 2-3 au separations, with a true peak possibly located at even tighter orbital separations for Y dwarfs.
Clemence Fontanive, Luigi R. Bedin, Matthew De Furio, Beth Biller, Jay Anderson, Mariangela Bonavita, Katelyn Allers, Blake Pantoja
2023-09-18T16:37:15
http://arxiv.org/abs/2309.09923v1
An HST survey of 33 T8 to Y1 brown dwarfs: NIR photometry and multiplicity of the coldest isolated objects ###### Abstract We present results from a Hubble Space Telescope imaging search for low-mass binary and planetary companions to 33 nearby brown dwarfs with spectral types of T8-Y1. Our survey provides new photometric information for these faint systems, from which we obtained model-derived luminosities, masses and temperatures. Despite achieving a deep sensitivity to faint companions beyond 0.2-0.5\({}^{\prime\prime}\), down to mass ratios of 0.4-0.7 outside \(\sim\)5 au, we find no companions to our substellar primaries. From our derived survey completeness, we place an upper limit of \(f<4.9\%\) at the 1-\(\sigma\) level (\(<13.0\%\) at the 2-\(\sigma\) level) on the binary frequency of these objects over the separation range 1-1000 au and for mass ratios above \(q=0.4\). Our results confirm that companions are extremely rare around the lowest-mass and coldest isolated brown dwarfs, continuing the marginal trend of decreasing binary fraction with primary mass observed throughout the stellar and substellar regimes. These findings support the idea that if a significant population of binaries exist around such low-mass objects, it should lie primarily below 2-3 au separations, with a true peak possibly located at even tighter orbital separations for Y dwarfs. keywords: brown dwarfs - binaries: visual - stars: fundamental parameters - stars: imaging ## 1 Introduction Multiplicity, as a direct outcome of formation, is a fundamental parameter in the study of populations of stars and brown dwarfs. Multiplicity studies can help disentangle different formation mechanisms that produce distinct binary outcomes. The binary properties (binary rate, orbital separation and mass ratio distributions) of field objects show a continuous decrease in binary frequency from \(\sim\)50% for Sun-like stars (Raghavan et al., 2010), to \(\sim\)30% for M dwarfs (Ward-Duong et al., 2015; Winters et al., 2019) and \(\sim\)10-20% for L-T brown dwarfs (Close et al., 2003; Burgasser et al., 2006; Reid et al., 2006; Aberasturi et al., 2014), with later-type binaries found to be more compact and with a clear tendency towards equal-mass systems. This observed smooth continuity in multiplicity properties, from massive stars, across the substellar limit, down to late-type brown dwarfs, points to a common formation mechanism between the stellar and substellar regimes (Allen, 2007). Fontanive et al. (2018) placed the first statistically-robust constraints to date on the binary fraction of \(\geq\)T8 ultra-cool dwarfs (\(T_{\rm eff}<800\) K), finding an inherently low binary rate (\(8\pm 6\%\)), with a very tight binary separation distribution peaking around \(\sim\)3 au, and a \(<\)1% binary rate beyond 10 au. While these results confirm previous trends, they remain severely limited by small sample sizes and low numbers of detections. In particular, the sample probed by Fontanive et al. (2018) only included 3 Y-type primaries (\(T_{\rm eff}<500\) K), all Y0 brown dwarfs. Larger samples are thus still required to confirm whether these tendencies also hold for the very latest spectral types and lowest primary masses, where only high mass ratio binaries (\(q\gtrsim 0.7\)) have been probed within 5-10 au separations (Opitz et al., 2016; Fontanive et al., 2018). Probing distinct stages in the lives of brown dwarfs is similarly critical to discriminate between primordial formation and subsequent dynamical evolution. 
A number of wide-separation, low-mass brown dwarf binaries have been discovered in young star-forming regions, including 2M1207 (25 and 5 M\({}_{\rm Jup}\), 40 au; Chauvin et al., 2005) and 2MASS 0441+2301 (20 and 10 M\({}_{\rm Jup}\), 15 au; Todorov et al., 2010) (see also recent discoveries by De Furio et al., 2022). Yet more surprising systems like the 12-M\({}_{\rm Jup}\) tertiary component orbiting at 2000 au from the more massive 2MASS 0249\(-\)0557 AB binary (48 and 44 M\({}_{\rm Jup}\); Dupuy et al., 2018), or the recently-discovered -Oph 98 system (15 and 8 M\({}_{\rm Jup}\), 200 au Fontanive et al., 2021) challenge these theories even further. The strong lack of binaries with such large orbital separations and uneven component masses within the evolved field population, compared to objects of similar masses in young associations, is seen as evidence that such weakly-bound system do not survive to field ages (Burgasser et al., 2007; Biller et al., 2011), providing valuable insights into the result of formation and evolutionary patterns. Nonetheless, some rare examples of wide, late-type binaries with low mass ratios are known to exist. For example, WISE 1217+1626 (T9+Y0, 12 and 8 M\({}_{\rm Jup}\)) and WISE J1711+3500 (T8+T9.5, 20 and 9 M\({}_{\rm Jup}\)) have separations of \(\sim\)8-15 au (Liu et al., 2012), making them close analogues to 2M1207 and 2MASS 0441+2301 in terms of binary configurations. These systems are difficult to reconcile with most formation scenarios for brown dwarfs, that only allow tight binaries to survive (e.g., ejection scenario, Reipurth & Clarke, 2001; disc fragmentation and binary disruption, Goodwin & Whitworth, 2007). Given that very few such low-mass primaries have been probed for binarity on these wide separations and down to such low mass ratios, it is unclear whether these peculiar discoveries are simply uncommon, or whether more field counterparts to the most extreme young systems exist. Nevertheless, no field counterparts to the most extreme young systems, with separations of hundreds of au, have been identified to date for brown dwarf primaries with masses below \(\sim\)40 M\({}_{\rm Jup}\). The discovery of such extreme systems at advanced ages would represent key elements to help us identify and further understand the mechanisms at play in the formation of the observed brown dwarf population. In this paper, we present _Hubble Space Telescope_ (_HST_) observations of 33 T8-Y1 brown dwarfs to search for low-mass companions to these systems. With 9 Y dwarf primaries, this survey contains the largest Y-dwarf sample probed for multiplicity to date. The goals of this campaign are (1) to refine the binary statistics of the very latest-type objects, currently poorly constrained, (2) to confirm whether widely-separated (tens of au) low mass ratio systems are indeed more common around \(>\)T8 dwarfs than around their more massive, earlier-type counterparts, and (3) to search for the first extremely wide (hundreds of au) binary companion to a very late-type field-age primary. Collectively answering these questions will place strong constraints on the frequency of wide, planetary-mass companions at old ages, providing key insights into the formation and dynamical evolution of these systems. We describe the sample and observations in Section 2. Sections 3 and 4 present the data analyses of the science targets and the companion search, respectively. Statistical results of the multiplicity survey are presented in Section 4.3, and discussed in Section 5. 
Our results are summarised in Section 6. ## 2 HST/WFC3 Observations ### Sample Selection The studied sample consists of 33 T8 and later-type nearby (\(<\) 30 pc) field brown dwarfs, previously identified via the _Wide-Field Infrared Survey Explorer_ (_WISE_; Wright et al., 2010). The sample includes the 22 unobserved targets from _HST_ snapshot program SNAP 12873 (PI Biller), which was published in Fontanive et al. (2018) for the 12 successfully observed systems. To these were added 11 brown dwarfs from Schneider et al. (2015) that hadn't already been targeted in a multiplicity search with comparable detection limits to the predicted sensitivity of the designed _HST_ program (GO 15201, PI Fontanive). The observed targets are listed in Table 1. The full target designations are given in the table in the form WISE Jhhmms.ss4ddmsms.ss (except for CFBDS J013302+023128). We abbreviate source names to the short form hhmm\(\pm\)ddmm hereafter. With reported spectral types \(\geq\)T8 and estimated masses \(\lesssim\)40 M\({}_{\rm Jup}\) (see Section 3.2), these objects are some of the coolest and lowest-mass known brown dwarfs in the Solar neighbourhood. The sample includes 24 late-Ts and 9 Y dwarfs, making it the largest sample of Y dwarfs targeted for binarity to date. ### Observing Strategy All targets were observed with the Infrared (IR) channel of the Wide Field Camera 3 (WFC3) instrument on the _Hubble Space Telescope_, as part of GO program 15201 (PI Fontanive). Each object was observed for a full orbit, split equally between the F127M and F139M filters, using a similar strategy to that from _HST_ SNAP 12873 and described in Fontanive et al. (2018). The combination of these bandpasses exploits a water absorption feature seen in late-type objects at 1.4 \(\mu\)m which can be used to identify low-mass objects in the field of view based on F127M-F139M colours. Indeed, subtellar spectra exhibit a deep water band covered by the F139M filter and not seen in stars with spectral types early than Mo, while the F127M filter covers the \(J\)-band peak of brown dwarfs. A considerable drop in flux between the two bandpasses can hence be used to robustly identify the targets and possible faint companions (see Fontanive et al., 2018 for details). A total of 4 exposures of \(\sim\)300 s were taken in each filter in MULTIACCUM mode, for a total exposure time of \(\sim\)1200 s in each band. The specific NSAMP and SAMP-SEQ instrument parameters were adapted based on the visibility time of each orbit, in order to maximise the integration time for each target, and are reported in Table 2. In each filter, the 4 individual images were acquired along a 4-point box pattern, using large spacings of 2-3\({}^{\prime\prime}\) between dithered positions. Table 2 provides a summary of the observations. ### Data Reduction In our reduction procedures, we used the _flt images exclusively, which are corrected via standard calibrations (bias, dark and flat field), provide fluxes in units of electron per second, but preserve the pixel data with their original sampling, a fundamental characteristic for careful stellar profile fitting in under-sampled images (Anderson & King, 2000). We measured positions and fluxes in each of these images using the hstipass code, which is a generalisation of the software img2xym_WFC, initially developed to perform Point Spread Function (PSF) fitting in the Wide Field Channel (WFC) of the Advanced Camera for Surveys (ACS) of _HST_(Anderson & King, 2006). 
The code perturbs a library PSF in order to empirically find the best spatially variable PSF for each image. With this PSF, hstpass then runs a single pass of source findings without performing neighbour subtraction, so as to obtain initial positions and fluxes. Positions and fluxes are then corrected for the geometric distortion (Anderson, 2016). The library PSFs and the WFC3/IR geometric distortion developed by J. Anderson are both publicly available.1 Footnote 1: [https://www.stsci.edu/~jayander/STDPSFs/](https://www.stsci.edu/~jayander/STDPSFs/) and [https://www.stsci.edu/~jayander/STDCDCs/](https://www.stsci.edu/~jayander/STDCDCs/) For each of our targets, one of these single-exposure catalogues in F127M was adopted as the common pixel-based reference coordinate system for the particular target's field. We then used well-measured unsaturated stars to derive general six-parameter linear transformations to transform stellar positions from all other images into the common reference frame for the selected target. Similarly, magnitudes were zero-pointed to the adopted common reference frame in each filter. ### Source Detection and Photometry Once this preliminary photometry had provided all the transformations from all exposures into a common coordinate and photometric reference system, we ran a more sophisticated computer program to obtain photometry, as well as Artificial Star Tests (ASTs; see Section 2.5). This software package, K92, developed by J. Anderson (Anderson et al., 2008) \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Target Name & Short Name & RA & DEC & Disc. & SpT & SpT & \(\varpi_{\rm abs}\) & Astrom. \\ & & (J2000) & (J2000) & Ref. & (IR) & Ref. & (mas) & Ref. \\ \hline WISE J000517.48+373720.5 & WISE 0005+3737 & 00:05:17.48 & +37:37:20.5 & (1) & T9.0 & (1) & \(126.9\pm 2.1\) & (10) \\ WISE J00150.587\(-\)461517.6 & WISE 0015\(-\)4615 & 00:15:05.88 & \(-\)46:15:17.7 & (2) & T8.0 & (2) & \(75.2\pm 2.4\) & (10) \\ WISE J003231.09\(-\)494651.4 & WISE 0032\(-\)4946 & 00:32:31.09 & \(-\)49:46:51.5 & (2) & T8.5 & (2) & \(60.8\pm 2.5\) & (10) \\ WISE J003829.05+275852.1 & WISE 0038+2758 & 00:38:29.06 & +27:58:52.1 & (1) & T9.0 & (1) & \(88.2\pm 2.0\) & (10) \\ WISE J004945.61+215120.0 & WISE 0049+2151 & 00:49:46.09 & +21:51:20.4 & (1) & T8.5 & (1) & \(140.4\pm 2.1\) & (10) \\ CFBDS J013302+023128 & CFBDS 0133+0231 & 01:33:02.48 & +02:31:28.9 & (3) & T8.5 & (3) & \(53.1\pm 2.6\) & (10) \\ WISE J032517.69\(-\)385454.1 & WISE 0325\(-\)3854 & 03:25:17.69 & \(-\)38:54:54.1 & (1) & T9.0 & (1) & \(60.2\pm 3.5\) & (10) \\ WISE J032504.33\(-\)504400.3 & WISE 0325\(-\)5044 & 03:25:04.52 & \(-\)50:44:03.0 & (4) & T8.0 & (4) & \(36.7\pm 2.7\) & (11) \\ WISE J035000.32\(-\)565830.2 & WISE 0350\(-\)56583 & 03:50:00.33 & \(-\)56:58:30.2 & (2) & Y1.0 & (2) & \(176.4\pm 2.3\) & (10) \\ WISE J035934.06\(-\)540154.6 & WISE 0359\(-\)5401 & 03:59:34.07 & \(-\)54:01:54.6 & (2) & Y0.0 & (2) & \(73.6\pm 2.0\) & (10) \\ WISE J0404134.48\(-\)462029.9 & WISE 04040\(-\)6420 & 04:04:43.50 & \(-\)64:20:30.0 & (4) & T9.0 & (4) & \(44.8\pm 2.2\) & (10) \\ WISE J041024.271\(+\)150248.4 & WISE 0410+1502 & 04:01:02.27 & +11:50:24.8 & (5) & Y0.0 & (5) & \(151.3\pm 2.0\) & (10) \\ WISE J041358.14\(-\)475039.3 & WISE 0413\(-\)4750 & 04:13:58.14 & \(-\)47:50:39.3 & (1) & T9.0 & (1) & \(50.7\pm 3.3\) & (10) \\ WISE J074457.25+562821.0 & WISE 0744+5628 & 07:44:57.25 & \(+\)56:28:21.0 & (6) & T8.0 & (6) & \(65.3\pm 2.0\) & (10) \\ WISE J081117.81\(-\)805141.3 & WISE 0811\(-\)8051 & 08:11:17.82 & 
\(-\)80:51:41.4 & (1) & T9.5 & (1) & \(99.1\pm 7.7\) & (11,12) \\ WISE J081220.04+402106.2 & WISE 0812+4021 & 08:12:20.04 & +40:21:06.3 & (1) & T8.0 & (1) & \(34.3\pm 2.71\) & (10) \\ WISE J085716.24+560407.6 & WISE 0857+5604 & 08:57:16.25 & \(+\)56:04:07.7 & (6) & T8.0 & (6) & \(85.3\pm 2.1\) & (10) \\ WISE J094305.98+360723.5 & WISE 0943+3607 & 09:43:05.99 & \(+\)36:07:23.6 & (7) & T9.5 & (7) & \(97.1\pm 2.9\) & (10) \\ WISE J105130.01\(-\)213859.7 & WISE 1051\(-\)2138 & 10:51:30.02 & \(-\)21:38:59.7 & (1) & T8.5 & (8) & \(64.0\pm 2.3\) & (10) \\ WISE J120604.38+840110.6 & WISE 1206+8401 & 12:06:04.39 & \(+\)84:01:10.7 & (4) & Y0.0 & (4) & \(84.7\pm 2.1\) & (10) \\ WISE J131833.98\(-\)175826.5 & WISE 1318\(-\)1758 & 13:18:33.98 & \(-\)17:58:26.5 & (1) & T8.0 & (8) & \(63.5\pm 2.2\) & (10) \\ WISE J154151.65\(-\)225024.9 & WISE 1541\(-\)2250 & 15:41:51.66 & \(-\)22:50:25.0 & (5) & Y1.0 & (4) & \(168.6\pm 2.2\) & (13) \\ WISE J163940.83\(-\)884738.6 & WISE 1639\(-\)6847 & 16:39:04.84 & \(-\)68:47:38.6 & (9) & Y0.0p & (4) & \(211.1\pm 0.6\) & (14) \\ WISE J173853.53\(-\)7327329.0 & WISE 1738+2732 & 17:38:35.53 & \(+\)27:32:59.1 & (5) & Y0.0 & (5) & \(130.9\pm 2.1\) & (10) \\ WISE J021902.76\(-\)114807.5 & WISE 2019\(-\)1148 & 20:19:29.70 & \(-\)11:48:07.6 & (1) & T8.0 & (1) & \(79.9\pm 2.7\) & (10) \\ WISE J205628.91+14593.2 & WISE 2056+1459 & 20:56:28.92 & \(+\)14:59:53.2 & (5) & Y0.0 & (5) & \(140.8\pm 2.0\) & (10) \\ WISE J210200.15\(-\)442919.5 & WISE 2102\(-\)4429 & 21:02:00.16 & \(-\)44:29:19.5 & (2) & T9.0 & (2) & \(92.9\pm 1.9\) & (11,12) \\ WISE J221216.33\(-\)693121.6 & WISE uses the previously obtained transformations and PSFs to simultaneously find and measure stars in all of the individual exposures, across the full fields of view of the images, and for both filters. Combining the multiple exposures, KS2 finds and measures those faint stars that would otherwise be lost in the noise of single images. Detailed descriptions and examples of the usage of KS2 are given in Bellini et al. (2017) and Nardiello et al. (2018), and in a more recent application by Scalco et al. (2021). Given the sparse nature of the studied fields, which gave less control on the exact PSF shapes, we employed a single wave of finding, therefore limiting the search to sources separated by more than a pixel from each other. Along with photometry and astrometry, KS2 also provides important diagnostic parameters, such as: the quality-fit ("QFIT") parameter, which gives a measure of how accurate the PSF fit is, the photometric root mean square ("rms") for single exposure measurements, the contamination parameter from neighbours ("o"), the local sky value ("SKY") and its root mean square ("rmsSKY"), and the "RADXS" parameter (defined as in Bedin et al., 2008) which quantifies how much the flux distribution of a source represents that of the PSF. Extended sources have large positive RADXS values, hot pixels and cosmic rays have large negative values, and point-sources have values close to 0. With detailed transformation in positions and magnitudes, we also created stacked images of the astronomical scene (one per filter) super-sampled by a factor 2 (as described in Scalco et al., 2021). Photometry was zero-pointed to the Vega-magnitude system following the recipes in Bedin et al. (2005), using encircled energy and ZP available in the \begin{table} \begin{tabular}{l c c c c c} \hline \hline Target Name & Obs. 
Date & \multicolumn{2}{c}{WFC3/IR F127M} & \multicolumn{2}{c}{WFC3/IR F139M} \\ \cline{3-5} & (UT) & NSAMP/SAMP-SEQ & t (s) & NSAMP/SAMP-SEQ & t (s) \\ \hline WISE 0005+3737 & 2018 Nov 17 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0015\(-\)4615 & 2018 Mar 31 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0032\(-\)4946 & 2018 May 25 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0038+2758 & 2017 Oct 25 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 0049+2151 & 2017 Oct 26 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ CFBDS 0133+0231 & 2018 Feb 10 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 0325\(-\)3854 & 2018 May 13 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0325\(-\)5044 & 2018 Mar 23 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0350\(-\)5658 & 2018 Aug 27 & 13/SPAR25 & 1311.756 & 13/SPAR25 & 1311.756 \\ WISE 0359\(-\)5401 & 2019 Jan 27 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0404\(-\)6420 & 2018 Aug 24 & 13/SPAR25 & 1311.756 & 13/SPAR25 & 1311.756 \\ WISE 0410+1502 & 2017 Dec 23 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 0413\(-\)4750 & 2018 May 27 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0744+5628 & 2019 Oct 06 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 0811\(-\)8051 & 2017 Nov 25 & 13/SPAR25 & 1311.756 & 13/SPAR25 & 1311.756 \\ WISE 0812+4021 & 2017 Nov 05 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 0857+5604 & 2018 Nov 09 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 0943+3607 & 2018 Feb 11 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 1051\(-\)2138 & 2018 Jun 25 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 1206+8401 & 2018 Apr 01 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 1318\(-\)1758 & 2018 Jun 21 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 1541\(-\)2250 & 2018 Feb 17 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 1639\(-\)6847 & 2019 Mar 11 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 1738+2732 & 2017 Nov 07 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 2019\(-\)1148 & 2017 Oct 26 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 2056+1459 & 2018 Sep 12 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ WISE 2102\(-\)4429 & 2018 Apr 15 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 2212\(-\)6931 & 2018 Aug 27 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 2255\(-\)3118 & 2017 Oct 25 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 2313\(-\)8037 & 2018 Mar 25 & 14/SPAR25 & 1311.756 & 14/SPAR25 & 1311.756 \\ WISE 2325\(-\)4105 & 2018 May 30 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 2332\(-\)4325 & 2017 Oct 25 & 13/SPAR25 & 1211.754 & 13/SPAR25 & 1211.754 \\ WISE 2354+0240 & 2018 Jun 18 & 11/STEP50 & 1196.929 & 11/STEP50 & 1196.929 \\ \hline \end{tabular} \end{table} Table 2: Summary of _HST_ observations from GO program 15201. STScI webpages for WFC3/IR2. The adopted zero-points to transform from instrumental magnitudes into calibrated magnitudes are 23.55 and 23.25 for the F127M and F139M filters, respectively. Footnote 2: [https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration](https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration) ### Artificial Star Tests Artificial star tests (ASTs) have a major role in most imaging investigations, enabling assessment of the accuracy of input and output photometry, and determining potential systematic bias functions in positions or stellar magnitudes. 
For our ASTs, we added to individual images artificial stars as described in Anderson et al. (2008), using our knowledge of PSFs, photometric, and astrometric transformations described in the previous section. Each artificial star was randomly attributed a position, and F127M and F139M fluxes based on adopted distributions described below, and injected accordingly in the respective datasets in each band. The software adds and measures the artificial stars one at a time, therefore not creating a fake overcrowding. For each target, we added \(4\times 10^{5}\) artificial stars uniformly distributed across the full images, and with flat distributions in the colour-magnitude plane using instrumental magnitudes randomly drawn between \(-5\) and \(+6\) in both F127M and F139M (corresponding to 18.55-29.55 in \(m_{\rm F127M}\) and 18.25-29.25 in \(m_{\rm F139M}\)). By comparing the positions and magnitudes of the injected and retrieved stars, the ASTs can be used to select adopted thresholds for the search of companions (Section 4.1), as well as to determine the completeness of the survey (Section 4.2). We explored the input and recovered positions and magnitudes of the artificial stars, together with the various measured quality factors (QFIT, RADXS), in order to determine the thresholds to be used in our source selections. We started by considering as robustly detected all sources with consistent detections in at least 3 of the 4 images in a given filter, and with a measured flux larger than the local rmsSKY value. All other detections were discarded for being dubious, below the thresholds adopted for a trustworthy detection. We then considered to be well recovered the artificial stars with a retrieved position within 1 pixel from the inserted one in both the X and Y directions, and with a measured magnitude \(<-2.5\times\log_{10}(2)\) from the injected one in a given filter. All sources not satisfying these criteria were counted as being poorly measured. From this, we examined the properties of well and poorly recovered sources in the obtained ASTs catalogues, to define criteria that would enable us to select only well-measured sources in the original datasets, while also excluding as few as possible. After exploring the numerous variables available and various combinations of the parameter space for the well and poorly-measured sources, we arrived at the following sequence: * ratio of flux from contaminating neighbours to that of the measure source "o" \(<4\) * quality of fit QFIT \(>0.9\) * \(-0.35<\) RADXS \(<0.35\) Figures 1 and Figure 2 show the performance reached by applying these criteria in the source selection process, for the example target WISE 1639\(-\)6847, chosen because the same datasets had already been extensively studied by our team in Bedin & Fontanive 2020 and Fontanive et al. 2021. In Figure 1, we show the distributions in measured F127M and F139M instrumental magnitudes of the well (blue) and poorly (red) measured artificial stars, before (dotted histograms) and after (solid histograms) applying the selection criteria. The four panels in Figure 2 show each step of the process, in the F127M-F139M vs. F127M colour-magnitude space. The numbers of well-recovered (blue) and badly-measured (red) sources remaining after each step are written in the top right corner on each panel. 
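These thresholds lend themselves to a compact, vectorised implementation. The sketch below illustrates how the detection and quality cuts above, and the definition of a well-recovered artificial star, could be applied to a catalogue of AST measurements in a single filter; the structured-array field names are hypothetical placeholders and do not reproduce the actual KS2 output format.

```python
import numpy as np

# Hypothetical catalogue of artificial-star measurements in one filter.
# Field names are illustrative only; they are not the actual KS2 columns.
ast = np.zeros(5, dtype=[("n_detections", int), ("flux", float), ("rms_sky", float),
                         ("o", float), ("qfit", float), ("radxs", float),
                         ("x_in", float), ("y_in", float), ("x_out", float), ("y_out", float),
                         ("mag_in", float), ("mag_out", float)])

def robust_detection(cat):
    """Initial cut: detected in at least 3 of the 4 exposures and above the local sky noise."""
    return (cat["n_detections"] >= 3) & (cat["flux"] > cat["rms_sky"])

def quality_cuts(cat):
    """Final selection thresholds adopted in the text."""
    return (cat["o"] < 4.0) & (cat["qfit"] > 0.9) & (np.abs(cat["radxs"]) < 0.35)

def well_recovered(cat):
    """A star counts as well recovered if its position is retrieved to within 1 pixel in
    both axes and its magnitude to within 2.5*log10(2) mag (a factor of 2 in flux)."""
    dpos_ok = (np.abs(cat["x_out"] - cat["x_in"]) < 1.0) & (np.abs(cat["y_out"] - cat["y_in"]) < 1.0)
    dmag_ok = np.abs(cat["mag_out"] - cat["mag_in"]) < 2.5 * np.log10(2.0)
    return dpos_ok & dmag_ok

selected = robust_detection(ast) & quality_cuts(ast)
recovery_rate = np.mean(well_recovered(ast[selected])) if selected.any() else 0.0
```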
However, only artificial stars detected in both filters can be included in the plot, and sources were plotted in blue when they were found to have been successfully recovered in _both_ bands, and in red when the retrieved magnitudes or positions were too deviant from the injected ones in _at least one_ bandpass. This means that the plots show a larger number of poorly-recovered sources relative to the well-measured ones, compared to the same fraction within the separate samples from each band. We therefore also indicate in each panel the numbers of well and poorly measured artificial stars in the F127M and F139M data independently, which, for the first and last panels, correspond to the respective numbers of sources in the histograms from Figures 1. In the first panel, the full set of injected artificial stars is also shown, with sources that were not detected at all, or did not meet the initial selection for a robust detection, shown in grey (plotted using the injected magnitudes for lack of recovered values in many cases). Across all targets, the final criteria used provide a recoverability rate for well-measured sources of \(\sim\)80-85% in both filters, over the magnitude ranges explored in the ASTs. Most of the lost sources are at the faint end of the robustly-detected subset, at instrumental magnitudes \(>\)1-2, with \(>\)90% of well-measured sources brighter than \(m_{\rm instr.}\) of \(+1\), and \(>\)99% of those brighter than \(m_{\rm instr.}\) of \(-1\), retained in either filter after applying the above criteria. These effects can be rigorously quantified and accounted for with a completeness analysis, as performed in Section 4.2. Most importantly, the final fraction of badly-measured sources that were not excluded with the adopted criteria was found to be \(<\)4-5%, and again to mostly be for sources fainter than \(m_{\rm instr.}\) of 1 in a given filter, with similar results achieved for all targets. We therefore consider this chain of selection thresholds to successfully achieve the sought goals of identifying most well-measured stars, while rejecting the vast majority of badly-recovered sources. ## 3 Data Analyses of the Brown Dwarf Primaries ### Photometric Properties For each system, the primary brown dwarf target was identified in the list of the detected sources from the procedures described in Section 2.4, and its Vega magnitudes extracted from the resulting instrument photometry and corresponding zeropoints (ZP\({}_{\rm F127M}\) = 23.55, ZP\({}_{\rm F139M}\) = 23.25). As expected from their late spectral types, all probed objects showed a deep drop in the F139M water-band filter, due to the 1.4-\(\mu\)m water absorption feature seen in brown dwarfs. When the absorption was so strong that the target dropped below the detection level in the F139M images, an upper limit on the F139M flux of the brown dwarf was derived as a 5-\(\sigma\) threshold above the local sky background, using the local noise values derived in the procedures from Section 2.4. The measured _HST_ photometry for all targets is reported in Table 3, along with _Spitzer_ photometry from Kirkpatrick et al. (2019). Of the 33 targeted brown dwarfs, 16 T8-T9 objects were successfully recovered in the F139M filter, thanks to the long total exposure times acquired (\(\sim\)1200-1300 s compared to \(\sim\)700 s in SNAP 12873). 
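The zero-point calibration and the treatment of the F139M dropouts described above reduce to a few lines of arithmetic. The following is a minimal sketch, assuming a measured instrumental magnitude and, for dropouts, a simple noise model (local sky rms scaled over an assumed source footprint); the variable names and the noise model are illustrative simplifications rather than the exact procedure applied to the KS2 outputs.

```python
import math

ZP = {"F127M": 23.55, "F139M": 23.25}  # adopted Vega zero-points (this work)

def vega_mag(instrumental_mag, filt):
    """Instrumental magnitudes (-2.5*log10 of counts) to calibrated Vega magnitudes."""
    return instrumental_mag + ZP[filt]

def upper_limit_5sigma(sky_rms_per_pixel, n_pixels, filt):
    """5-sigma limiting Vega magnitude for an undetected source, from the local sky
    noise summed over an assumed source footprint of n_pixels pixels."""
    flux_limit = 5.0 * sky_rms_per_pixel * math.sqrt(n_pixels)
    return -2.5 * math.log10(flux_limit) + ZP[filt]

# Example: a target detected in F127M but dropping out of the water band.
m_f127m = vega_mag(-6.7, "F127M")                   # ~16.9 mag
m_f139m_lim = upper_limit_5sigma(2.0, 25, "F139M")  # illustrative numbers only
print(f"F127M = {m_f127m:.2f}, F139M > {m_f139m_lim:.2f}")
```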
These detections provide measurements of F127M\(-\)F139M colours for the first time for such late type objects, allowing an extension to the known trend observed with spectral type (Aberasturi et al., 2014; Fontanive et al., 2018) to later types. Only synthetic colours or minimum magnitude differences (from null detections in the F139M band) had been explored to date beyond mid-T spectral types. Figure 3 shows the _HST_ photometry as a function of spectral type for our targets with (blue stars) and without (magenta triangles) F139M detections. The newly measured colours for T8-T9 dwarfs are in good agreement with the extrapolation of the polynomial fit derived in Fontanive et al. (2018) for spectral types between T0 and T8, confirming that the water absorption band at 1.4 \(\mu\)m continues to deepen at the end of the T spectral sequence, with magnitude differences F127M\(-\)F139M of \(-\)5 to \(-\)6.5 mag for these systems, compared to values of about \(-\)3 mag for mid-Ts and around \(-\)1 mag for late Ls and early Ts (Aberasturi et al., 2014; Fontanive et al., 2018). For comparison, we also computed synthetic colours (grey circles) for sources from the SpeX Prism Library Analysis Toolkit (SPLAT; Burgasser & Splat Development Team, 2017), making use of the built-in func Figure 1: Magnitudes distributions in F127M (left) and F139M (right) of all injected stars with consistent detections in \(\geq\)3/4 images and measured fluxes above the local sky value (dotted lines), compared to those left after applying the selection criteria described in the text (solid line). The blue and red histograms represent artificial stars that were well and poorly measured by the algorithms, respectively, showing that our adopted selections successfully exclude the vast majority of poorly-measured sources. This example is shown for the analyses made on the data for WISE 1639\(-\)6847. Figure 2: Injected artificial stars in the F127M–F139M colour-magnitude diagram. Each panel shows, from the left to the right, the well (blue) and poorly (red) measured stars that are left after each step of the selection process established from our analyses. The first panel shows the full set of simulated stars, where sources that did not satisfy the initial criteria for a detection are plotted in grey. tions in SPLAT to derive synthetic photometry from SpeX spectra. Synthetic colours for late-Ts show a much wider spread than our obtained measurements, most likely due to the noisy and low signal-to-noise nature of the available SpeX spectra for the faintest objects, in particular around the 1.4-\(\mu\)m water absorption band. Figure 4 shows colour-magnitude diagrams of the observed targets, in terms of absolute F127M magnitudes against _HST_ (left), _HST\(-\)Spitzer_ (middle) and _Spitzer_ (right) colours. The parallax measurements from Table 1 were used to convert apparent F127M magnitude into absolute photometry. Clear trends are observed in all cases with absolute brightness and spectral type, shown in the colourbar. Indeed, although no object \(>\)T9 was recovered in F139M (triangle symbols), the left panel demonstrates a continuous trend to bluer colours with fainter absolute F127M magnitude for TS to T9 objects, as a result of the continuously deeper absorption level with colder effective temperatures in the 1.4-\(\mu\)m water-band feature (Figure 3). 
Similarly, the middle panel demonstrates a very tight relationship, with a continuous shift towards redder colours with dimmer F127M absolute flux and later spectral type (and thus cooler effective temperature). The right panel also exhibits a steady shift towards redder \(ch1-ch2\) colours for fainter and later type objects, though not as tightly defined as for the _HST\(-\)Spitzer_ colours. The \(ch1-ch2\) colours are also poorly matched by the models for given absolute F127M fluxes, despite the F127M-\(ch2\) relationship agreeing very well with theoretical predictions. This is a known effect, sometimes referred to as the "4-micron problem" (Leggett et al., 2019), in which a deep CH\({}_{4}\) ab \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Target Name & WFC3/F127M & WFC3/F139M & IRAC/\(ch1\) & IRAC/\(ch2\) & \(T_{\rm eff}\) & Luminosity & Mass \\ & (mag) & (mag) & (mag) & (mag) & (K) & \(\log(L/L_{\odot})\) & (M\({}_{\rm Jup}\)) \\ \hline WISE 0005+3737 & \(16.89\pm 0.12\) & \(23.26\pm 0.13\) & \(15.431\pm 0.024\) & \(13.282\pm 0.018\) & \(557\pm 25\) & \(-6.13\pm 0.05\) & \(27\pm 7\) \\ WISE 0015\(-\)4615 & \(17.17\pm 0.01\) & \(23.27\pm 0.47\) & \(16.096\pm 0.031\) & \(14.228\pm 0.019\) & \(626\pm 29\) & \(-5.95\pm 0.05\) & \(32\pm 8\) \\ WISE 0032\(-\)4946 & \(17.88\pm 0.01\) & \(24.44\pm 0.70\) & \(16.932\pm 0.049\) & \(14.929\pm 0.021\) & \(584\pm 34\) & \(-6.06\pm 0.08\) & \(29\pm 8\) \\ WISE 0038+2758 & \(17.93\pm 0.01\) & \(24.49\pm 0.12\) & \(16.454\pm 0.037\) & \(14.410\pm 0.020\) & \(514\pm 19\) & \(-6.26\pm 0.04\) & \(24\pm 7\) \\ WISE 0049+2151 & \(15.73\pm 0.05\) & \(21.69\pm 0.10\) & \(15.009\pm 0.022\) & \(13.043\pm 0.017\) & \(611\pm 42\) & \(-6.00\pm 0.10\) & \(31\pm 8\) \\ CFDBS 0133+0231 & \(17.56\pm 0.01\) & \(23.15\pm 0.40\) & \(16.789\pm 0.044\) & \(15.053\pm 0.023\) & \(596\pm 92\) & \(-6.05\pm 0.24\) & \(30\pm 10\) \\ WISE 0325\(-\)3854 & \(18.19\pm 0.03\) & \(>24.89\) & \(17.120\pm 0.053\) & \(14.984\pm 0.021\) & \(565\pm 31\) & \(-6.11\pm 0.07\) & \(28\pm 7\) \\ WISE 0325\(-\)5044 & \(18.39\pm 0.01\) & \(24.24\pm 0.53\) & \(17.746\pm 0.086\) & \(15.696\pm 0.025\) & \(662\pm 48\) & \(-5.87\pm 0.10\) & \(34\pm 9\) \\ WISE 0350\(-\)5658 & \(21.40\pm 0.04\) & \(>25.03\) & \(17.936\pm 0.096\) & \(14.688\pm 0.020\) & \(320\pm 12\) & \(-7.00\pm 0.06\) & \(10\pm 3\) \\ WISE 0359\(-\)5401 & \(20.95\pm 0.01\) & \(>24.93\) & \(17.553\pm 0.072\) & \(15.326\pm 0.023\) & \(409\pm 22\) & \(-6.62\pm 0.08\) & \(17\pm 5\) \\ WISE 0404\(-\)6420 & \(19.01\pm 0.01\) & \(>24.91\) & \(17.633\pm 0.082\) & \(15.418\pm 0.022\) & \(580\pm 36\) & \(-6.07\pm 0.08\) & \(29\pm 8\) \\ WISE 0410+1502 & \(18.51\pm 0.01\) & \(>24.81\) & \(16.636\pm 0.042\) & \(14.166\pm 0.019\) & \(400\pm 21\) & \(-6.65\pm 0.08\) & \(16\pm 5\) \\ WISE 0413\(-\)4750 & \(19.20\pm 0.05\) & \(>24.81\) & \(17.802\pm 0.086\) & \(15.487\pm 0.024\) & \(527\pm 47\) & \(-6.23\pm 0.13\) & \(25\pm 7\) \\ WISE 0744+5628 & \(16.74\pm 0.01\) & \(22.28\pm 0.02\) & \(16.267\pm 0.034\) & \(14.552\pm 0.020\) & \(676\pm 70\) & \(-5.84\pm 0.15\) & \(35\pm 10\) \\ WISE 0811\(-\)8051 & \(18.82\pm 0.03\) & \(>24.95\) & \(16.817\pm 0.045\) & \(14.402\pm 0.019\) & \(459\pm 25\) & \(-6.44\pm 0.07\) & \(20\pm 6\) \\ WISE 0812+4021 & \(17.38\pm 0.01\) & \(22.72\pm 0.09\) & \(16.932\pm 0.048\) & \(15.299\pm 0.024\) & \(848\pm 63\) & \(-5.48\pm 0.10\) & \(46\pm 11\) \\ WISE 0857+5604 & \(16.78\pm 0.02\) & \(22.98\pm 0.13\) & \(16.018\pm 0.030\) & \(14.134\pm 0.019\) & \(611\pm 45\) & \(-5.99\pm 0.10\) & \(31\pm 8\) \\ WISE 
0943+3607 & \(19.10\pm 0.01\) & \(>24.87\) & \(16.746\pm 0.043\) & \(14.284\pm 0.019\) & \(464\pm 31\) & \(-6.42\pm 0.10\) & \(21\pm 6\) \\ WISE 1051\(-\)2138 & \(18.01\pm 0.01\) & \(24.09\pm 0.31\) & \(16.419\pm 0.036\) & \(14.598\pm 0.020\) & \(596\pm 31\) & \(-6.03\pm 0.06\) & \(30\pm 8\) \\ WISE 1206+8401 & \(19.67\pm 0.02\) & \(>24.98\) & \(17.339\pm 0.061\) & \(15.220\pm 0.022\) & \(414\pm 14\) & \(-6.60\pm 0.05\) & \(17\pm 5\) \\ WISE 1318\(-\)1758 & \(17.70\pm 0.01\) & \(23.74\pm 0.22\) & \(16.790\pm 0.045\) & \(14.728\pm 0.020\) & \(600\pm 30\) & \(-6.02\pm 0.06\) & \ sorption feature in the 3-4 \(\mu\)m region is poorly reproduced by current models, which are consistently too faint in the L-band (and equivalent 3.6-\(\mu\)m _Spitzer/\(ch1\)_ or _WISE_/\(W1\) bands) for objects cooler than \(\sim\)700 K (see also Morley et al., 2018; Phillips et al., 2020; Leggett et al., 2021). ### Primary Masses We used the _HST_ photometric data from this paper, together with the _Spitzer_ photometry of our targets from Kirkpatrick et al. (2019) and summarised in Table 3, to estimate primary masses for our observed sample. Only the F127M fluxes were used from our _HST_ measurements, since half of our targets have no F139M detection. Furthermore, as available ground-based near-infrared photometry for our probed systems is highly heterogeneous in quantity, quality, spectral coverage, and filter system, we did not include additional sparse photometric data in this analysis. We adopted the ATMO 2020 models3(Phillips et al., 2020), a new set of coupled atmosphere and evolutionary grids for T and Y dwarfs with improved modelling and updated line lists. We used the predicted cooling tracks in the specific _HST_/F127M and _Spitzer/\(ch1\)_ and _ch2_ filters to estimate luminosities, effective temperatures, and age-dependent masses. Only the equilibrium chemistry (CEQ) models are considered here, although we achieved similar results on final inferred masses from the non-equilibrium models (CNEQ) with weak (\(<\)2 M\({}_{\rm Jup}\) differences) and strong (\(<\)5 M\({}_{\rm Jup}\) differences) vertical mixing (see Phillips et al., 2020 for discussions of the various models and their fit to observations at different wavelengths), corresponding to \(<\)0.5-\(\sigma\) disparities in all cases. Tracks from the CEQ sets of models are overplotted in Figure 4 at ages of 1, 5 and 10 Gyr. Footnote 3: [http://perso.ens-lyon.fr/isabelle.baraffe/ATMO2020/](http://perso.ens-lyon.fr/isabelle.baraffe/ATMO2020/). Models in the _HST_ filters were provided by M. Phillips via private communication. Absolute magnitudes in the F127M, \(ch1\) and \(ch2\) bandpasses were computed for all targets from the apparent photometry, and adopting trigonometric distances from the parallax measurements in Table 1. We used a Monte-Carlo approach to propagate uncertainties, randomly drawing \(10^{5}\) apparent magnitudes and parallaxes from Gaussian distributions centred on the measured values, with standard deviations set to the measurement uncertainties. Age estimates for isolated field brown dwarfs are particularly challenging to obtain, and the ages of our probed systems are unknown. We therefore assumed a similar distribution in age as in the solar neighbourhood (Caloi et al., 1999), and adopted a Gaussian distribution of ages centred around \(5\pm 3\) Gyr, as was done in Fontanive et al. (2018). 
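A compact sketch of this Monte-Carlo propagation is given below: apparent magnitudes, parallaxes, and ages are drawn from Gaussians, converted to absolute magnitudes, and inverted through an evolutionary grid. The interpolation into the actual ATMO 2020 tracks, described in the following paragraph, is stood in for here by an invented toy relation, and all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000   # the text uses 10**5 draws; reduced here to keep the example quick

# Measured quantities for one illustrative target (numbers are placeholders).
m_app, m_err = 17.17, 0.02          # apparent F127M magnitude
plx_mas, plx_err = 75.0, 2.5        # parallax [mas]

mags = rng.normal(m_app, m_err, N)
plx = rng.normal(plx_mas, plx_err, N)
ages = np.clip(rng.normal(5.0, 3.0, N), 0.2, 10.0)   # Gyr, clipped to a model-friendly range

# Absolute magnitude from the parallax: M = m + 5*log10(parallax["]) + 5.
abs_mag = mags + 5.0 * np.log10(plx / 1000.0) + 5.0

# Toy stand-in for an evolutionary grid (brighter for higher mass and younger age);
# a real application would interpolate the tabulated ATMO 2020 tracks instead.
grid_masses = np.linspace(5.0, 75.0, 60)             # M_Jup
def grid_absmag(mass, age_gyr):
    return 22.0 - 0.12 * mass - 0.8 * np.log10(5.0 / age_gyr)

masses = np.empty(N)
for i in range(N):
    track = grid_absmag(grid_masses, ages[i])        # decreasing with mass
    masses[i] = np.interp(abs_mag[i], track[::-1], grid_masses[::-1])

print(f"mass = {masses.mean():.1f} +/- {masses.std():.1f} M_Jup "
      f"(Teff and log L would be interpolated from the same draws)")
```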
For each set of drawn parameters, the obtained absolute magnitudes were then interpolated in the ATMO 2020 evolutionary tracks at the generated system ages to infer corresponding luminosities, effective temperatures, and masses, thus providing 3 distributions for each inferred parameter, from the F127M, \(ch1\), and \(ch2\) data, respectively. Figure 5 shows comparisons of these individual distributions as the obtained F127M values against the \(ch1\) (blue) and \(ch2\) (magenta) values, plotted using the mean and standard deviation of each distribution from each target as the point value and its errorbar. The solid black lines mark the 1:1 relations. In general, we found the F127M and \(ch2\) results to be rather consistent with each other, with mean off Figure 3: _HST_/WFC3 F127M\(-\)F139M colours as a function of brown dwarf spectral type. Our new photometric measurements are displayed in the blue star symbols for targets recovered in the F139M band, and in the magenta triangles for water-band dropouts, for which only upper limits on the observed colours are available. The grey circles show synthetic colours for T-type sources in SPLAT. The solid black line corresponds to the polynomial fit from Fontanive et al. (2018). sets across the sample of 2.0 and 1.4 \(\sigma\) in the obtained luminosities and effective temperatures, respectively, and a particularly good agreement for the faintest targets (\(\log(L/L_{\odot})\)\(<-6\); \(T_{\rm eff}<700\) K; mass \(<35\) M\({}_{\rm Jup}\)). In comparison, parameters inferred from the \(ch1\) photometry typically resulted in brighter luminosities, warmer effective temperatures, and larger masses, with mean discrepancies between the F127M and \(ch1\) results around the 4-\(\sigma\) level for both luminosities and effective temperatures. Offsets in the resulting masses were far less significant as a result of the age-dependence of the brown dwarf luminosity-mass relationship, responsible for the much wider uncertainties in the right-most panel. In contrast, luminosity and temperature are essentially age-independent properties for a given absolute magnitude, and the final errorbars in these parameters are dominated by the uncertainties in the apparent magnitudes and parallaxes measurements. The inferred masses based on _Spitzer_ photometry were found to agree with the F127M-derived masses at the 0.6-\(\sigma\) level on average for \(ch1\) estimates (1.2 \(\sigma\) in the most discrepant case), and at 0.3 \(\sigma\) on average for \(ch2\) (1.0 \(\sigma\) at most). The larger discrepancies seen with the \(ch1\) data are consistent with the known issue of models in this band (see Section 3.1; Phillips et al., 2020). Based on the recognised problems from the models in the Figure 4: Colour-magnitude diagram of the 33 brown dwarf primaries, showing the _HST_/WFC3 F127M absolute magnitudes versus their F127M\(-\)139M _HST_ colours (left), _HST\(-\)Spitzer_ F127M\(-\)ch2 colours (centre), and infrared \(ch1-ch2\)_Spitzer_ colours (right). The colourbar indicates the spectral types of the objects. In the left panel, star symbols indicate sources successfully retrieved in the F139M band, thus plotting true colours. Triangles correspond to sources undetected in F139M for which the resulting colour value is therefore an upper limit (i.e., the true positions on the x-axis are to the left of the plotted points). 
In all three panels, the black lines correspond to the predicted colour-magnitude relationships from the ATMO 2020 models with equilibrium chemistry at discrete ages of 1 Gyr (dotted line), 5 Gyr (dashed line) and 10 Gyr (solid line). The poor fit of the models to the \(ch1-ch2\) colours in the last panel is due to known issues of models around 4 \(\mu\)m, in \(ch1\) band. Figure 5: Model-derived parameters for our studied targets using the ATMO 2020 models (Phillips et al., 2020), comparing results obtained for the luminosity (left), effective temperature (centre) and mass (right) with the _HST_ F127M photometry to values derived from the _Spitzer_\(ch1\) (blue) and \(ch2\) (magenta) photometry. The solid black lines in each panel show the 1:1 relationships. Results from the F127M and \(ch2\) data are in good agreement, while values based on the \(ch1\) photometry comparatively overestimate all three parameters. These differences become less significant in the right panel due to the strong age dependence of the derived masses, responsible for the much larger uncertainties. 3-4 \(\mu\)m region, only the \(ch2\) photometry was considered from _Spitzer_ to compute final values in each model-derived parameter. Final values of luminosity, effective temperature and mass are given in Table 3 for each target in the studied sample. These values and associated uncertainties for each system correspond to the mean and standard deviation from the F127M and \(ch2\) results simultaneously, i.e., combining the samples from the resulting distributions in each filter. Our estimated effective temperatures were found to be consistent with those from Kirkpatrick et al. (2019), with \(<\)30 K differences (or agreeing within \(<\)1 \(\sigma\)) for the 31 targets in the sample (out of 33) that have estimated temperatures in Kirkpatrick et al. (2019). Offsets in the resulting masses were even less significant as a result of the large uncertainties induced by the strong dependence on age. ## 4 Data Analyses for Multiplicity ### Search for Companions #### 4.1.1 Source Selection To search for resolved companions around our brown dwarf targets, we applied the source selection criteria defined in Section 2.5, from the results of the performed Artificial Star Tests. This provided us with catalogues of real sources from the F127M datasets, for which we can trust the reliability of measured fluxes and positions with a high confidence level. Bonafide cold companions may be of two types: those recovered in the F139M filter, and those dropping out in the water-band. Any object from the first category must show a comparatively deep, or yet deeper, absorption level to the primary brown dwarfs, and would be easily identifiable in the colour-magnitude space, just like the primaries themselves (see Figure 6). Given that half of the science targets themselves were not detected in F139M, only companions as bright as our brighter targets would be recovered in F139M as well, since fainter objects with the colours of a cold brown dwarf rapidly fall below the detection level of the F139M bandpass, as is also the case for our fainter primaries. For such candidates, we therefore impose the criterion that sources must be fainter than the considered primary in F127M, with a threshold on the colour of the source of F127M-F139M \(<\) F127M-F139M\(|_{\rm BD}\)\(-\) 2, i.e., at most 2 magnitudes redder than the primary itself when the primary is retrieved in F139M. 
Any dimmer potential companion would have to be a water-band dropout, detected in F127M but not in F139M, in order to be consistent with a true low-mass and cold substellar object. Such sources can be distinguished in the colour-magnitude diagrams (CMDs) when sufficiently bright, as outliers from the core of detected sources in the CMD, as is the case for the primaries in the sample. However, at fainter magnitudes, the minimum magnitude difference that may be measured for a dropout candidate (i.e. the upper limit on its F127M-F139M colour) approaches the blue end of the spread in F127M-F139M colours from other faint sources in the field, and thus cannot be confidently attributed to a water-band dropout candidate companion. Therefore, we imposed the additional constraints that selected candidates must be 1.5 magnitudes brighter in F127M than the local background magnitude in F139M. This ensured that retained candidates have maximum F127M-F139M colours bluer than the bulk of faint sources in the images, as it was found that the scatter in F127M-F139M colours for well-measured sources below that threshold approached the maximum colours measurable for a water-band dropout candidate (see Figure 6). Finally, any F139M detection for a source fainter than the primaries would make it highly improbable that that source is a true cold companion. We therefore also relaxed the constraints placed on the F139M source selections, since objects at the faint end may be missing from the F139M catalogues of well-measured sources (see Section 2.5), thus appearing as water-band dropouts, when in fact they should be disregarded as potential candidates for not being sufficiently blue. We hence removed the criteria placed on the F139M RADXS and QFIT parameters, and considered to also be recovered in F139M any source with detections in 3 out of the 4 individual F139M images, with a flux above the local rmsSKY. Figure 6 shows the positions in the _HST_ colour-magnitude diagram of all well-detected F127M sources satisfying the above criteria, across all datasets from the survey, marked with different colours and symbols based on whether or not they were also detected in F139M. A similar figure was made and inspected for each target individually, but we display the combined sample of sources from all images in the same plot for simplicity, providing a summary of all results in the same context. The brown dwarf primaries are shown as purple stars for those detected in the F139M filter, and plotted with purple triangles at the locations of their maximum F127M-F139M colours for those dropping out in F139M (their true positions are thus somewhere to the left of the plotted locations). The dotted black line shows where such dropout sources are expected to fall when plotted based on upper limits on colours, adopting an average F139M background magnitude value (5-\(\sigma\)). With a comparable sensitivity achieved across all datasets, all primaries undetected in F139M correctly fall along this line, and potential candidates dropping out in F139M are expected to do so as well. No sources other than the brown dwarf targets themselves were found with very blue F127M-F139M colours, among those detected in F139M. Figure 6 clearly shows the absence of such sources among those recovered in both filters, with a wide horizontal gap throughout all images between the primaries with F139M detections (purple stars) and other similarly-bright sources in F127M. 
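Taken together, the two companion categories described above amount to a handful of brightness and colour cuts per source. One way they could be coded up is sketched below; the catalogue field names are hypothetical, and the colour tolerance follows the verbal description in the text (candidates at most 2 magnitudes redder than the primary), noting that the inline inequality above is written with the opposite sign.

```python
import numpy as np

def select_candidates(cat, primary_f127m, primary_colour, primary_detected_f139m):
    """Flag catalogue sources consistent with a cold, water-band-absorbed companion.

    Hypothetical catalogue fields:
      f127m          - calibrated F127M magnitude
      f139m          - calibrated F139M magnitude (NaN if not detected)
      f139m_bkg_5sig - local 5-sigma F139M background magnitude
    """
    fainter_than_primary = cat["f127m"] > primary_f127m
    detected_f139m = ~np.isnan(cat["f139m"])
    colour = np.where(detected_f139m, cat["f127m"] - cat["f139m"], np.inf)

    # Category 1: detected in F139M with an absorption depth comparable to the primary,
    # allowing a tolerance of 2 mag redward of the primary's colour.
    if primary_detected_f139m:
        deep_absorption = detected_f139m & (colour < primary_colour + 2.0)
    else:
        deep_absorption = np.zeros(len(cat), dtype=bool)

    # Category 2: water-band dropouts bright enough in F127M (1.5 mag above the local
    # F139M background) for the colour upper limit to be meaningfully blue.
    dropout = (~detected_f139m) & (cat["f127m"] < cat["f139m_bkg_5sig"] - 1.5)

    return fainter_than_primary & (deep_absorption | dropout)

# Tiny illustration with three made-up sources around one primary.
cat = np.array([(20.5, 22.0, 24.8), (22.8, np.nan, 24.8), (23.9, np.nan, 24.8)],
               dtype=[("f127m", float), ("f139m", float), ("f139m_bkg_5sig", float)])
print(select_candidates(cat, primary_f127m=17.2, primary_colour=-6.1,
                        primary_detected_f139m=True))   # -> [False  True False]
```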
No such blue candidate was identified upon visual examinations of the reduced data either, while the science targets were easily spotted in the final images. The handful of red scatter points indicate the few sources that did not satisfy the cuts for what was determined to be well-measured in F139M, but were nonetheless detected in that band, implying that they could be ruled out as possible dropout companions. Each of these sources was inspected individually and their F139M detections were visually confirmed in all cases, although determined to be poorly measured by the algorithms. The three yellow triangles are the only sources that remained as potential candidates, with faint F127M magnitudes and no F139M detections, and are discussed below. Finally, we found that bright but poorly measured F127M sources with rather blue colours (black dots above the dotted line and to the left of the core sample of detected sources, in blue) were mostly found to be non-real sources in the PSFs of bright stars, or to be galaxies, confirming that both categories of objects were successfully rejected by our source selections. #### 4.1.2 Rejection of Candidate Companions The 3 identified candidates with significant levels of water-band absorption were found around the primary targets WISE 0015\(-\)4615, WISE 1738+2732, and WISE 2019\(-\)1148. Table 4 summarises the relative astrometry and photometry of each candidate. All candidates were found at very wide angular separations (\(\sim\)25-35\({}^{\prime\prime}\)), which would correspond to extremely large projected separations of 200-450 au if real. Archival _HST_ data is available for two targets (WISE 0015\(-\)4615 and WISE 1738+2732), with images from programs GO 12972 (PI: Gelino), GO 12330 (PI: Kirkpatrick) and GO 16229 (PI: Fontanive) providing time baselines of at least 5 years to observations from our own \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Target & Sep. & Pos. Angle & F127M\({}_{\rm BD}\) & F127M\({}_{\rm cand}\) & F139M\({}_{\rm BD}\) & F139M\({}_{\rm cand}\) & Additional \\ & (arcsec) & (deg) & (mag) & (mag) & (mag) & (mag) & HST data \\ \hline WISE 0015\(-\)4615 & \(30.41\pm 0.02\) & \(275.12\pm 0.1\) & \(17.17\pm 0.01\) & \(23.13\pm 0.08\) & \(23.27\pm 0.47\) & \(>24.92\) & GO 12972 \\ WISE 1738+2732 & \(25.72\pm 0.02\) & \(178.29\pm 0.1\) & \(18.88\pm 0.02\) & \(22.79\pm 0.08\) & \(>24.95\) & \(>24.76\) & GO 12330, GO 16229 \\ WISE 2019\(-\)1148 & \(35.82\pm 0.02\) & \(22.99\pm 0.1\) & \(17.41\pm 0.03\) & \(22.96\pm 0.15\) & \(23.34\pm 0.07\) & \(>24.81\) & — \\ \hline \end{tabular} \end{table} Table 4: Summary of identified candidate companions. For the photometry, candidates are marked as \({}^{\rm\circ}\)cand\({}^{\rm\circ}\), and the corresponding primaries as \({}^{\rm\prime}\)BD\({}^{\rm\circ}\). Figure 6: F127M–F139M colours vs. F127M apparent magnitudes for all sources well detected in F127M over all images of _HST_ program GO 15201, based on the criteria defined in Section 2.5. The black dots correspond to sources that did not satisfy the added selection of being 1.5 mag above the local F139M background value. Among sources satisfying this criterion, those well-measured in F139M are marked in blue, and those poorly measured in that band are shown in red. The brown dwarf targets are plotted in purple, distinguishing those recovered in F139M (star symbols) from those dropping out (triangles). The yellow triangles indicate the 3 sources identified as water-band dropout candidates. program. 
Adopting astrometric solutions for the primaries from Kirkpatrick et al. (2021), comparisons of measured astrometry at these additional epochs to that from our program demonstrated both candidates to be background sources, as shown in Figures 7 and 8. The last candidate identified from its water absorption levels has no other observational epoch in the _HST_ archive and could not be astrometrically confirmed or refuted in the same manner. Based on the _HST_ work from Fontanive et al. (2018) (see Figure 3), the observed F127M\(-\)F139M maximum colours of the identified source suggest that the candidate is not compatible with a background main sequence star or early-type M-L brown dwarf -which do not show such deep water absorption levels- and would instead have a \(\geq\)T2-T1 spectral type if a star-like object in the galaxy. Ultracool brown dwarfs like our primaries and any potential candidate companion to our targets are far too cold and red to be detected at visible wavelengths by Gaia, and even too faint at near-infrared wavelengths to be detected in 2MASS. WISE 2019\(-\)1148 had been observed from the ground for spectro-photometric characterisation (Palomar/TSpec; Mace, 2014), but the field of view of the acquired observations do not cover the position of the extremely wide candidate. We did not find neither the primary nor the candidate in CFHT/MegaCam and SDSS archival images either. While detecting the source at optical wavelengths would have ruled out the candidate as real a ultracool companion, non-detections are inconclusive. We were also unable to detect the identified source near WISE 2019\(-\)1148 in public WISE data, both assuming comoving and stationary background positions relative to the primary, and the source is not listed in the CatWISE and UnWISE catalogs (Schlafly et al., 2019; Eisenhardt et al., 2020). However, its absolute F127M magnitude of F127M = \(22.47\pm 0.17\), if at the distance of the primary, would place it at the faint end of currently-known Y1\(-\)Y2 dwarfs with existing F127M _HST_ data, like the much closer WISE 0350\(-\)5658 (6 pc) and WISE 2354\(+\)0240 (8 pc) (Kirkpatrick et al., 2012; Schneider et al., 2015; Kirkpatrick et al., 2021). Given the farther distance to WISE 2019\(-\)1148 (12.5 pc), similar colours as these objects for the candidate around WISE 2019\(-\)1148 would translate to an apparent magnitude of \(W2\sim 16-17\) mag, potentially below the 16.5-mag threshold of the deep CatWISE program (Eisenhardt et al., 2020), which led to the discoveries of the faintest-known Y dwarfs to date (Meisner et al., 2020, 20). While non-detections of that source in these IR archives are hence inconclusive regarding the possible bonafide nature of the candidate, existing _Spitzer_ data of the primary (Program 70062, PI: Kirkpatrick) show no signs of infrared detections of the object either, despite a significantly deeper sensitivity in the \(ch2\) channel (down to \(ch2\sim\)17-18 mag; Martin et al., 2018; Kirkpatrick et al., 2019, 2021), where a Y-type dwarf of the expected brightness should be detected at 4.5 \(\mu\)m. We therefore conclude that the source is unlikely to be a real companion and is instead more probably a background source, possibly an extra-galactic contaminant, which have previously been found to be sources of false positives to the water-band approach (e.g., Fontanive et al., 2018). Additional observations will be required to more confidently establish the true nature of the identified source. 
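The background-star test used to reject the first two candidates (Figures 7 and 8) can be illustrated with a short calculation: if the candidate were an unrelated, effectively stationary source, its offset from the moving primary at another epoch differs from the first-epoch offset by minus the primary's displacement. The sketch below implements only the proper-motion term with invented numbers; it is not the astrometric solution of Kirkpatrick et al. (2021), and parallactic motion is ignored.

```python
import numpy as np

def predicted_background_offset(dra0, ddec0, pm_ra, pm_dec, dt_years):
    """Predicted (RA, Dec) offset of a *stationary* background source relative to a
    moving primary, dt_years after the reference epoch.

    dra0, ddec0   : candidate offset from the primary at the reference epoch [arcsec]
    pm_ra, pm_dec : primary proper motion (mu_alpha*, mu_delta) [arcsec/yr]
    """
    # The primary moves; a background source does not, so the relative offset
    # changes by minus the primary's displacement.
    return dra0 - pm_ra * dt_years, ddec0 - pm_dec * dt_years

# Invented example: primary moving ~0.5"/yr, candidate seen ~30" away, 5-yr archival baseline.
dra_pred, ddec_pred = predicted_background_offset(-30.29, 2.71, 0.45, -0.35, -5.0)
dra_obs, ddec_obs = -28.09, 1.02                  # hypothetical archival measurement
mismatch = np.hypot(dra_obs - dra_pred, ddec_obs - ddec_pred)
# A small mismatch means the candidate follows the background track, i.e. it is not comoving.
print(f"offset from background prediction: {mismatch:.2f} arcsec")
```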
All selected candidates were hence refuted, and we conclude that no companion was detected in this survey around the 33 late-T and Y dwarfs observed by our _HST_ program. ### Survey Completeness In order to assess the completeness of our binary companion search, we made use of Artificial Star Tests similar to those Figure 8: Common proper motion analysis of the candidate companion to WISE 1738\(+\)2732. The relative positions of the companion in archival _HST_ data from programs GO 12330 (magenta circle) and GO 16229 (yellow circle) are consistent with the expected position at those epochs of a background source (magenta and yellow crosses), confirming its background nature. Figure 7: Common proper motion analysis of the candidate companion to WISE 0015\(-\)4615. The relative position of the candidate in the F127M images from our program is shown in the blue circle, and the black line shows the expected motion of a stationary background star relative to the primary between the available epochs. The relative position of the companion in archival _HST_ data from program GO 12972 (magenta circle) is consistent with the expected position at that epoch of a background source (magenta cross), confirming its background nature. described in Section 2.5, but with a focus around the primary brown dwarfs and more relevant regions of the spatial and magnitude parameter space. For each target, we added \(2\times 10^{6}\) artificial stars across the full images, split into 4 sets. In all cases, we adopted flat distributions in the colour-magnitude plane, and used uniform radial distribution in position centred around the primary brown dwarf (i.e., giving the same number of sources at all radii, and uniformly distributed in position angle at a given radius). A first set of \(4\times 10^{5}\) sources were added from a uniform radial distribution extending from 20 pixels of the brown dwarf target to the edge of the image, and with instrumental magnitudes drawn from uniform distributions between \(-5\) and \(+6\) in both F127M and F139M (Vega magnitudes of 18.55-29.55 in F127M and 18.25-29.25 F139M). A second set of \(6\times 10^{5}\) artificial stars was then added over the same radial distribution, beyond 20 pixels of the science target, but with instrumental magnitudes taken in the range \(-2\) to \(+2\) for each filter (Vega magnitudes of 21.55-25.55 in F127M and 21.25-25.25 in F139M), in order to further explore the magnitudes over which the completeness abruptly decreases. To complement these 1 million artificial stars away from the primary, we added another 1 million stars on uniform radial distributions between 1 and 20 pixels, so as to investigate in more details the region directly around the brown dwarf (inner \(\sim\)2.5\({}^{\prime\prime}\)): a third test added \(4\times 10^{5}\) stars inside 20 pixels with instrumental F127M and F139M magnitudes uniformly drawn between \(-5\) and \(+6\), and a fourth test of \(6\times 10^{5}\) stars was drawn from instrumental magnitudes between \(-2\) and \(+2\). The source selection for the binary search in Section 4.1 was based entirely on F127M detections, where we placed thresholds to select well-measured F127M sources independently of the various adopted categories of F139M detections (well-measured, poorly-measured but detected, or dropout), that were each considered and looked into separately. We therefore only considered the F127M ASTs to gauge the completeness of our analysis, although the corresponding F139M images were still used as well. 
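A compressed sketch of how the recoverability grid described in the following paragraphs might be assembled is given below: injected and recovered artificial stars are binned in separation and magnitude, and the completeness at wide separations is scaled by the fraction of each annulus that actually falls on the detector. The array names, the toy recovery behaviour, and the rectangular-detector geometry are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical AST results: injected separation [pix], injected instrumental magnitude,
# and whether each star passed the well-measured selection (toy behaviour).
sep_in = rng.uniform(1, 1000, 50_000)
mag_in = rng.uniform(-5, 6, 50_000)
recovered = rng.random(50_000) < np.clip(1.2 - 0.15 * mag_in, 0, 1)

sep_edges = np.arange(0, 1020, 20)
mag_edges = np.arange(-5, 6.5, 0.5)

n_inj, _, _ = np.histogram2d(sep_in, mag_in, bins=[sep_edges, mag_edges])
n_rec, _, _ = np.histogram2d(sep_in[recovered], mag_in[recovered], bins=[sep_edges, mag_edges])
completeness = np.divide(n_rec, n_inj, out=np.zeros_like(n_rec), where=n_inj > 0)

def annulus_fraction_on_chip(radius_pix, target_xy=(500, 600), chip=(1014, 1014), n=3600):
    """Fraction of a circle of given radius around the target that lands on the detector."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    x = target_xy[0] + radius_pix * np.cos(theta)
    y = target_xy[1] + radius_pix * np.sin(theta)
    on_chip = (x >= 0) & (x < chip[0]) & (y >= 0) & (y < chip[1])
    return on_chip.mean()

sep_centres = 0.5 * (sep_edges[:-1] + sep_edges[1:])
fov_frac = np.array([annulus_fraction_on_chip(r) for r in sep_centres])
completeness *= fov_frac[:, None]   # e.g. 0.8 recovery x 0.5 on-chip -> 0.4 true completeness
```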
We applied the criteria and selection threshold defined in Section 2.5 to assess which of the 2 million injected artificial stars were successfully recovered, and estimate our sensitivity and recoverability levels in each F127M image. In addition to these, our source selection procedure in the binary search (Section 4.1) included an additional requirement for sources to be at least 1.5 magnitudes brighter in F127M than the local background magnitude in F139M, in order to ensure a minimally meaningful magnitude drop between the two filters. We therefore enforced this extra constraint in the source selection of the ASTs as well, so as to comprehensively determine the completeness of the binary search that was performed here. After running the source selection procedures on the 4 sets of ASTs, determining whether each artificial star was counted as well retrieved in F127M or not, we defined a grid in separation vs. magnitude space, allowing us to obtain the recoverability rate of the injected stars in each grid cell. We used a separation bin size of 20 pixels at outer radii (beyond 20 pixels from the brown dwarf target) out to 1000-pixel separations, and refined the grid to 1-pixel resolutions for the inner 20 pixels. In terms of magnitudes, we used 0.1-mag bins from \(-2\) to \(+2\) mag where the completeness changes significantly, and bin sizes of 0.5 mag outside that range. The numbers of artificial stars injected in the 4 sets described above were specifically chosen to ensure having at least around 500 added stars per grid cell with these varying resolutions. Each star was placed in its corresponding separation-magnitude grid cell based on its input position and F127M instrumental magnitude, and the final fraction of stars well recovered by the algorithms was computed. At separations larger than the closest edge of the image from the primary, a fraction of the considered annulus falls outside the field of view of the image, which means that the completeness of the program to companions on such separations should decrease accordingly. Nonetheless, the ASTs were made so that an equal number of sources would fall inside the images at any given radius from the target. This should be corrected in order to accurately reflect the completeness of the survey to the widest companions (e.g., an average recoverability rate of 0.8 at a separation where 50% of the annulus falls outside the FoV, implies a true completeness of \(0.8\times 0.5=0.4\)). Therefore, to extend the analyses beyond the radius that first reaches an edge of the image, we estimated for each separation the fraction of the annulus that fell within the FoV, and used that information to scale the obtained recoverability rates from the ASTs at that separation. This effect is reflected in the completeness grid in Figure 9, where the recoverability fraction starts to drop at outer separations (beyond \(\sim\)500 pixels for the averaged grid of the full sample). The added criterion on the minimum measurable magnitude drop between F127M and F139M for selected sources pushes the sensitivity down to instrumental magnitudes between 0 and \(-1\), even though fainter sources not fulfilling Figure 9: Average detection probability map for all 33 targets, based on the recoverability rate of artificial stars in each separation-magnitude bin. The white line indicates the 90% contour, while the black lines show the 10% and 50% level of completeness. The grey shaded area on the left represents the 1-pixel inner working angle. 
Axes are plotted in unit grid cells to highlight the key parts of the parameter space, and the linear scales change at the dotted lines, with a different resolution within and beyond 20 pixels, and inside and outside the \(-2\) to \(+2\) instrumental magnitude range. The fraction of a circle of given separation that lands in the image is taken into account in the final recoverability rate, resulting in a drop in the completeness at wide separations. these constraints are detectable in F127M down to instrumental magnitudes of +1 to +2. Nonetheless, the extra selections applied in our binary search ensure that we focus our search on the regions of the parameter space with high-completeness levels, as demonstrated in Section 2.5, and most importantly, where very few poorly-measured sources are expected in the output of the algorithms. Using the derived zeropoint value (ZP\({}_{\rm F127M}\) = 23.55) and the WFC3/IR pixel scale, the individual completeness grid for each brown dwarf in the sample was then converted from instrumental to Vega F127M magnitudes and \(\Delta\)F127M magnitude differences to the primaries, and transformed from pixels to arcseconds in separation. The distance to each target was subsequently used to convert from apparent to absolute F127M magnitude, and from projected angular separation in arcseconds to physical binary separation in astronomical units. Finally, the ATMO2020 CEQ models (Phillips et al., 2020) were used to derive corresponding masses at median ages of 5 Gyr, as was done in Section 3.2, and we derived corresponding mass ratio grids from the primary masses estimated at the same age. Since conversions of the original grid axes are dependent on the individual distance and mass of each target, we recombined the final completeness maps by mapping them onto common, high-resolution master grids using 2-dimensional interpolations. Figure 10 shows the final combined grids in the observed and physical parameter space. Beyond 0.5''(\(\sim\)4 pixels), we reach magnitude differences of \(\Delta\)F127M of 5 mag around 50% of the sample. The wide spread of \(>\)4 mag in achieved contrasts across the sample (top right panel) is mostly due to the range in the primaries' brightness, since only \(\sim\)1 mag difference is seen in companion apparent magnitude (top left). These background-regime sensitivities correspond to companion mass limits of 11 M\({}_{\rm Jup}\) at the 50% level, and 13.5 M\({}_{\rm Jup}\) at the 90% from \(>\)5-10 au (bottom left), or mass ratios \(q\) of 0.45 (50%) and 0.7 (90%) (bottom right). With targets distances ranging from \(<\)5 pc to 30 pc, the 1-pixel inner working angle (121 mas; shaded regions in the upper panels) corresponds to physical separations between 0.5-3.5 au across the sample. As a results, in the contrast-limited regime within \(\sim\)0.5'', we are sensitive to equal-mass companions from 1 au for the best 3 targets, from 2 au around half of the targets, and starting at 5 au for over 90% of cases. While the model-based mass conversion for the lower left panel depends on the adopted age for the systems, the resulting mass ratios in the lower right panel show little age de Figure 10: Detection probability maps for the full survey in observed (upper panels) and physical (lower panels) parameter space. The left panels show inherent companion quantities in the y-axis, while the variables plotted in the right panels are relative to the brown dwarf primaries. The black and white contours mark the 10%, 50% and 90% completeness levels. 
pendence. Indeed, when considering different ages, the mass sensitivity limits were found to shift from 11 M\({}_{\rm Jup}\) to 7 M\({}_{\rm Jup}\) and 14 M\({}_{\rm Jup}\) for the 50% level at 2 Gyr and 8 Gyr, respectively, and from 13.5 M\({}_{\rm Jup}\) to 9 M\({}_{\rm Jup}\) and 17 M\({}_{\rm Jup}\) for the 90% contours for the same ages. On the other hand, working in mass ratio space, with primary masses computed at the same ages, results in changes of \(\sim\)1% in the corresponding \(q\) levels. This makes mass ratio an advantageous variable for an analysis of binary properties, providing an almost age-independent space to work in, as had been shown in Fontanive et al. (2018). We therefore focus on that parameter space for the statistical analysis in Section 4.3. The detection probability map, in physical projected separation against mass ratio, contains 1000 values uniformly distributed in log space between \(10^{-0.5}\) au and \(10^{3.5}\) au for the separation axis (0.31-3100 au), and 500 values in mass ratio linearly spaced between 0 and 1.

### Constraints on Binary Frequency

With a statistically significant sample of 33 ultracool brown dwarfs and a deep sensitivity to low-mass objects, the lack of any detected true companion in this survey allows us to place stringent upper limits on the occurrence of binary companions around late-T and Y dwarfs. To do this, we adopt the Bayesian statistical framework from Fontanive et al. (2018), a formalism initially developed to constrain the binary properties of late-T dwarfs, in the predecessor survey to the one conducted here. Without any detections, we are unable to place new observational constraints on the binary orbital separation or mass ratio distributions of this population. We therefore have to rely on existing results for the shape of the underlying binary population. In exoplanet surveys around stars, the outputs of planet formation theories, in the form of population synthesis models (Forgan et al., 2018; Emsenhuber et al., 2021), are typically available for direct comparisons of observed populations to theoretical predictions (e.g., Vigan et al., 2021). Unfortunately, such simulations are very limited for brown dwarf populations, and rarely extend down to the very low-mass end of the substellar regime (Bate, 2014), populated by isolated T and Y dwarfs like the ones studied here. In the absence of such theoretical expectations for the distributions of orbital periods and binary mass ratios, we adopt the results from Fontanive et al. (2018). That study derived a binary frequency of \(f=5.5^{+5.2}_{-3.3}\)% for T5-Y0 brown dwarfs over separations of 1.5-1000 au, for an overall binary fraction of \(8\pm 6\)%. Modelling the projected separation as a lognormal distribution and the mass ratio as a power law, they found a peak in separation at \(\rho=2.9^{+0.8}_{-1.4}\) au with a logarithmic width of \(\sigma=0.21^{+0.14}_{-0.08}\), and a mass ratio distribution heavily skewed towards equal-mass systems, with a power-law index of \(\gamma=6.1^{+6.0}_{-2.7}\). We followed the procedures and used the Markov chain Monte Carlo (MCMC) tools from Fontanive et al. (2018, 2019), based on the emcee python implementation of the affine-invariant ensemble sampler for MCMC (Foreman-Mackey et al., 2013).
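In this kind of framework, the probability of a null result given a binary fraction \(f\) follows from the per-target probability of detecting a companion drawn from the adopted separation and mass-ratio distributions. The sketch below is a schematic of such a calculation rather than the actual code of Fontanive et al. (2018, 2019): the per-target mean detection probabilities are invented placeholders, and with a flat prior the posterior could be written down directly, so emcee is used here only to mirror the sampling approach described in the text.

```python
import numpy as np
import emcee

rng = np.random.default_rng(2)

# Mean probability, for each of the 33 targets, of detecting a companion drawn from the
# adopted separation / mass-ratio distributions (invented values; in practice these come
# from integrating the detection probability maps over the model companion population).
p_det = rng.uniform(0.3, 0.7, 33)

def log_posterior(theta):
    f = theta[0]
    if not 0.0 <= f <= 1.0:                 # uniform prior on [0, 1]
        return -np.inf
    # Likelihood of zero detections: each target independently yields no companion.
    return np.sum(np.log1p(-f * p_det))

nwalkers, ndim = 32, 1
p0 = rng.uniform(0.0, 0.2, (nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 3000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)[:, 0]

print(f"f < {np.percentile(samples, 68):.3f} (68%), f < {np.percentile(samples, 95):.3f} (95%)")
```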
We generated binary populations from the separation and mass ratio distributions described above, and combined them with our survey detection probability map generated in Section 4.2 to derive an upper limit on the binary rate \(f\) of our observed sample given our null detection. We focused on the region of the parameter space between \(\rho=1-1000\) au and \(q=0.4-1\), down to the limits of our binary search sensitivity level (see Figure 10). Adopting a uniform prior between 0 and 1 for \(f\), we found a binary frequency of \(f<4.9\)% (\(<13.0\)%) at the 1-\(\sigma\) (2-\(\sigma\)) level. Figure 11 shows the obtained posterior distribution for the binary fraction with the corresponding 68% and 95% confidence intervals indicated by the shaded regions. ## 5 Discussion Figure 12 compares our obtained constraint in binary fraction to values measured for earlier-type, more massive field brown dwarfs and stars. Apart from the G-K dwarf binary rate (Raghavan et al., 2010), all frequencies were derived from surveys probing systems with separations \(>\)1-3 au, comparable to our programme, making the comparison in multiplicity fractions directly relevant. Our measured upper limit of \(f<4.9\)% (68% confidence interval) for the frequency of systems on separations of 1-1000 au and mass ratios between 0.4-1 is in good agreement with previous results derived in Fontanive et al. (2018) for a sample of slightly earlier-type T5-Y0 brown dwarfs (\(f=5.5^{+5.26}_{-3.3}\)% beyond 1.5 au, for an overall binary fraction of \(8\pm 6\)%). These findings marginally confirm the idea that the decrease in binary frequencies with later type observed across the stellar and substellar regimes for the field population might continue throughout the substellar mass range down to the very lowest masses, as illustrated in Figure 12. The continuity in this trend across the star-brown dwarf boundary would argue for a common origin between stars and brown dwarfs, although larger sample sizes and more stringent constraints are still needed to validate this result. As brown dwarfs are incredibly faint, the methods most commonly used to search for binary and planetary companions to stars, like the radial velocity and transit methods, are typically unfeasible in the brown dwarf regime, in particular for ultracool late-type objects at the low-mass end like the targets studied here. Astrometry might provide a more vi Figure 11: Posterior probability distribution of the binary frequency \(f\) on separations 1–1000 au and mass ratios 0.4–1, for our observed sample of 33 targets. The shaded areas represent the 1-\(\sigma\) (deep blue) and 2-\(\sigma\) (light blue) intervals of confidence, respectively. able alternative approach to search for companions to faint brown dwarfs (e.g., _HST_ Program GO 170804, PI Bedin), although very little work has been carried out on this side and no systems have been reported this way so far. Direct imaging thus remains the primary option available to search for companions to brown dwarfs, but is limited to relatively high-mass ratios and is only sensitive to the outer parts of the separation parameter space. Brown dwarf binaries appear to be in tighter orbital configurations than their more massive stellar counterparts (Burgasser et al., 2003, 2006), consistent with the trend observed in binary separation distribution of multiple systems within the stellar regime. 
As this tendency extends across the brown dwarf mass range all the way down to the very lowest masses (Fontanive et al., 2018), the peak of the orbital separation distribution for the latest-type primaries approaches the resolving limit of most direct imaging instruments capable of observing brown dwarfs. Footnote 4: [https://ui.adsabs.harvard.edu/abs/2022hat.prop17080B/abstract](https://ui.adsabs.harvard.edu/abs/2022hat.prop17080B/abstract) For instance, in the few searches for companions to late-T and Y primaries conducted to date, most existing discoveries emerged from the surveys with the best inner working angles (Keck/NIRC2, Liu et al., 2011; Dupuy et al., 2015), which would have been (or were) missed by studies only able to access wider angular separations (e.g., _HST_/WFC3, Aberasturi et al., 2014; Fontanive et al., 2018). As we are mostly sensitive to companions beyond a few au, the lack of new detections in our _HST_ program is hence consistent with previous results on substellar binarity. Indeed, our survey's inner working angle coincides roughly with the peak around 3 au in orbital separation from Fontanive et al. (2018) for such systems, meaning that we are unable to detect at least half of potentially existing companions. With the largest sample of Y dwarfs ever probed for multiplicity, our strong constraint of \(f<4.9\%\) (1-\(\sigma\)) between 1-1000 au thus confirms that the binary rate of brown dwarfs continues to decrease with later spectral types all the way down to Y dwarfs on these separations. These results could also be consistent with a further shift of the binary separation peak to even smaller separations around Y dwarfs. The first and only binary system with a Y-type primary (WISE J033605.5\(-\)014350.4, which is not in our sample) was recently discovered with _JWST_/NIRCam (Calissendorff et al., 2023), but required _JWST_'s plate-scale (63 mas and 31 mas for the F480M and F150W filters used, respectively) to detect the \(\sim\)84 mas (0.97 au) Y+Y binary. The rare current discoveries therefore strongly indicate that spatial resolutions reaching sub-au separations are likely needed to uncover more members of this binary population. Future surveys exploring the inner regions of the separation space will be crucial to provide a definitive test of whether the observed peak around 2-3 au for T and Y dwarfs (Fontanive et al., 2018) is real, or if it is an artefact of incompleteness on shorter separations (Bardalez Gagliuffi et al., 2015), with a true peak lying at even tighter separations. The 3-5 \(\mu\)m sensitivity and high angular resolution of _JWST_ will allow us to study these objects in larger samples at closer separations (e.g. _JWST_ Cycle 1 program GO 2473, PIs Albert & Meyer). Footnote 5: [https://ui.adsabs.harvard.edu/abs/2021jwst.prop.2473A/abstract](https://ui.adsabs.harvard.edu/abs/2021jwst.prop.2473A/abstract) With an excellent sensitivity and completeness to companions on wide orbital separations (outside 0.2-0.5'', or 1-10 au, depending on the targets' distances and brightness), our survey robustly confirms that wide companions are extremely rare in the Galactic field around the lowest-mass systems. The binaries discovered by Liu et al. (2012) with separations of 8-15 au thus remain highly anomalous and the only examples of binaries on outer Solar System scales with such low-mass primaries.
We also know that yet wider systems with separations of tens to hundreds of au do form around similar mass primaries, since they are observed in young star-forming regions and moving groups (Chauvin et al., 2005; Todorov et al., 2010; Fontanive et al., 2020). The fact that no analogue to these extremely wide, low-mass binaries has ever been uncovered at evolved ages, despite a high sensitivity to such companions around nearby old objects, indicates that such systems do not survive to field ages (Burgasser et al., 2006; Biller et al., 2011). Our results, with no detection of wide companions out of 33 observed objects, reinforce the idea that the widely-separated binaries with very low-mass primaries identified in young associations have no counterparts among isolated objects in the Solar neighbourhood. ## 6 Conclusions We conducted an imaging search for low-mass binary and planetary companions around 33 nearby brown dwarfs with spectral types of T8-Y1, using WFC3/IR observations from Figure 12: Binary fraction as a function of spectral type for stars and brown dwarfs in the field, showing the continuous decrease in multiplicity rate with later type. Our measured upper limit for T8–Y1 dwarfs is plotted in magenta. All measurements come from surveys probing similar binary separations to our programme, \(>1\)–3 au, except for the Raghavan et al. (2010) result for G and K stars. Data from Raghavan et al. (2010); Bergfors et al. (2010); Janson et al. (2012); Close et al. (2003); Gizis et al. (2003); Reid et al. (2006); Burgasser et al. (2006); Fontanive et al. (2018). the Hubble Space Telescope. With 9 Y-type primaries, our observed sample is the largest subset of such very late-type, ultracool brown dwarfs investigated for multiplicity to date. Our deep _HST_ observations provide the first detections of late-T dwarfs in the water-absorption band at 1.4 \(\mu\)m, providing new observational constraints on the depth of this spectroscopic feature at the coldest temperatures. We derive new estimates of our targets' luminosities, temperatures and masses from evolutionary models, using our _HST_ near-infrared photometry combined with _Spitzer_ data in the infrared. We found no evidence for wide binary companions in our survey, despite an excellent sensitivity and completeness from a few au and down to secondary masses of 10-15 M\({}_{\rm Jup}\). Three candidates were identified in our procedures for companion selection based on water-band absorption levels, but two were ruled out from their stationary background positions relative to the primaries in archival _HST_ data, while the third source was rejected a bonafide companion based on its non-detection in deep infrared _Spitzer_ images, where such a cold companion should be bright. We were nonetheless able to place stringent constraints on the binary rate \(f\) of these objects from our null detection, and measured an upper limit of \(f<4.9\%\) at the 1-\(\sigma\) level (\(<13.0\%\) at the 2-\(\sigma\) level) over the separation range 1-1000 au and for mass ratios between 0.4-1. This is consistent with previous results, and reaffirms that the decrease in multiplicity fraction with later spectral types seen among stars and more massive brown dwarfs might persist all the way down to the latest-type objects. 
Our survey confirms that wide companions are extremely rare around the lowest-mass and coldest isolated brown dwarfs, even though counterpart systems are observed at young ages, indicating that such systems do not survive as bound components to field ages. If our targets have binary companions, they would likely be within our inner working angle of \(\sim\)1-5 au, on separations around or inside the currently-observed peak in binary separation for T and Y dwarfs. Future programs that explore the inner regions of the separation space will be crucial to identify more members of this binary population and provide a definitive test of the observed separation peak. ## Acknowledgements CF acknowledges support from the Trottier Family Foundation and the Trottier Institute for Research on Exoplanets through her Trottier Postdoctoral Fellowship. This work was funded by the Trottier Institute for Research on Exoplanets and the Center for Space and Habitability, and partly carried out within the framework of the National Centres of Competence in Research PlanetS supported by the Swiss National Science Foundation. LRB acknowledges partial support by MIUR under PRIN programme #2017Z2HSMF and by PRIN-INAF 2019. This survey is based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. These observations are associated with programme 15201. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). ## Data availability All observational data used in this work are publicly available. The reduced stacked images from this work will be supplied upon request to the corresponding author.
We present the results of an imaging search with the Hubble Space Telescope for low-mass binary and planetary companions to 33 nearby brown dwarfs with spectral types of T8-Y1. The survey provides new photometry for these faint systems, together with model-based estimates of their luminosities, temperatures and masses. Reaching deep sensitivities outside 0.2-0.5'' (roughly 1-10 au, depending on the target), we found no companion to any of our primaries. From the completeness of the survey, we place an upper limit of 4.9% on the companion frequency at the 1-\(\sigma\) level (<13.0% at the 2-\(\sigma\) level) over separations of 1-1000 au and mass ratios q>0.4.
2309.16803
Local boundedness of minimizers under unbalanced Orlicz growth conditions
Local minimizers of integral functionals of the calculus of variations are analyzed under growth conditions dictated by different lower and upper bounds for the integrand. Growths of non-necessarily power type are allowed. The local boundedness of the relevant minimizers is established under a suitable balance between the lower and the upper bounds. Classical minimizers, as well as quasi-minimizers are included in our discussion. Functionals subject to so-called $p,q$-growth conditions are embraced as special cases and the corresponding sharp results available in the literature are recovered.
Andrea Cianchi, Mathias Schäffner
2023-09-28T19:12:20
http://arxiv.org/abs/2309.16803v2
# Local boundedness of minimizers under unbalanced Orlicz growth conditions ###### Abstract. Local minimizers of integral functionals of the calculus of variations are analyzed under growth conditions dictated by different lower and upper bounds for the integrand. Growths of non-necessarily power type are allowed. The local boundedness of the relevant minimizers is established under a suitable balance between the lower and the upper bounds. Classical minimizers, as well as quasi-minimizers are included in our discussion. Functionals subject to so-called \(p,q\)-growth conditions are embraced as special cases and the corresponding sharp results available in the literature are recovered. Key words and phrases:Local minimizers, local boundedness, unbalanced Orlicz growth, Orlicz-Sobolev inequalities 2000 Mathematics Subject Classification: 49N60 ## 1. Introduction We are concerned with the local boundedness of local minimizers, or quasi-minimizers, of integral functionals of the form \[\mathcal{F}(u,\Omega)=\int_{\Omega}f(x,u,\nabla u)\,dx, \tag{1.1}\] where \(\Omega\) is an open set in \(\mathbb{R}^{n}\), with \(n\geq 2\), and \(f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) is a Caratheodory function subject to proper structure and growth conditions. Besides its own interest, local boundedness is needed to ensure certain higher regularity properties of minimizers. Interestingly, some regularity results for minimizers admit variants that require weaker hypotheses under the a priori assumption of their local boundedness. Local boundedness of local minimizers of the functional \(\mathcal{F}\) is classically guaranteed if \(f(x,t,\xi)\) is subject to lower and upper bounds in terms of positive multiples of \(|\xi|^{p}\), for some \(p\geq 1\). This result can be traced back to the work of Ladyzhenskaya and Ural'ceva [LaUr], which, in turn, hinges upon methods introduced by De Giorgi in his regularity theory for linear elliptic equations with merely measurable coefficients. The study of functionals built on integrands \(f(x,t,\xi)\) bounded from below and above by different powers \(|\xi|^{p}\) and \(|\xi|^{q}\), called with \(p,q\)-growth in the literature, was initiated some fifty years ago. A regularity theory for minimizers under assumptions of this kind calls for additional structure conditions on \(f\), including convexity in the gradient variable. As shown in various papers starting from the nineties of the last century, local minimizers of functional with \(p,q\)-growth are locally bounded under diverse structure conditions, provided that the difference between \(q\) and \(p\) is not too large, depending on the dimension \(n\). This issue was addressed in [MaPa, MoNa] and, more recently, in [CMM1, CMM2]. Related questions are considered in [BMS, FuSb, Str] in connection with anisotropic growth conditions. By contrast, counterexamples show that unbounded minimizers may exist if the exponents \(p\) and \(q\) are too far apart [Gia, Ho, Ma1, Ma2]. The gap between the assumptions on \(p\) and \(q\) in these examples and in the regularity result has recently been filled in the paper [HiSch], where the local boundedness of minimizers is established for the full range of exponents \(p\) and \(q\) excluded from the relevant counterexamples. An extension of the techniques from [HiSch] has recently been applied in [DeGr] to extend the boundedness result to obstacle problems. 
In the present paper, the conventional realm of polynomial growths is abandoned and the question of local boundedness of local minimizers, and quasi-minimizers, is addressed under bounds on \(f\) of Orlicz type. More specifically, the growth of \(f\) is assumed to be governed by Young functions, namely nonnegative convex functions vanishing at \(0\). The local boundedness of minimizers in the case when lower and upper bounds on \(f\) are imposed in terms of the same Young function follows via a result from [Ci3], which also deals with anisotropic Orlicz growths. The same problem for solutions to elliptic equations is treated in [Ko]. Our focus here is instead on the situation when different Young functions \(A(|\xi|)\) and \(B(|\xi|)\) bound \(f(x,t,\xi)\) from below and above. Functionals with \(p,q\)-growth are included as a special instance. A sharp balance condition between the Young functions \(A\) and \(B\) is exhibited for any local minimizer of the functional \(\mathcal{F}\) to be locally bounded. Bounds on \(f(x,t,\xi)\) depending on a function \(E(|t|)\) are also included in our discussion. Let us mention that results in the same spirit can be found in the paper [DMP], where, however, more restrictive non-sharp assumptions are imposed. The global boundedness of global minimizers of functionals and of solutions to boundary value problems for elliptic equations subject to Orlicz growth conditions has also been examined in the literature and is the subject e.g. of [Al, BCM, Ci2, Ta1, Ta2]. Note that, unlike those concerning the local boundedness of local minimizers and local solutions to elliptic equations, global boundedness results in the presence of prescribed boundary conditions just require lower bounds in the gradient variable for integrands of functionals or equation coefficients. Therefore, the question of imposing different lower and upper bounds does not arise with this regard. Beyond boundedness, several further aspects of the regularity theory of solutions to variational problems and associated Euler equations, under unbalanced lower and upper bounds, have been investigated. The early papers [Ma2, Ma3] have been followed by various contributions on this topic, a very partial list of which includes [BCM, BeMi, BeSch2, BeSch3, BoBr, BCSV, BGS, ByOh, CKP1, CKP2, CoMi, DeMi1, DeMi3, ELP, ELM, HaOk1, Ma4]. A survey of investigations around this area can be found in [MiRa]. In particular, results from [BCM, BoBr, CKP1, DeMi2, HaOk2] demonstrate the critical role of local boundedness for higher regularity of local minimizers, which we alluded to above. ## 2. Main result We begin by enucleating a basic case of our result for integrands in (1.1) which do not depend on \(u\). Namely, we consider functionals of the form \[\mathcal{F}(u,\Omega)=\int_{\Omega}f(x,\nabla u)\,dx, \tag{2.1}\] where \[f:\Omega\times\mathbb{R}^{n}\to\mathbb{R}.\] A standard structure assumption to be fulfilled by \(f\) is that \[\text{the function}\quad\mathbb{R}^{n}\ni\xi\mapsto f(x,\xi)\quad\text{ is convex for a.e. }x\in\Omega. \tag{2.2}\] Next, an \(A,B\)-growth condition on \(f\) is imposed, in the sense that \[A(|\xi|)-L\leq f(x,\xi)\leq B(|\xi|)+L\quad\text{for a.e. }x\in\Omega\text{ and every }\xi\in\mathbb{R}^{n}, \tag{2.3}\] where \(A\) is a Young function and \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity. By contrast, the latter condition is not required on the lower bound \(A\). 
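For instance, a simple model integrand fitting this framework, mentioned here only as an illustration, is \(f(x,\xi)=|\xi|^{p}\log(e+|\xi|)\) with \(p\geq 1\): it is convex in \(\xi\) and, since \(\log(e+t)\geq 1\) for \(t\geq 0\), it satisfies the growth condition (2.3) with the unbalanced choice \[A(t)=t^{p},\qquad B(t)=t^{p}\log(e+t),\] two Young functions with genuinely different, non-power growths, and \(B\in\Delta_{2}\) near infinity. Integrands of this kind are of the "power-times-logarithmic" type discussed in Example 2.2 below.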
The function \(A\) dictates the natural functional framework for the trial functions \(u\) in the minimization problem for \(\mathcal{F}\). It is provided by the Orlicz-Sobolev class \(V^{1}_{\text{loc}}K^{A}(\Omega)\) of those weakly differentiable functions on \(\Omega\) such that \[\int_{\Omega^{\prime}}A(|\nabla u|)\,dx<\infty\] for every open set \(\Omega^{\prime}\Subset\Omega\). Besides standard local minimizers, we can as well deal with so-called quasi-minimizers, via the very same approach. A function \(u\in V^{1}_{\text{loc}}K^{A}(\Omega)\) is said to be a local quasi-minimizer of \(\mathcal{F}\) if \[\mathcal{F}(u,\Omega^{\prime})<\infty\] for every open set \(\Omega^{\prime}\Subset\Omega\), and there exists a constant \(Q\geq 1\) such that \[\mathcal{F}(u,\operatorname{supp}\varphi)\leq Q\mathcal{F}(u+\varphi, \operatorname{supp}\varphi) \tag{2.4}\] for every \(\varphi\in V^{1}_{\text{loc}}K^{A}(\Omega)\) such that \(\operatorname{supp}\varphi\Subset\Omega\). Plainly, \(u\) is a standard local minimizer of \(\mathcal{F}\) provided that inequality (2.4) holds with \(Q=1\). Throughout the paper, we shall assume that \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt=\infty. \tag{2.5}\] Indeed, if \(A\) grows so fast near infinity that \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt<\infty, \tag{2.6}\] then every function \(u\in V^{1}_{\rm loc}K^{A}(\Omega)\) is automatically bounded, irrespective of whether it minimizes \(\mathcal{F}\) or not. This is due to the inclusion \[V^{1}_{\rm loc}K^{A}(\Omega)\subset L^{\infty}_{\rm loc}(\Omega), \tag{2.7}\] which holds as a consequence of a Sobolev-Poincare inequality in Orlicz spaces. Heuristically speaking, our result ensures that any local quasi-minimizer of \(\mathcal{F}\) as in (2.1) is locally bounded, provided that the function \(B\) does not grow too quickly near infinity compared to \(A\). The maximal admissible growth of \(B\) is described through the sharp Sobolev conjugate \(A_{n-1}\) of \(A\) in dimension \(n-1\), whose definition is recalled in the next section. More precisely, if \[n\geq 3\quad\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n- 2}}dt=\infty, \tag{2.8}\] then \(B\) has to be dominated by \(A_{n-1}\) near infinity, in the sense that \[B(t)\leq A_{n-1}(Lt)\qquad\text{for }t\geq t_{0}, \tag{2.9}\] for some positive constants \(L\) and \(t_{0}\). On the other hand, in the regime complementary to (2.8), namely in either of the following cases \[\begin{cases}n=2\\ n\geq 3\quad\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n- 2}}dt<\infty,\end{cases} \tag{2.10}\] no additional hypothesis besides the \(\Delta_{2}\)-condition near infinity is needed on \(B\). Notice that, by an Orlicz-Poincare-Sobolev inequality on \(\mathbb{S}^{n-1}\), both options in (2.10) entail that \(V^{1}_{\rm loc}K^{A}(\mathbb{S}^{n-1})\subset L^{\infty}_{\rm loc}(\mathbb{S}^ {n-1})\). Altogether, our boundedness result for functionals of the form (2.1) reads as follows. **Theorem 2.1**.: _Let \(f:\Omega\times\mathbb{R}^{n}\to\mathbb{R}\) be a Caratheodory function satisfying the structure assumption (2.2). Suppose that the growth condition (2.3) holds for some Young functions \(A\) and \(B\), such that \(B\in\Delta_{2}\) near infinity. Assume that either condition (2.10) is in force, or condition (2.8) is in force and \(B\) fulfills estimate (2.9). 
Then any local quasi-minimizer of the functional \(\mathcal{F}\) in (2.1) is locally bounded in \(\Omega\)._ Assume now that \(\mathcal{F}\) has the general form (1.1), and hence \[f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}.\] Plain convexity in the gradient variable is no longer sufficient, as a structure assumption, for a local boundedness result to hold. One admissible strengthening consists of coupling it with a kind of almost monotonicity condition in the \(u\) variable. Precisely, one can suppose that \[\begin{cases}\text{the function}\quad\mathbb{R}^{n}\ni\xi\mapsto f(x,t,\xi)& \text{is convex for a.e. }x\in\Omega\text{ and every }t\in\mathbb{R},\\ f(x,t,\xi)\leq Lf(x,s,\xi)+E(|s|)+L&\text{if }|t|\leq|s|,\text{ for a.e. }x\in\Omega\text{ and every }\xi\in\mathbb{R}^{n},\end{cases} \tag{2.11}\] where \(L\) is a positive constant and \(E:[0,\infty)\to[0,\infty)\) is a non-decreasing function fulfilling the \(\Delta_{2}\)-condition near infinity. An alternate condition which still works is the joint convexity of \(f\) in the couple \((t,\xi)\), in the sense that \[\text{the function}\quad\mathbb{R}\times\mathbb{R}^{n}\ni(t,\xi)\mapsto f(x,t, \xi)\quad\text{ is convex for a.e. }x\in\Omega. \tag{2.12}\] The growth of \(f\) is governed by the following bounds: \[A(|\xi|)-E(|t|)-L\leq f(x,t,\xi)\leq B(|\xi|)+E(|t|)+L\quad\text{for a.e. }x\in\Omega\text{ and every }t\in\mathbb{R}\text{ and }\xi\in\mathbb{R}^{n}, \tag{2.13}\] where \(A\) is a Young function, \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity, and \(E\) is the same function as in (2.11), if this assumption is in force. The appropriate function space for trial functions in the definition of quasi-minimizer of the functional \(\mathcal{F}\) is still \(V^{1}_{\rm loc}K^{A}(\Omega)\), and the definition given in the special case (2.1) carries over to the present general framework. The bound to be imposed on the function \(B\) is the same as in the \(u\)-free case described above. On the other hand, the admissible growth of the function \(E\) is dictated by the Sobolev conjugate \(A_{n}\) of \(A\) in dimension \(n\). Specifically, we require that \[E(t)\leq A_{n}(Lt)\qquad\text{for }t\geq t_{0}, \tag{2.14}\] for some positive constants \(L\) and \(t_{0}\). Our comprehensive result then takes the following form. **Theorem 2.2**.: _Let \(f:\Omega\times\mathbb{R}\times\mathbb{R}^{n}\to\mathbb{R}\) be a Caratheodory function satisfying either the structure assumption (2.11) or (2.12). Suppose that the growth condition (2.13) holds for some Young functions \(A\) and \(B\) and a non-decreasing function \(E\), such that \(B,E\in\Delta_{2}\) near infinity. Assume that either condition (2.10) is in force, or condition (2.8) is in force and \(B\) fulfills estimate (2.9). Moreover, assume that \(E\) fulfills estimate (2.14). Then any local quasi-minimizer of the functional \(\mathcal{F}\) in (1.1) is locally bounded in \(\Omega\)._ Our approach to Theorems 2.1 and 2.2 follows along the lines of De Giorgi's regularity result for linear equations with merely measurable coefficients, on which, together with Moser's iteration technique, all available proofs of the local boundedness of local solutions to variational problems or elliptic equations are virtually patterned. The main novelties in the present framework amount to the use of sharp Poincare and Sobolev inequalities in Orlicz spaces and to an optimized form of the Caccioppoli-type inequality. 
The lack of homogeneity of non-power type Young functions results in Orlicz-Sobolev inequalities whose integral form necessarily involves a gradient term on both sides. This creates new difficulties, which also appear, again because of the non-homogeneity of Young functions, in deriving the optimized Caccioppoli inequality. The latter requires an ad hoc process in the choice of trial functions in the definition of quasi-minimizers. The advantage of the use of the relevant Caccioppoli inequality is that its proof only calls into play Sobolev-type inequalities on \((n-1)\)-dimensional spheres, instead of \(n\)-dimensional balls. This allows for growths of the function \(B\) dictated by the \((n-1)\)-dimensional Sobolev conjugate of \(A\). By contrast, a more standard choice of trial functions would only permit slower growths of \(B\), not exceeding the \(n\)-dimensional Sobolev conjugate of \(A\). Orlicz-Sobolev and Poincare inequalities in dimension \(n\) just come into play in the proof of Theorem 2.2, when estimating terms depending on the variable \(u\). The trial function optimization strategy is reminiscent of that used in diverse settings in recent years. The version exploited in [11] - a variant of [10] - to deal with functionals subject to \(p,q\)-growth conditions is sensitive to the particular growth of the integrand. The conditions imposed in the situation under consideration here are so general as to force us to resort to a more robust optimization argument, implemented in Lemma 5.1, Section 5. The latter is inspired by constructions employed in [12] in the context of div-curl lemmas, and in [13] in the proof of absence of Lavrentiev phenomena in vector-valued convex minimization problems. We conclude this section by illustrating Theorems 2.1 and 2.2 with applications to a couple of special instances. The former corresponds to functionals with \(p,q\)-growth. It not only recovers the available results but also augments and extends them in some respects. The latter concerns functionals with "power-times-logarithmic" growths, and provides us with an example associated with genuinely non-homogeneous Young functions. **Example 2.1**.: In the standard case when \[A(t)=t^{p},\] with \(1\leq p\leq n\), Theorem 2.1 recovers a result of [11]. Indeed, if \(n\geq 3\) and \(1\leq p<n-1\), we have that \(A_{n-1}(t)\approx t^{\frac{(n-1)p}{(n-1)-p}}\), and assumption (2.9) is equivalent to \[B(t)\lesssim t^{\frac{(n-1)p}{(n-1)-p}}\quad\text{near infinity}. \tag{2.15}\] Here, the relations \(\lesssim\) and \(\approx\) mean domination and equivalence, respectively, in the sense of Young functions. If \(p=n-1\), then \(A_{n-1}(t)\approx e^{t^{\frac{n-1}{n-2}}}\) near infinity, whereas if \(p>n-1\), then the second alternative condition (2.10) is satisfied. Hence, if either \(n=2\) or \(n\geq 3\) and \(p\geq n-1\), then any Young function \(B\in\Delta_{2}\) near infinity is admissible. Condition (2.15) is sharp, since the functionals with \(p,q\)-growth exhibited in [14, 15, 16] admit unbounded local minimizers if assumption (2.15) is dropped. Let us point out that the result deduced from Theorem 2.1 also enhances that of [11], where the function \(\xi\mapsto f(x,\xi)\) is assumed to fulfil a variant of the \(\Delta_{2}\)-condition, which is not imposed here. On the other hand, Theorem 2.2 extends the result of [11], where integrands only depending on \(x\) and \(\nabla u\) are considered. The conclusion of Theorem 2.2 holds under the same bound (2.15) on the function \(B\).
Moreover, \(A_{n}(t)\approx t^{\frac{np}{n-p}}\) if \(1\leq p<n\) and \(A_{n}(t)\approx e^{t^{\frac{n}{n-1}}}\) near infinity if \(p=n\). Hence, if \(1\leq p<n\), then assumption (2.14) reads: \[E(t)\lesssim t^{\frac{np}{n-p}}\quad\text{near infinity}.\] If \(p=n\), then any non-decreasing function \(E\) satisfying the \(\Delta_{2}\)-condition near infinity satisfies assumption (2.14), and it is therefore admissible. **Example 2.2**.: Assume that \[A(t)\approx t^{p}(\log t)^{\alpha}\quad\text{near infinity},\] where \(1<p<n\) and \(\alpha\in\mathbb{R}\), or \(p=1\) and \(\alpha\geq 0\), or \(p=n\) and \(\alpha\leq n-1\). Observe that these restrictions on the exponents \(p\) and \(\alpha\) are required for \(A\) to be a Young function fulfilling condition (2.5). From an application of Theorem 2.2 one can deduce that any local minimizer of \(\mathcal{F}\) is locally bounded under the following assumptions. If \(n\geq 3\) and \(p<n-1\), then we have to require that \[B(t)\lesssim t^{\frac{(n-1)p}{(n-1)-p}}(\log t)^{\frac{(n-1)\alpha}{(n-1)-p}}\quad\text{near infinity}.\] If either \(n=2\), or \(n\geq 3\) and \(n-1\leq p<n\), then any Young function \(B\in\Delta_{2}\) near infinity is admissible. Moreover, if \(p<n\), then our assumption on \(E\) takes the form: \[E(t)\lesssim t^{\frac{np}{n-p}}(\log t)^{\frac{n\alpha}{n-p}}\quad\text{near infinity}.\] If \(p=n\), then any non-decreasing function \(E\in\Delta_{2}\) near infinity is admissible. ## 3. Orlicz-Sobolev spaces This section is devoted to some basic definitions and properties from the theory of Young functions and Orlicz spaces. We refer the reader to the monograph [RaRe] for a comprehensive presentation of this theory. The Sobolev and Poincare inequalities in Orlicz-Sobolev spaces that play a role in our proofs are also recalled. Orlicz spaces are defined in terms of Young functions. A function \(A:[0,\infty)\to[0,\infty]\) is called a Young function if it is convex (non-trivial), left-continuous and \(A(0)=0\). The convexity of \(A\) and its vanishing at \(0\) imply that \[\lambda A(t)\leq A(\lambda t)\quad\text{for $\lambda\geq 1$ and $t\geq 0$}, \tag{3.1}\] and that the function \[\frac{A(t)}{t}\quad\text{is non-decreasing in $(0,\infty)$}. \tag{3.2}\] The Young conjugate \(\widetilde{A}\) of \(A\) is defined by \[\widetilde{A}(t)=\sup\{\tau t-A(\tau):\,\tau\geq 0\}\qquad\text{for}\qquad t\geq 0\,.\] The following inequalities hold: \[s\leq A^{-1}(s)\widetilde{A}^{-1}(s)\leq 2s\qquad\text{for $s\geq 0$}, \tag{3.3}\] where \(A^{-1}\) and \(\widetilde{A}^{-1}\) denote the generalized right-continuous inverses of \(A\) and \(\widetilde{A}\), respectively. A Young function \(A\) is said to satisfy the \(\Delta_{2}\)-condition globally - briefly \(A\in\Delta_{2}\) globally - if there exists a constant \(c\) such that \[A(2t)\leq cA(t)\quad\text{for $t\geq 0$}. \tag{3.4}\] If inequality (3.4) just holds for \(t\geq t_{0}\) for some \(t_{0}>0\), then we say that \(A\) satisfies the \(\Delta_{2}\)-condition near infinity, and write \(A\in\Delta_{2}\) near infinity. One has that \[A\in\Delta_{2}\text{ globally [near infinity] if and only if there exists $q\geq 1$ such that $\frac{tA^{\prime}(t)}{A(t)}\leq q$ for a.e. $t>0$ [$t\geq t_{0}$]}. \tag{3.5}\] A Young function \(A\) is said to dominate another Young function \(B\) globally if there exists a positive constant \(c\) such that \[B(t)\leq A(ct) \tag{3.6}\] for \(t\geq 0\).
The function \(A\) is said to dominate \(B\) near infinity if there exists \(t_{0}\geq 0\) such that (3.6) holds for \(t\geq t_{0}\). If \(A\) and \(B\) dominate each other globally [near infinity], then they are called equivalent globally [near infinity]. We use the notation \(B\lesssim A\) to denote that \(A\) dominates \(B\), and \(B\approx A\) to denote that \(A\) and \(B\) are equivalent. This terminology and notation will also be adopted for merely nonnegative functions, which are not necessarily Young functions. Let \(\Omega\) be a measurable set in \(\mathbb{R}^{n}\). The Orlicz class \(K^{A}(\Omega)\) built upon a Young function \(A\) is defined as \[K^{A}(\Omega)=\bigg{\{}u:\text{$u$ is measurable in $\Omega$ and $\int_{\Omega}A(|u|)\,dx<\infty$}\bigg{\}}. \tag{3.7}\] The set \(K^{A}(\Omega)\) is convex for every Young function \(A\). The Orlicz space \(L^{A}(\Omega)\) is the linear hull of \(K^{A}(\Omega)\). It is a Banach function space, equipped with the Luxemburg norm defined as \[\|u\|_{L^{A}(\Omega)}=\inf\left\{\lambda>0:\int_{\Omega}A\left(\frac{|u|}{\lambda}\right)dx\leq 1\right\} \tag{3.8}\] for a measurable function \(u\). These notions are modified as usual to define the local Orlicz class \(K^{A}_{\rm loc}(\Omega)\) and the local Orlicz space \(L^{A}_{\rm loc}(\Omega)\). If either \(A\in\Delta_{2}\) globally, or \(|\Omega|<\infty\) and \(A\in\Delta_{2}\) near infinity, then \(K^{A}(\Omega)\) is, in fact, a linear space, and \(K^{A}(\Omega)=L^{A}(\Omega)\). Here, \(|\Omega|\) denotes the Lebesgue measure of \(\Omega\). Notice that, in particular, \(L^{A}(\Omega)=L^{p}(\Omega)\) if \(A(t)=t^{p}\) for some \(p\in[1,\infty)\), and \(L^{A}(\Omega)=L^{\infty}(\Omega)\) if \(A(t)=0\) for \(t\in[0,1]\) and \(A(t)=\infty\) for \(t\in(1,\infty)\). The identity \[\|\chi_{E}\|_{L^{A}(\Omega)}=\frac{1}{A^{-1}(1/|E|)} \tag{3.9}\] holds for every Young function \(A\) and any measurable set \(E\subset\Omega\). Here, \(\chi_{E}\) stands for the characteristic function of \(E\). The Holder inequality in Orlicz spaces tells us that \[\int_{\Omega}|uv|\,dx\leq 2\|u\|_{L^{A}(\Omega)}\|v\|_{L^{\widetilde{A}}(\Omega)} \tag{3.10}\] for \(u\in L^{A}(\Omega)\) and \(v\in L^{\widetilde{A}}(\Omega)\). Assume now that \(\Omega\) is an open set. The homogeneous Orlicz-Sobolev class \(V^{1}K^{A}(\Omega)\) is defined as the convex set \[V^{1}K^{A}(\Omega)=\left\{u\in W^{1,1}_{\rm loc}(\Omega):\,|\nabla u|\in K^{A}(\Omega)\right\} \tag{3.11}\] and the inhomogeneous Orlicz-Sobolev class \(W^{1}K^{A}(\Omega)\) is the convex set \[W^{1}K^{A}(\Omega)=K^{A}(\Omega)\cap V^{1}K^{A}(\Omega). \tag{3.12}\] The homogeneous Orlicz-Sobolev space \(V^{1}L^{A}(\Omega)\) and its inhomogeneous counterpart \(W^{1}L^{A}(\Omega)\) are accordingly given by \[V^{1}L^{A}(\Omega)=\left\{u\in W^{1,1}_{\rm loc}(\Omega):\,|\nabla u|\in L^{A}(\Omega)\right\} \tag{3.13}\] and \[W^{1}L^{A}(\Omega)=L^{A}(\Omega)\cap V^{1}L^{A}(\Omega). \tag{3.14}\] The latter is a Banach space endowed with the norm \[\|u\|_{W^{1,A}(\Omega)}=\|u\|_{L^{A}(\Omega)}+\|\nabla u\|_{L^{A}(\Omega)}. \tag{3.15}\] Here, and in what follows, we use the notation \(\|\nabla u\|_{L^{A}(\Omega)}\) as a shorthand for \(\|\,|\nabla u|\,\|_{L^{A}(\Omega)}\). The local versions \(V^{1}_{\rm loc}K^{A}(\Omega)\), \(W^{1}_{\rm loc}K^{A}(\Omega)\), \(V^{1}_{\rm loc}L^{A}(\Omega)\), and \(W^{1}_{\rm loc}L^{A}(\Omega)\) of these sets/spaces are obtained by modifying the above definitions as usual.
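As an elementary illustration of the Luxemburg norm (3.8), recorded here only as a consistency check, take \(A(t)=t^{p}\) with \(p\in[1,\infty)\): then \[\int_{\Omega}A\Big{(}\frac{|u|}{\lambda}\Big{)}\,dx=\lambda^{-p}\int_{\Omega}|u|^{p}\,dx\leq 1\quad\text{if and only if}\quad\lambda\geq\Big{(}\int_{\Omega}|u|^{p}\,dx\Big{)}^{\frac{1}{p}},\] so that \(\|u\|_{L^{A}(\Omega)}=\|u\|_{L^{p}(\Omega)}\), in accordance with the identification \(L^{A}(\Omega)=L^{p}(\Omega)\) recalled above.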
In the case when \(L^{A}(\Omega)=L^{p}(\Omega)\) for some \(p\in[1,\infty]\), the standard Sobolev space \(W^{1,p}(\Omega)\) and its homogeneous version \(V^{1,p}(\Omega)\) are recovered. Orlicz and Orlicz-Sobolev classes of weakly differentiable functions \(u\) defined on the \((n-1)\)-dimensional unit sphere \(\mathbb{S}^{n-1}\) in \(\mathbb{R}^{n}\) also enter our approach. These spaces are defined as in (3.7), (3.8), (3.11), (3.13), and (3.14), with the Lebesgue measure replaced with the \((n-1)\)-dimensional Hausdorff measure \(\mathcal{H}^{n-1}\), and \(\nabla u\) replaced with \(\nabla_{\mathbb{S}}u\), the vector field on \(\mathbb{S}^{n-1}\) whose components are the covariant derivatives of \(u\). As highlighted in the previous section, sharp embedding theorems and corresponding inequalities in Orlicz-Sobolev spaces play a critical role in the formulation of our result and in its proof. As shown in [10] (see also [10] for an equivalent version), the optimal \(n\)-dimensional Sobolev conjugate of a Young function \(A\) fulfilling \[\int_{0}\!\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt<\infty \tag{3.16}\] is the Young function \(A_{n}\) defined as \[A_{n}(t)=A(H_{n}^{-1}(t))\qquad\text{for }t\geq 0, \tag{3.17}\] where the function \(H_{n}:[0,\infty)\to[0,\infty)\) is given by \[H_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\qquad\text{for $s\geq 0$}. \tag{3.18}\] The function \(A_{n-1}\) is defined analogously, by replacing \(n\) with \(n-1\) in equations (3.17) and (3.18). In the statements of Theorems 2.1 and 2.2, the functions \(A_{n}\) and \(A_{n-1}\) are defined after modifying \(A\) near \(0\), if necessary, in such a way that condition (3.16) be satisfied. Assumptions (2.3) and (2.13) are not affected by the choice of the modified function \(A\), thanks to the presence of the additive constant \(L\). Membership of a function in an Orlicz-Sobolev local class or space associated with \(A\) is also not influenced by this choice, inasmuch as the behavior of \(A\) near \(0\) is irrelevant (up to additive and/or multiplicative constants) whenever integrals or norms over sets with finite measure are concerned. An optimal Sobolev-Poincare inequality on balls \(\mathbb{B}_{r}\subset\mathbb{R}^{n}\), centered at \(0\) and with radius \(r\), reads as follows. In its statement, we adopt the notation \[u_{\mathbb{B}_{r}}=\fint_{\mathbb{B}_{r}}u(x)\,dx,\] where \(\fint\) stands for integral average. **Theorem A**.: _Let \(n\geq 2\), let \(r>0\), and let \(A\) be a Young function fulfilling condition (3.16). Then, there exists a constant \(\kappa=\kappa(n)\) such that_ \[\int_{\mathbb{B}_{r}}A_{n}\!\left(\frac{|u-u_{\mathbb{B}_{r}}|}{\kappa\big{(}\int_{\mathbb{B}_{r}}A(|\nabla u|)dy\big{)}^{\frac{1}{n}}}\right)dx\leq\int_{\mathbb{B}_{r}}A(|\nabla u|)\,dx \tag{3.19}\] _for every \(u\in V^{1}K^{A}(\mathbb{B}_{r})\)._ As a consequence of inequality (3.19) and of Lemma 4.1, Section 4, the following inclusion holds: \[V^{1}_{\mathrm{loc}}K^{A}(\Omega)\subset K^{A}_{\mathrm{loc}}(\Omega) \tag{3.20}\] for any open set \(\Omega\subset\mathbb{R}^{n}\) and any Young function \(A\). Thereby, \[V^{1}_{\mathrm{loc}}K^{A}(\Omega)=W^{1}_{\mathrm{loc}}K^{A}(\Omega).\] Hence, in what follows, the spaces \(V^{1}_{\mathrm{loc}}K^{A}(\Omega)\) and \(W^{1}_{\mathrm{loc}}K^{A}(\Omega)\) will be equally used.
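To illustrate the construction (3.17)-(3.18), consider the model case \(A(t)=t^{p}\) with \(1\leq p<n\); the following computation is included only for the reader's convenience. One has \[H_{n}(s)=\bigg{(}\int_{0}^{s}t^{\frac{1-p}{n-1}}\,dt\bigg{)}^{\frac{n-1}{n}}=\Big{(}\tfrac{n-1}{n-p}\Big{)}^{\frac{n-1}{n}}s^{\frac{n-p}{n}}\qquad\text{for }s\geq 0,\] whence \(H_{n}^{-1}(t)=\big{(}\tfrac{n-p}{n-1}\big{)}^{\frac{n-1}{n-p}}t^{\frac{n}{n-p}}\) and \[A_{n}(t)=A(H_{n}^{-1}(t))=\Big{(}\tfrac{n-p}{n-1}\Big{)}^{\frac{(n-1)p}{n-p}}t^{\frac{np}{n-p}}\qquad\text{for }t\geq 0,\] so that \(A_{n}(t)\approx t^{\frac{np}{n-p}}\), in agreement with Example 2.1. The same computation with \(n\) replaced by \(n-1\) yields \(A_{n-1}(t)\approx t^{\frac{(n-1)p}{(n-1)-p}}\) when \(p<n-1\).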
Besides the Sobolev-Poincare inequality of Theorem A, a Sobolev type inequality is of use in our applications and is the subject of the following theorem. Only Part (i) of the statement will be needed. Part (ii) substantiates inclusion (2.7). **Theorem B**.: _Let \(n\geq 2\), let \(r>0\), and let \(A\) be a Young function fulfilling condition (3.16)._ * _Assume that condition (_2.5_) holds. Then, there exists a constant_ \(\kappa=\kappa(n,r)\) _such that_ (3.21) \[\int_{\mathbb{B}_{r}}A_{n}\!\left(\frac{|u|}{\kappa\big{(}\int_{\mathbb{B}_{r} }A(|u|)+A(|\nabla u|)dy\big{)}^{\frac{1}{n}}}\right)dx\leq\int_{\mathbb{B}_{r }}A(|u|)+A(|\nabla u|)\,dx\] _for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{r})\)_._ * _Assume that condition (_2.6_) holds. Then, there exists a constant_ \(\kappa=\kappa(n,r,A)\) _such that_ (3.22) \[\|u\|_{L^{\infty}(\mathbb{B}_{r})}\leq\kappa\bigg{(}\int_{\mathbb{B}_{r}}A(|u |)+A(|\nabla u|)\,dx\bigg{)}^{\frac{1}{n}}\] _for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{r})\)_._ _In particular, if \(r\in[r_{1},r_{2}]\) for some \(r_{2}>r_{1}>0\), then the constant \(\kappa\) in inequalities (3.21) and (3.22) depends on \(r\) only via \(r_{1}\) and \(r_{2}\)._ A counterpart of Theorem B for Orlicz-Sobolev functions on the sphere \(\mathbb{S}^{n-1}\) takes the following form. **Theorem C**.: _Let \(n\geq 2\) and let \(A\) be a Young function such that_ \[\int_{0}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2}}dt<\infty \tag{3.23}\] _if \(n\geq 3\)._ 1. _Assume that_ \(n\geq 3\) _and_ (3.24) \[\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2}}dt=\infty.\] _Then, there exists a constant_ \(\kappa=\kappa(n)\) _such that_ (3.25) \[\int_{\mathbb{S}^{n-1}}A_{n-1}\bigg{(}\frac{|u|}{\kappa\big{(}\int_{\mathbb{S} ^{n-1}}A(|u|)+A(|\nabla_{\mathbb{S}}u|)d\mathcal{H}^{n-1}(y)\big{)}^{\frac{1}{n -1}}}\bigg{)}\,d\mathcal{H}^{n-1}(x)\leq\int_{\mathbb{S}^{n-1}}A(|u|)+A(| \nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1}(x)\] _for_ \(u\in W^{1}K^{A}(\mathbb{S}^{n-1})\)_._ 2. _Assume that one of the following situations occurs:_ (3.26) \[\begin{cases}n=2&\text{and}\quad\lim_{t\to 0^{+}}\frac{A(t)}{t}>0\\ \\ n\geq 3&\text{and}\quad\int^{\infty}\left(\frac{t}{A(t)}\right)^{\frac{1}{n-2 }}dt<\infty.\end{cases}\] _Then, there exists a constant_ \(\kappa=\kappa(n,A)\) _such that_ (3.27) \[\|u\|_{L^{\infty}(\mathbb{S}^{n-1})}\leq\kappa\bigg{(}\int_{\mathbb{S}^{n-1}} A(|u|)+A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1}(x)\bigg{)}^{\frac{1}{n-1}}\] _for_ \(u\in W^{1}K^{A}(\mathbb{S}^{n-1})\)_._ Theorems A and B are special cases of [15, Theorems 4.4 and 3.1], respectively, which hold in any Lipschitz domain in \(\mathbb{R}^{n}\) (and for Orlicz-Sobolev spaces of arbitrary order). The assertions about the dependence of the constants can be verified via a standard scaling argument. Theorem C can be derived via arguments analogous to those in the proof of [15, Theorem 3.1]. For completeness, we offer the main steps of the proof. Proof of Theorem C.: _Part (i)._ Let us set \[u_{\mathbb{S}^{n-1}}=\fint_{\mathbb{S}^{n-1}}u(x)\,d\mathcal{H}^{n-1}(x).\] A key step is a Sobolev-Poincare type inequality, a norm version of (3.19) on \(\mathbb{S}^{n-1}\), which tells us that \[\|u-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq c\|\nabla_{ \mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})} \tag{3.28}\] for some constant \(c=c(n)\) and for \(u\in V^{1}L^{A}(\mathbb{S}^{n-1})\). A proof of inequality (3.28) rests upon the following symmetrization argument combined with a one-dimensional Hardy-type inequality in Orlicz spaces. 
Set \[c_{n}=\mathcal{H}^{n-1}(\mathbb{S}^{n-1}) \tag{3.29}\] and denote by \(u^{\circ}:[0,c_{n}]\to[-\infty,\infty]\) the signed decreasing rearrangement of \(u\), defined by \[u^{\circ}(s)=\inf\{t\in\mathbb{R}:\mathcal{H}^{n-1}(\{u>t\})\leq s\}\quad\text{for $s\in[0,c_{n}]$}.\] Moreover, define the signed symmetral \(u^{\sharp}:\mathbb{S}^{n-1}\to[-\infty,\infty]\) of \(u\) as \[u^{\sharp}(x)=u^{\circ}(V(x))\quad\text{for $x\in\mathbb{S}^{n-1}$},\] where \(V(x)\) denotes the \(\mathcal{H}^{n-1}\)-measure of the spherical cap on \(\mathbb{S}^{n-1}\), centered at the north pole on \(\mathbb{S}^{n-1}\), whose boundary contains \(x\). Thus, \(u^{\sharp}\) is a function, which is equimeasurable with \(u\), and whose level sets are spherical caps centered at the north pole. The equimeasurability of the functions \(u\), \(u^{\circ}\) and \(u^{\sharp}\) ensures that \[\|u-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}=\|u^{\sharp}-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}=\|u^{\circ}-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(0,c_{n})}. \tag{3.30}\] Moreover, since \(u^{\circ}(c_{n}/2)\) is a median of \(u^{\circ}\) on \((0,c_{n})\) and \(u_{\mathbb{S}^{n-1}}\) agrees with the mean value of \(u^{\circ}\) over \((0,c_{n})\), one has that \[\|u^{\circ}-u^{\circ}(c_{n}/2)\|_{L^{A_{n-1}}(0,c_{n})}\geq\tfrac{1}{2}\|u^{\circ}-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(0,c_{n})}=\tfrac{1}{2}\|u-u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}, \tag{3.31}\] see e.g. [CMP, Lemma 2.2]. On the other hand, a version of the Polya-Szego principle on \(\mathbb{S}^{n-1}\) tells us that \(u^{\circ}\) is locally absolutely continuous, \(u^{\sharp}\in V^{1}L^{A}(\mathbb{S}^{n-1})\), and \[\left\|I_{\mathbb{S}^{n-1}}(s)\,\Big{(}-\frac{du^{\circ}}{ds}\Big{)}\right\|_{L^{A}(0,c_{n})}=\|\nabla_{\mathbb{S}}u^{\sharp}\,\|_{L^{A}(\mathbb{S}^{n-1})}\leq\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})}, \tag{3.32}\] where \(I_{\mathbb{S}^{n-1}}:[0,c_{n}]\to[0,\infty)\) denotes the isoperimetric function of \(\mathbb{S}^{n-1}\) (see [BrZi]). It is well-known that there exists a positive constant \(c=c(n)\) such that \[I_{\mathbb{S}^{n-1}}(s)\geq c\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\quad\text{ for }s\in(0,c_{n}). \tag{3.33}\] Hence, \[c\bigg{\|}\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\Big{(}-\frac{du^{\circ}}{ds}\Big{)}\bigg{\|}_{L^{A}(0,c_{n})}\leq\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})}. \tag{3.34}\] The absolute continuity of \(u^{\circ}\) ensures that \[u^{\circ}(s)-u^{\circ}(c_{n}/2)=\int_{s}^{c_{n}/2}\bigg{(}-\frac{du^{\circ}}{dr}\bigg{)}\,dr\qquad\text{for }s\in(0,c_{n}). \tag{3.35}\] Thanks to equations (3.30), (3.31), (3.34), (3.35), and to the symmetry of the function \(\min\{s,c_{n}-s\}^{\frac{n-2}{n-1}}\) about \(c_{n}/2\), inequality (3.28) is reduced to the inequality \[\bigg{\|}\int_{s}^{c_{n}/2}r^{-\frac{n-2}{n-1}}\phi(r)\,dr\bigg{\|}_{L^{A_{n-1}}(0,c_{n}/2)}\leq c\|\phi\|_{L^{A}(0,c_{n}/2)} \tag{3.36}\] for a suitable constant \(c=c(n)\) and for every \(\phi\in L^{A}(0,c_{n}/2)\). Inequality (3.36) is in turn a consequence of [Ci2, inequality (2.7)].
Next, by Lemma 4.2, Section 4, applied with \(n\) replaced with \(n-1\), \[\frac{1}{\widetilde{A}^{-1}(t)}\,\frac{1}{A_{n-1}^{-1}(t)}\leq\frac{1}{t^{\frac{n-2}{n-1}}}\quad\text{for }t>0.\] Hence, by inequality (3.10), with \(\Omega\) replaced with \(\mathbb{S}^{n-1}\), one has that \[\|u_{\mathbb{S}^{n-1}}\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})} =|u_{\mathbb{S}^{n-1}}|\|1\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq\frac{2}{c_{n}}\|u\|_{L^{A}(\mathbb{S}^{n-1})}\|1\|_{L^{\widetilde{A}}(\mathbb{S}^{n-1})}\|1\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\] \[=\frac{2}{c_{n}}\frac{1}{\widetilde{A}^{-1}(1/c_{n})}\frac{1}{A_{n-1}^{-1}(1/c_{n})}\|u\|_{L^{A}(\mathbb{S}^{n-1})}\leq\frac{2}{c_{n}^{\frac{1}{n-1}}}\|u\|_{L^{A}(\mathbb{S}^{n-1})}. \tag{3.37}\] Coupling inequality (3.28) with (3.37) and making use of the triangle inequality entails that \[\|u\|_{L^{A_{n-1}}(\mathbb{S}^{n-1})}\leq c\big{(}\|\nabla_{\mathbb{S}}u\|_{L^{A}(\mathbb{S}^{n-1})}+\|u\|_{L^{A}(\mathbb{S}^{n-1})}\big{)} \tag{3.38}\] for some constant \(c=c(n)\) and for \(u\in W^{1}L^{A}(\mathbb{S}^{n-1})\). Now set \[M=\int_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u|)+A(|u|)\,d\mathcal{H}^{n-1}(x),\] and apply inequality (3.38) with the function \(A\) replaced with the Young function \(A_{M}\) given by \[A_{M}(t)=\frac{A(t)}{M}\qquad\text{for }t\geq 0.\] Hence, \[\|u\|_{L^{(A_{M})_{n-1}}(\mathbb{S}^{n-1})}\leq c\big{(}\|\nabla_{\mathbb{S}}u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}+\|u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\big{)}, \tag{3.39}\] where \((A_{M})_{n-1}\) denotes the function obtained on replacing \(A\) with \(A_{M}\) in the definition of \(A_{n-1}\). The fact that the constant \(c\) in (3.38) is independent of \(A\) is of course crucial in deriving inequality (3.39). Observe that \[(A_{M})_{n-1}(t)=\frac{1}{M}A_{n-1}\Big{(}\frac{t}{M^{\frac{1}{n-1}}}\Big{)}\qquad\text{for }t\geq 0. \tag{3.40}\] On the other hand, by the definition of Luxemburg norm and the choice of \(M\), \[\|u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\leq 1\quad\text{and}\quad\|\nabla_{\mathbb{S}}u\|_{L^{A_{M}}(\mathbb{S}^{n-1})}\leq 1. \tag{3.41}\] Therefore, by the definition of Luxemburg norm again, inequality (3.39) tells us that \[\frac{1}{M}\int_{\mathbb{S}^{n-1}}A_{n-1}\bigg{(}\frac{|u(x)|}{2cM^{\frac{1}{n-1}}}\bigg{)}\,d\mathcal{H}^{n-1}(x)\leq 1.\] Hence, inequality (3.25) follows. _Part (ii)._ First, assume that \(n\geq 3\) and the integral condition in (3.26) holds. Let \(\overline{A}\) be the Young function defined as \[\overline{A}(t)=\bigg{(}t^{\frac{n-1}{n-2}}\,\int_{t}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr\bigg{)}^{\sim}\quad\text{ for }t\geq 0, \tag{3.42}\] where \((\,\cdot\,)^{\sim}\) stands for the Young conjugate of the function in parentheses. Notice that the convergence of the integral on the right-hand side of equation (3.42) is equivalent to the convergence of the integral in (3.26), see [14, Lemma 2.3]. Since we are assuming that \(A\) fulfills condition (3.23), the same lemma also ensures that \[\int_{0}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr<\infty. \tag{3.43}\] From [14, Theorem 4.1] one has that \[\overline{A}\big{(}c\|u-u_{\mathbb{S}^{n-1}}\|_{L^{\infty}(\mathbb{S}^{n-1})}\big{)}\leq\fint_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{n-1} \tag{3.44}\] for some positive constant \(c=c(n)\) and for \(u\in V^{1}K^{A}(\mathbb{S}^{n-1})\).
Furthermore, by Jensen's inequality, \[A\big{(}\|u_{\mathbb{S}^{n-1}}\|_{L^{\infty}(\mathbb{S}^{n-1})}\big{)}\leq A\bigg{(}\fint_{\mathbb{S}^{n-1}}|u|\,d\mathcal{H}^{n-1}\bigg{)}\leq\fint_{\mathbb{S}^{n-1}}A(|u|)\,d\mathcal{H}^{n-1}. \tag{3.45}\] Thanks to [14, Inequality (4.6)], \[\overline{A}(t)\leq A(t)\qquad\text{for }t\geq 0. \tag{3.46}\] Moreover, inequality (3.43) ensures that \[t^{\frac{n-1}{n-2}}\,\int_{t}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr\leq c\,t^{\frac{n-1}{n-2}}\qquad\text{for }t\geq 0, \tag{3.47}\] where we have set \[c=\int_{0}^{\infty}\frac{\widetilde{A}(r)}{r^{1+\frac{n-1}{n-2}}}\,dr.\] Taking the Young conjugates of both sides of inequality (3.47) results in \[\overline{A}(t)\geq ct^{n-1}\qquad\text{for }t\geq 0, \tag{3.48}\] for some constant \(c=c(n,A)\). Inequality (3.27) follows, via the triangle inequality, from inequalities (3.44), (3.45), (3.46) and (3.48). Assume next that \(n=2\) and the limit condition in (3.26) holds. If we denote by \(a\) this limit, then \[A(t)\geq at\qquad\text{for }t\geq 0. \tag{3.49}\] A simple one-dimensional argument, coupled with Jensen's inequality and the increasing monotonicity of the function \(tA^{-1}(1/t)\), shows that \[A\big{(}\tfrac{1}{2\pi}\|u-u_{\mathbb{S}^{1}}\|_{L^{\infty}(\mathbb{S}^{1})}\big{)}\leq\fint_{\mathbb{S}^{1}}A(|\nabla_{\mathbb{S}}u|)\,d\mathcal{H}^{1} \tag{3.50}\] for \(u\in V^{1}K^{A}(\mathbb{S}^{1})\) (see [14, Inequality (4.8) and below]). Inequality (3.27) now follows from (3.45) (which holds also when \(n=2\)), (3.49) and (3.50). ## 4. Analytic lemmas Here, we collect a few technical lemmas about one-variable functions. We begin with two inequalities involving a Young function and its Sobolev conjugate. **Lemma 4.1**.: _Let \(n\geq 2\) and let \(A\) be a Young function fulfilling condition (3.16). Then, for every \(k>0\) there exists a positive constant \(c=c(k,A,n)\) such that_ \[A(t)\leq A_{n}(kt)+c\qquad\text{for }t\geq 0. \tag{4.1}\] Proof.: Fix \(k>0\). Since \(A_{n}(t)=A(H_{n}^{-1}(t))\) and \(\lim_{t\to\infty}\frac{H_{n}^{-1}(t)}{t}=\infty\), there exists \(t_{0}>0\) such that \(A(t)\leq A_{n}(kt)\) for \(t\geq t_{0}\). Inequality (4.1) hence follows, with \(c=A(t_{0})\). **Lemma 4.2**.: _Let \(n\geq 2\) and let \(A\) be a Young function fulfilling condition (3.16). Then,_ \[\frac{1}{\widetilde{A}^{-1}(t)}\frac{1}{A_{n}^{-1}(t)}\leq\frac{1}{t^{\frac{1}{n^{\prime}}}}\qquad\text{for }t>0. \tag{4.2}\] Proof.: Holder's inequality and property (3.2) imply that \[t =\int_{0}^{t}\left(\frac{A(r)}{r}\right)^{\frac{1}{n}}\!\left(\frac{r}{A(r)}\right)^{\frac{1}{n}}dr\leq\left(\int_{0}^{t}\frac{A(r)}{r}\,dr\right)^{\frac{1}{n}}\!\left(\int_{0}^{t}\left(\frac{r}{A(r)}\right)^{\frac{1}{n-1}}dr\right)^{\frac{1}{n^{\prime}}}\] \[\leq\left(\frac{A(t)}{t}\right)^{\frac{1}{n}}\!t^{\frac{1}{n}}H_{n}(t)=A(t)^{\frac{1}{n}}H_{n}(t)\quad\text{for }t>0. \tag{4.3}\] Hence, \[A^{-1}(t)\leq t^{\frac{1}{n}}H_{n}(A^{-1}(t))\quad\text{for }t\geq 0. \tag{4.4}\] The first inequality in (3.3) and inequality (4.4) imply that \[t\leq A^{-1}(t)\widetilde{A}^{-1}(t)\leq t^{\frac{1}{n}}H_{n}(A^{-1}(t))\widetilde{A}^{-1}(t)\quad\text{for }t\geq 0. \tag{4.5}\] Hence, inequality (4.2) follows, since \(A_{n}^{-1}(t)=H_{n}(A^{-1}(t))\) for \(t\geq 0\).
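As an elementary consistency check for Lemma 4.2, not needed in the sequel, take \(A(t)=t^{p}\) with \(1<p<n\). Then \(\widetilde{A}(t)\approx t^{p^{\prime}}\) and \(A_{n}(t)\approx t^{\frac{np}{n-p}}\) (cf. Example 2.1), whence \[\widetilde{A}^{-1}(t)\,A_{n}^{-1}(t)\approx t^{\frac{1}{p^{\prime}}}\,t^{\frac{n-p}{np}}=t^{\frac{n(p-1)+(n-p)}{np}}=t^{\frac{n-1}{n}}=t^{\frac{1}{n^{\prime}}},\] so that the two sides of inequality (4.2) are comparable, up to multiplicative constants, in the power case.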
The next result ensures that the functions \(A\), \(B\) and \(E\) appearing in assumption (2.13) can be modified near \(0\) in such a way that such an assumption is still fulfilled, possibly with a different constant \(L\), and the conditions imposed on \(A\), \(B\) and \(E\) in Theorem 2.2 are satisfied globally, instead of just near infinity. Of course, the same applies to the simpler conditions of Theorem 2.1, where the function \(E\) is missing. **Lemma 4.3**.: _Assume that the functions \(f\), \(A\), \(B\) and \(E\) are as in Theorem 2.2. Then, there exist two Young functions \(\widehat{A},\widehat{B}:[0,\infty)\to[0,\infty)\), an increasing function \(\widehat{E}:[0,\infty)\to[0,\infty)\), and constants \(\widehat{L}\geq 1\) and \(q>n\) such that:_ \[\widehat{A}(|\xi|)-\widehat{E}(|t|)-\widehat{L}\leq f(x,t,\xi)\leq\widehat{B}(|\xi|)+\widehat{E}(|t|)+\widehat{L}\qquad\text{for a.e.}\ x\in\Omega\text{, for every }t\in\mathbb{R}\text{, and every }\xi\in\mathbb{R}^{n}\text{,} \tag{4.6}\] (4.7) \[t^{\frac{n}{n-1}}\leq\widehat{L}\,\widehat{A}_{n}(t)\quad\text{ for }t\geq 0\text{,}\] (4.8) \[\lim_{t\to 0^{+}}\frac{\widehat{A}(t)}{t}>0,\] (4.9) \[\widehat{E}(2t)\leq\widehat{L}\widehat{E}(t)\quad\text{for }t\geq 0\text{,}\] (4.10) \[\widehat{E}(t)\leq\widehat{A}_{n}(\widehat{L}t)\quad\text{for }t\geq 0\text{,}\] (4.11) \[\widehat{B}(\lambda t)\leq\lambda^{q}\widehat{B}(t)\quad\text{for }t\geq 0\text{ and }\lambda\geq 1\text{.}\] _Moreover, if assumption (2.8) is in force, then the function \(B\) satisfies assumption (2.9) and_ \[\widehat{B}(t)\leq\widehat{A}_{n-1}(\widehat{L}t)\qquad\text{for }t\geq 0\text{;} \tag{4.12}\] _if assumption (2.10) is in force, then_ \[\widehat{B}(t)\leq\widehat{L}t^{q}\quad\text{for }t\geq 0. \tag{4.13}\] _Here, \(\widehat{A}_{n-1}\) and \(\widehat{A}_{n}\) denote the functions defined as \(A_{n-1}\) and \(A_{n}\), with \(A\) replaced with \(\widehat{A}\)._ Proof.: _Step 1._ _Construction of \(\widehat{A}\)._ Denote by \(t_{1}\) the maximum among \(1\), the constant \(t_{0}\) appearing in inequalities (2.14) and (2.9), and the lower bound for \(t\) in the definition of the \(\Delta_{2}\)-condition near infinity for the functions \(B\) and \(E\). Let us set \(a=\frac{A(t_{1})}{t_{1}}\), and define the Young function \(\widehat{A}\) as \[\widehat{A}(t)=\begin{cases}at&\text{if }0\leq t<t_{1}\\ A(t)&\text{if }t\geq t_{1}.\end{cases} \tag{4.14}\] Clearly, \(\widehat{A}\) satisfies property (4.8) and \[A(t)\leq\widehat{A}(t)\qquad\text{for }t\geq 0. \tag{4.15}\] Also, the convexity of \(A\) ensures that \[\widehat{A}(t)\geq at\quad\text{for }t\geq 0. \tag{4.16}\] Since \[\widehat{H}_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\qquad\text{for }s\geq 0,\] we deduce that \[\widehat{H}_{n}(s)\leq a^{-\frac{1}{n}}s^{\frac{n-1}{n}}\quad\text{for }s\geq 0,\] whence \[a^{\frac{1}{n-1}}t^{\frac{n}{n-1}}\leq\widehat{H}_{n}^{-1}(t)\quad\text{for }t\geq 0.\] Inasmuch as \(\widehat{A}_{n}=\widehat{A}\circ\widehat{H}_{n}^{-1}\), the latter inequality and inequality (4.16) yield: \[\widehat{A}_{n}(t)\geq\widehat{A}\big{(}a^{\frac{1}{n-1}}t^{\frac{n}{n-1}}\big{)}\geq(at)^{\frac{n}{n-1}}\quad\text{for }t\geq 0.\] This shows that inequality (4.7) holds for sufficiently large \(\widehat{L}\). For later reference, also note that \[\widehat{A}_{n}(t)=(at)^{\frac{n}{n-1}}\quad\text{for }t\in[0,t_{1}]. \tag{4.17}\] Next, we have that \[A_{n}(t)\leq\widehat{A}_{n}(t)\qquad\text{for }t\geq 0.
\tag{4.18}\] Indeed, inequality (4.15) implies that \[\widehat{H}_{n}(s)\leq H_{n}(s)\qquad\text{for }s\geq 0.\] Thus, \(H_{n}^{-1}(t)\leq\widehat{H}_{n}^{-1}(t)\) for \(t\geq 0\), whence inequality (4.18) follows, on making use of (4.15) again. Moreover, there exists \(t_{2}\geq t_{1}\), depending on \(n\) and \(A\), such that \[\widehat{A}_{n}(t)\leq A_{n}(2t)\quad\text{for }t\geq t_{2}. \tag{4.19}\] Actually, if \(s\geq t_{1}\) is sufficiently large, then \[\widehat{H}_{n}(s)=\left(\int_{0}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\geq\left(\int_{t_{1}}^{s}\!\left(\frac{t}{\widehat{A}(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}=\left(\int_{t_{1}}^{s}\!\left(\frac{t}{A(t)}\right)^{\frac{1}{n-1}}dt\right)^{\frac{n-1}{n}}\geq\frac{1}{2}H_{n}(s).\] Observe that the last inequality holds, for large \(s\), thanks to assumption (2.5). Hence, \(\widehat{H}_{n}^{-1}(t)\leq H_{n}^{-1}(2t)\) for sufficiently large \(t\) and thereby \[\widehat{A}_{n}(t)=\widehat{A}(\widehat{H}_{n}^{-1}(t))=A(\widehat{H}_{n}^{-1}(t))\leq A(H_{n}^{-1}(2t))=A_{n}(2t)\quad\text{for }t\geq t_{2},\] provided that \(t_{2}\) is sufficiently large. Inequality (4.19) is thus established. _Step 2._ _Construction of \(\widehat{B}\)._ First, consider the case when (2.8) and (2.9) hold. Since \(B\) is a Young function, there exists \(t_{3}\geq t_{2}\), where \(t_{2}\) is the number from Step 1, such that \(B(t_{3})>A_{n-1}(t_{1})\). Define the Young function \(\widehat{B}\) as \[\widehat{B}(t)=\begin{cases}\widehat{A}_{n-1}(t)&\text{if }0\leq t<t_{2}\\ \frac{t_{3}-t}{t_{3}-t_{2}}\widehat{A}_{n-1}(t_{2})+\frac{t-t_{2}}{t_{3}-t_{2}}B(t_{3})&\text{if }t_{2}\leq t<t_{3}\\ B(t)&\text{if }t\geq t_{3}.\end{cases}\] We claim that inequality (4.12) holds with this choice of \(\widehat{B}\), provided that \(\widehat{L}\) is large enough. If \(t\in[0,t_{2})\), the inequality in question is trivially satisfied with \(\widehat{L}=1\). If \(t\in[t_{2},t_{3})\), then \[\widehat{B}(t)\leq\widehat{B}(t_{3})=B(t_{3})\leq A_{n-1}(Lt_{3})\leq\widehat{A}_{n-1}(Lt_{3})\leq\widehat{A}_{n-1}((Lt_{3}/t_{2})t),\] where the third inequality holds thanks to (4.18). Finally, if \(t>t_{3}\), then \[\widehat{B}(t)=B(t)\leq A_{n-1}(Lt)\leq\widehat{A}_{n-1}(Lt).\] Altogether, inequality (4.12) is fulfilled with \(\widehat{L}=\max\left\{1,\frac{Lt_{3}}{t_{2}}\right\}\). In order to establish inequality (4.11), it suffices to show that \(\widehat{B}\) satisfies the \(\Delta_{2}\)-condition globally. Since \(\widehat{B}\) is a Young function, this condition is in turn equivalent to the fact that there exists a constant \(c\) such that \[\frac{t\widehat{B}^{\prime}(t)}{\widehat{B}(t)}\leq c\quad\text{for a.e. }t>0. \tag{4.20}\] Since \(B\) is a Young function satisfying the \(\Delta_{2}\)-condition near infinity, and \(\widehat{B}(t)=B(t)\) for large \(t\), condition (4.20) certainly holds for large \(t\). On the other hand, since \[\lim_{t\to 0+}\frac{t\widehat{B}^{\prime}(t)}{\widehat{B}(t)}=\lim_{t\to 0+}\frac{t\widehat{A}^{\prime}_{n-1}(t)}{\widehat{A}_{n-1}(t)}=\frac{n-1}{n-2},\] condition (4.20) also holds for \(t\) close to \(0\). Hence, it holds for every \(t>0\). Next, consider the case when (2.10) holds. The \(\Delta_{2}\)-condition near infinity for \(B\) implies that there exist constants \(q>1\), \(t_{4}>1\) and \(c>0\) such that \(B(t)\leq ct^{q}\) for all \(t\geq t_{4}\). Since \(t_{4}>1\), we may suppose, without loss of generality, that \(q>n\).
Since \(B(t)\leq\widehat{L}(t^{q}+1)\) for \(t\geq 0\), provided that \(\widehat{L}\) is sufficiently large, the choice \(\widehat{B}(t)=\widehat{L}t^{q}\) makes inequalities (4.11) and (4.13) true. _Step 3._ _Construction of \(\widehat{E}\)._ We define \(\widehat{E}\) analogously to \(\widehat{B}\), by replacing \(B\) with \(E\) and \(\widehat{A}_{n-1}\) with \(\widehat{A}_{n}\). The same argument as in Step 2 tells us that inequalities (4.9) and (4.10) hold for a suitable choice of the constant \(\widehat{L}\). _Step 4._ _Conclusion._ Since \[f(x,t,\xi)\leq B(|\xi|)+E(|t|)+L\leq\widehat{B}(|\xi|)+\widehat{E}(|t|)+B(t_{3})+E(t_{3})+L\] and \[f(x,t,\xi)\geq A(|\xi|)-E(|t|)-L\geq\widehat{A}(|\xi|)-\widehat{E}(|t|)-A(t_{1})-E(t_{3})-L,\] for a.e. \(x\in\Omega\), and for every \(t\in\mathbb{R}\) and \(\xi\in\mathbb{R}^{n}\), equation (4.6) follows, provided that \(\widehat{L}\) is chosen sufficiently large. We conclude this section by recalling the following classical lemma - see e.g. [Giu, Lemma 6.1]. **Lemma 4.4**.: _Let \(Z:[\rho,\sigma]\to[0,\infty)\) be a bounded function. Assume that there exist constants \(a,b\geq 0\), \(\alpha>0\) and \(\theta\in[0,1)\) such that_ \[Z(r)\leq\theta Z(s)+(s-r)^{-\alpha}a+b\quad\text{if }\rho\leq r<s\leq\sigma.\] _Then,_ \[Z(r)\leq c\big{(}(s-r)^{-\alpha}a+b\big{)}\quad\text{if }\rho\leq r<s\leq\sigma,\] _for some constant \(c=c(\alpha,\theta)>1\)._ ## 5. Proof of Theorem 2.2 We shall limit ourselves to proving Theorem 2.2, since the content of Theorem 2.1 is just a special case of the former. A key ingredient is provided by Lemma 5.1 below. In the statement, \(\Phi_{q}:[0,\infty)\to[0,\infty)\) denotes the function defined for \(q\geq 1\) as \[\Phi_{q}(t)=\begin{cases}t&\text{if }0\leq t<1\\ t^{q}&\text{if }t\geq 1.\end{cases} \tag{5.1}\] One can verify that \[\Phi_{q}(\lambda t)\leq\lambda^{q}\Phi_{q}(t)\qquad\text{for }\lambda\geq 1\text{ and }t\geq 0. \tag{5.2}\] Moreover, given a function \(u\in W^{1}K^{A}(\mathbb{B}_{1})\), we set \[F(u,\rho,\sigma)=\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|u|)+A(|\nabla u|)\,dx \tag{5.3}\] for \(0<\rho<\sigma<1\). **Lemma 5.1**.: _Let \(A\) and \(B\) be Young functions and \(0<\rho<\sigma<1\)._ 1. _Suppose that condition (_2.8_) is in force. Assume that there exist constants_ \(L\geq 1\) _and_ \(q>1\) _such that_ (5.4) \[B(t)\leq A_{n-1}(Lt)\quad\text{and}\quad B(\lambda t)\leq\lambda^{q}B(t)\quad\text{for $t\geq 0$ and $\lambda\geq 1$.}\] _Then, for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{1})\) _there exists a function_ \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) _satisfying_ (5.5) \[0\leq\eta\leq 1\ \text{in }\mathbb{B}_{1},\qquad\eta=1\ \text{in }\mathbb{B}_{\rho},\qquad\eta=0\ \text{in }\mathbb{B}_{1}\setminus\mathbb{B}_{\sigma},\] _and such that_ (5.6) \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq c\,\Phi_{q}\bigg{(}\frac{\kappa F(u,\rho,\sigma)^{\frac{1}{n-1}}}{(\sigma-\rho)^{\frac{1}{n-1}}\rho}\bigg{)}F(u,\rho,\sigma)\] _for some constant_ \(c=c(n,q,L)\geq 1\)_. Here,_ \(\kappa\) _denotes the constant appearing in inequality (_3.25_)._ 2. _Suppose that condition (_3.26_) is in force. Assume that there exist constants_ \(L\geq 1\) _and_ \(q>n\) _such that_ (5.7) \[B(t)\leq Lt^{q}\qquad\text{ for all $t\geq 0$.}\] _Then, for every_ \(u\in W^{1}K^{A}(\mathbb{B}_{1})\) _there exists a function_ \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) _satisfying conditions (_5.5_), such that_ (5.8) \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq \frac{c\kappa^{q}F(u,\rho,\sigma)^{\frac{q}{n-1}}}{(\sigma-\rho)^{q-1+\frac{q}{n-1}}\rho^{q-(n-1)}}\] _for some constant_ \(c=c(n,q,L)\geq 1\)_.
Here,_ \(\kappa\) _denotes the constant appearing in inequality (_3.27_)._ Proof.: Let \(u\in W^{1}K^{A}(\mathbb{B}_{1})\). Define, for \(r\in[0,1]\), the function \(u_{r}:\mathbb{S}^{n-1}\to\mathbb{R}\) as \(u_{r}(z)=u(rz)\) for \(z\in\mathbb{S}^{n-1}\). By classical properties of restrictions of Sobolev functions to \((n-1)\)-dimensional concentric spheres, one has that \(u_{r}\) is a weakly differentiable function for a.e. \(r\in[0,1]\). Hence, by Fubini's theorem, there exists a set \(N\subset[0,1]\) such that \(|N|=0\), and \(u_{r}\in W^{1}K^{A}(\mathbb{S}^{n-1})\) for every \(r\in[0,1]\setminus N\). Set \[U_{1}=\bigg{\{}r\in[\rho,\sigma]\setminus N\,:\,\int_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u_{r}(z)|)\,d\mathcal{H}^{n-1}(z)\leq\frac{4}{(\sigma-\rho)r^{n-1}}\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u|)\,\mathrm{d}x\bigg{\}}. \tag{5.9}\] From Fubini's Theorem, the inequality \(|\nabla_{\mathbb{S}}u_{r}(z)|\leq|\nabla u(rz)|\) for \(\mathcal{H}^{n-1}\)-a.e. \(z\in\mathbb{S}^{n-1}\), and the very definition of the set \(U_{1}\) we infer that \[\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u|)\,dx= \int_{\rho}^{\sigma}r^{n-1}\int_{\mathbb{S}^{n-1}}A(|\nabla u(rz)|)\,d\mathcal{H}^{n-1}(z)\,dr\] \[\geq \int_{(\rho,\sigma)\setminus U_{1}}r^{n-1}\int_{\mathbb{S}^{n-1}}A(|\nabla_{\mathbb{S}}u_{r}(z)|)\,d\mathcal{H}^{n-1}(z)\,dr\] \[> \frac{4((\sigma-\rho)-|U_{1}|)}{(\sigma-\rho)}\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|\nabla u|)\,dx.\] Hence, \(|U_{1}|\geq\frac{3}{4}(\sigma-\rho)\). An analogous computation ensures that the set \[U_{2}=\bigg{\{}r\in[\rho,\sigma]\setminus N\,:\,\int_{\mathbb{S}^{n-1}}A(|u_{r}(z)|)\,d\mathcal{H}^{n-1}(z)\leq\frac{4}{(\sigma-\rho)r^{n-1}}\int_{\mathbb{B}_{\sigma}\setminus\mathbb{B}_{\rho}}A(|u|)\,dx\bigg{\}} \tag{5.10}\] has the property that \(|U_{2}|\geq\frac{3}{4}(\sigma-\rho)\). Thereby, if we define the set \[U=U_{1}\cap U_{2},\] then \[|U|\geq|(\rho,\sigma)|-|(\rho,\sigma)\setminus U_{1}|-|(\rho,\sigma)\setminus U_{2}|\geq\frac{1}{2}(\sigma-\rho). \tag{5.11}\] Next, define the function \(\eta:\mathbb{B}_{1}\to[0,1]\) as \[\eta(x)=\begin{cases}1&\text{if }0\leq|x|<\rho\\ \frac{1}{|U|}\int_{|x|}^{\sigma}\chi_{U}(s)\,ds&\text{if }\rho\leq|x|\leq\sigma\\ 0&\text{if }\sigma<|x|\leq 1.\end{cases}\] One has that \(0\leq\eta\leq 1\), \(\eta=1\) in \(\mathbb{B}_{\rho}\), \(\eta=0\) in \(\mathbb{B}_{1}\setminus\mathbb{B}_{\sigma}\), \(\eta\in W_{0}^{1,\infty}(\mathbb{B}_{1})\) and \[|\nabla\eta(rz)|=\begin{cases}0&\text{for a.e. }r\notin U\\ \frac{1}{|U|}&\text{for a.e. }r\in U,\end{cases} \tag{5.12}\] and for \(z\in\mathbb{S}^{n-1}\). Hence, the function \(\eta\) satisfies the properties claimed in (5.5). Next, set, for \(r\in[0,1]\setminus N\), \[F_{r}(u)=\int_{\mathbb{S}^{n-1}}A(|u_{r}(z)|)+A(|\nabla_{\mathbb{S}}u_{r}(z)|)\,d\mathcal{H}^{n-1}(z). \tag{5.13}\] By the definition of the set \(U\), \[F_{r}(u)\leq\frac{4}{(\sigma-\rho)r^{n-1}}F(u,\rho,\sigma)\quad\text{for a.e. }r\in U. \tag{5.14}\] We have now to make use of different inequalities, depending on whether we deal with case (i) or (ii). Case (i). Owing to inequality (3.1) and to the second inequality in (5.4), \[B(\lambda t)\leq\Phi_{q}(\lambda)B(t)\qquad\quad\text{for }\lambda\geq 0\text{ and }t\geq 0.
\tag{5.15}\] The following chain holds: \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq \int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}B\bigg{(}\bigg{|}\frac{2}{ (\sigma-\rho)}u_{r}(z)\bigg{|}\bigg{)}\,d\mathcal{H}^{n-1}(z)\,dr\] \[= \int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}B\bigg{(}\bigg{|}\frac{2 \kappa u_{r}(z)F_{r}(u)^{\frac{1}{n-1}}}{\kappa(\sigma-\rho)F_{r}(u)^{\frac{1 }{n-1}}}\bigg{|}\bigg{)}\,d\mathcal{H}^{n-1}(z)\,dr\] \[\leq \int_{U}r^{n-1}\Phi_{q}\bigg{(}\bigg{|}\frac{2L\kappa F_{r}(u)^{ \frac{1}{n-1}}}{(\sigma-\rho)r}\bigg{|}\bigg{)}F_{r}(u)\,dr\] \[\leq \Phi_{q}\bigg{(}\bigg{|}\frac{2L\kappa\epsilon^{\frac{1}{n-1}}F (u,\rho,\sigma)^{\frac{1}{n-1}}}{(\sigma-\rho)^{1+\frac{1}{n-1}}\rho}\bigg{|} \bigg{)}4F(u,\rho,\sigma),\] where the second inequality holds by inequality (5.15) and the first inequality in (5.4), the third inequality follows from the Sobolev inequality (3.25), and the last inequality relies upon inequality (5.14) and the fact that \(|U|\leq(\sigma-\rho)\). Clearly, inequality (5.6) follows from (5.16). Case (ii). The following chain holds: \[\int_{\mathbb{B}_{1}}B(|u\nabla\eta(x)|)\,dx\leq L\int_{U}r^{n-1}\int_{\mathbb{S}^{n-1}}\bigg{|}\frac{2}{(\sigma-\rho)}u_{r} (z)\bigg{|}^{q}\,d\mathcal{H}^{n-1}(z)\,dr\] \[\leq \frac{L2^{q}\alpha_{n}\kappa^{q}}{(\sigma-\rho)^{q}}\int_{U}r^{n- 1}F_{r}(u)^{\frac{q}{n-1}}\,dr\] \[\leq \frac{L2^{q}\alpha_{n}\kappa^{q}}{(\sigma-\rho)^{q}}\int_{U}r^{n- 1}\bigg{(}\frac{4F(u,\rho,\sigma)}{(\sigma-\rho)r^{n-1}}\bigg{)}^{\frac{q}{n-1 }}\,dr\] \[\leq \frac{L2^{q}4\frac{q}{n-1}c_{n}\kappa^{q}}{(\sigma-\rho)^{q-1+ \frac{q}{n-1}}\rho\mu^{q-(n-1)}}F(u,\rho,\sigma)^{\frac{q}{n-1}},\] where \(c_{n}\) is given by (3.29), the first inequality holds by inequality (5.7), the second one by inequality (3.27), the third one by inequality (5.14), and the last one since \(|U|\leq(\sigma-\rho)\). Inequality (5.8) follows via (5.17). We are now in a position to accomplish the proof of our main result. Proof of Theorem 2.2.: Owing to Lemma 4.3, without loss of generality we can assume that the functions \(A\), \(B\) and \(E\) also satisfy the properties stated for the functions \(\widehat{A}\), \(\widehat{B}\) and \(\widehat{E}\) in the lemma. When we refer to properties in the statement of this lemma, we shall mean that they are applied directly to \(A\), \(B\) and \(E\). In particular, \(q\) denotes the exponent appearing in the statement of the lemma. Moreover, \(Q\) is the constant from the definition of quasi-minimizer. We also assume that \(\mathbb{B}_{1}\Subset\Omega\) and prove that \(u\) is bounded in \(\mathbb{B}_{\frac{1}{2}}\). The general case follows via a standard scaling and translation argument. For ease of presentation, we split the proof in steps. _Step 1. Basic energy estimate._ Set, for \(r>0\) and \(l>0\), \[\mathcal{A}_{l,r}=\mathbb{B}_{r}\cap\{x\in\Omega\,:\,u(x)>l\} \tag{5.18}\] and \[J(l,r)=\int_{\mathbb{B}_{r}}A((u-l)_{+})+A(|\nabla(u-l)_{+}|)\,dx. \tag{5.19}\] Here, the subscript "\(+\)" stands for the positive part. 
If assumption (2.8) holds, then we claim that there exists a constant \(c=c(n,q,L,Q)\geq 1\) such that \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx\leq c\bigg{(}\frac{\Phi_{q}(\kappa J(k,\sigma)^{\frac{1}{n-1}})}{( \sigma-\rho)^{\frac{q}{n-1}}}J(k,\sigma)+\int_{\mathcal{A}_{k,\sigma}}(E(|u|) +1)\,dx\bigg{)} \tag{5.20}\] for \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma<1\), where \(\kappa\) denotes the constant from inequality (3.25) If assumption (2.10) holds, then we claim that there exists a constant \(c=c(n,q,L,Q)\geq 1\) such that \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx\leq c\bigg{(}\frac{\kappa^{q}}{(\sigma-\rho)^{\frac{q}{n-1}}}J(k,\sigma)^{ \frac{q}{n-1}}+\int_{\mathcal{A}_{k,\sigma}}(E(|u|)+1)\,dx\bigg{)} \tag{5.21}\] for \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma<1\), where \(\kappa\) denotes the constant from inequality (3.27). We shall first establish inequalities (5.20) and (5.21) under assumption (2.11). Given \(k\geq 0\) and \(\frac{1}{2}\leq\rho<\sigma\leq 1\), let \(\eta\in W^{1,\infty}_{0}(\mathbb{B}_{1})\) be as in the statement of Lemma 5.1, applied with \(u\) replaced with \((u-k)_{+}\). Choose the function \(\varphi=-\eta^{q}(u-k)_{+}\) in the definition of quasi-minimizer for \(u\). From this definition and the first property in (2.11) one infers that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx \leq Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,\nabla(u+\varphi ))\,dx\] \[=Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,(1-\eta^{q})\nabla u -q\eta^{q-1}\nabla\eta(u-k))\,dx\] \[\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u+\varphi, \nabla u)+\eta^{q}f\bigg{(}x,u+\varphi,-\frac{q\nabla\eta}{\eta}(u-k)\bigg{)} \,dx\,.\] Hence, since \(0\leq u+\varphi\leq u\) on \(\mathcal{A}_{k,\sigma}\), the second property in (2.11), the upper bound in (2.13), and the monotonicity of the function \(E\) ensure that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k, \sigma}}(1-\eta^{q})\big{(}Lf(x,u,\nabla u)+E(u)+L\big{)}+\eta^{q}\bigg{(}B \bigg{(}\frac{q|\nabla\eta|}{\eta}(u-k)\bigg{)}+E(u)+L\bigg{)}\,dx. \tag{5.22}\] Inasmuch as \(0\leq\eta\leq 1\) and \(\eta=1\) in \(\mathbb{B}_{\rho}\), the use of inequality (4.11) on the right-hand side of (5.22) yields: \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq QL\int_{\mathcal{A}_{k, \sigma}\setminus\mathbb{B}_{\rho}}f(x,u,\nabla u)\,dx+Q\int_{\mathcal{A}_{k, \sigma}}q^{q}B\big{(}|\nabla\eta|(u-k)\big{)}+E(|u|)+L\,dx. \tag{5.23}\] Now, suppose that assumption (2.8) holds. Combining inequality (5.23) with estimate (5.6) (applied to \((u-k)_{+}\)) tells us that \[\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\leq QL\int_{\mathcal{A}_{k,\sigma}\setminus\mathbb{B}_{\rho}}f(x,u, \nabla u)\,dx+cQ\Phi_{q}\bigg{(}\frac{2\kappa J(k,\sigma)^{\frac{1}{n-1}}}{( \sigma-\rho)^{\frac{n}{n-1}}}\bigg{)}J(k,\sigma)+Q\int_{\mathcal{A}_{k,\sigma} }(E(u)+L)\,dx \tag{5.24}\] for some constant \(c=c(n,q,L)\geq 1\). Observe that in deriving inequality (5.24), we have exploited the inequalities \(\frac{1}{2}\leq\rho\) and \(F((u-k)_{+},\rho,\sigma)\leq J(k,\sigma)\). 
Adding the expression \(QL\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\) to both sides of inequality (5.24) and using inequality (5.2) enable one to deduce that \[\int_{\mathcal{A}_{k,\rho}}f(x,u,\nabla u)\,dx\leq\frac{QL}{QL+1}\int_{ \mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx+c\bigg{(}\frac{\Phi_{q}\big{(} \kappa J(k,\sigma)^{\frac{1}{n-1}}\big{)}}{(\sigma-\rho)^{\frac{qn}{n-1}}}J(k,\sigma)+\int_{\mathcal{A}_{k,\sigma}}(E(u)+1)\,dx\bigg{)},\] for some constant \(c=c(n,q,L,Q)\geq 1\). Estimate (5.20) follows via Lemma 4.4 and the lower bound in (2.13). Assume next that assumtption (2.10) holds. Hence, the full assumption (3.26) holds, thanks to equation (4.8). One can start again from (5.23), make use of inequality (5.8), and argue as above to obtain inequality (5.21). The fact that \[\frac{1}{(\sigma-\rho)^{q-1+\frac{q}{n-1}}}\leq\frac{1}{(\sigma-\rho)^{\frac{q n}{n-1}}},\] since \(\sigma-\rho\leq 1\), is relevant in this argument. It remains to prove inequalities (5.20) and (5.21) under the alternative structure condition (2.12). Let \(\varphi\) be as above, and observe that \(u+\varphi=\eta^{q}k+(1-\eta^{q})u\) on \(\mathcal{A}_{k,\sigma}\). Hence, by property (2.12), \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k,\sigma}}f(x,u+\varphi,\nabla(u+\varphi))\,dx\] \[= Q\int_{\mathcal{A}_{k,\sigma}}f\big{(}x,(1-\eta^{q})u+\eta^{q}k,(1-\eta^{q})\nabla u-q\eta^{q-1}\nabla\eta(u-k)\big{)}\,dx\] \[\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u,\nabla u)+\eta^ {q}f\bigg{(}x,k,-\frac{q\nabla\eta}{\eta}(u-k)\bigg{)}\,dx.\] Thanks to assumption (2.13) and the monotonicity of \(E\), which guarantees that \(E(k)\leq E(u)\) in \(\mathcal{A}_{k,\sigma}\), we obtain that \[\int_{\mathcal{A}_{k,\sigma}}f(x,u,\nabla u)\,dx\leq Q\int_{\mathcal{A}_{k,\sigma}}(1-\eta^{q})f(x,u,\nabla u)+\eta^{q}(L+E(u) +B\bigg{(}\frac{q|\nabla\eta|}{\eta}(u-k)\bigg{)}\,dx. \tag{5.25}\] A replacement of inequality (5.22) with (5.25) and an analogous argument as above yields the same conclusions. _Step 2. One-step improvement._ Let us set \[c_{B}=\max\{\kappa,1\},\] where \(\kappa\) denotes a constant, depending only on \(n\), such that inequality (3.21) holds for every \(r\in[\frac{1}{2},1]\). We claim that, if \(h>0\) is such that \[c_{B}LJ(h,\sigma)^{\frac{1}{n}}\leq 1, \tag{5.26}\] then \[J(k,\rho)\leq c\bigg{(}\frac{1}{(\sigma-\rho)^{\frac{qn}{n-1}}}+\frac{1}{(k-h)^{ \frac{n}{n-1}}}+L^{\log_{2}(\frac{k}{k-h})}\bigg{)}J(h,\sigma)^{1+\frac{1}{n} }\qquad\text{if }k>h, \tag{5.27}\] for a suitable constant \(c=c(n,q,L,Q,A)\geq 1\). To this purpose, fix \(h>0\) such that inequality (5.26) holds. We begin by showing that there exists a constant \(c=c(n,L)\) such that \[|\mathcal{A}_{k,\sigma}|\leq c\frac{J(h,\sigma)^{\frac{n+1}{n}}}{(k-h)^{\frac{n}{n-1}}} \qquad\text{if }k>h. \tag{5.28}\] Inequality (5.28) is a consequence of the following chain: \[|\mathcal{A}_{k,\sigma}|A_{n}(k-h)= \int_{\mathcal{A}_{k,\sigma}}A_{n}(k-h)\,dx\leq\int_{\mathcal{A}_ {k,\sigma}}A_{n}(u-h)\,dx\] \[\leq \int_{\mathcal{A}_{k,\sigma}}A_{n}\bigg{(}\frac{c_{B}(u-h)J(h, \sigma)^{\frac{1}{n}}}{c_{B}J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx\leq c_{B}J(h,\sigma)^{\frac{1}{n}}\int_{\mathcal{A}_{k,\sigma}}A_{n}\bigg{(}\frac{u-h}{c_{B }J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx. \tag{5.29}\] Notice that the last inequality holds thanks to inequality (3.1), applied with \(A\) replaced with \(A_{n}\), and to assumption (5.26). 
Coupling inequality (5.29) with inequality (3.21) enables us to deduce that \[|\mathcal{A}_{k,\sigma}|\leq \frac{c_{B}J(h,\sigma)^{\frac{n+1}{n}}}{A_{n}(k-h)}.\] Hence inequality (5.28) follows, via (4.7). Next, by the monotonicity of \(E\) and assumption (4.9), \[\int_{\mathcal{A}_{k,\sigma}}E(u)\,dx= \int_{\mathcal{A}_{k,\sigma}}E((u-k)+k)\,dx\leq\int_{\mathcal{A}_{ k,\sigma}}E(2(u-k))+E(2k)\,dx\] \[\leq L\int_{\mathcal{A}_{k,\sigma}}E(u-k)+E(k)\,dx\quad\text{for }k>0. \tag{5.30}\] From inequality (3.1) applied to \(A_{n}\) and assumption (5.26) one infers that \[\int_{\mathcal{A}_{k,\sigma}}E(u-k)\,dx\leq \int_{\mathcal{A}_{k,\sigma}}E(u-h)\,dx\leq\int_{\mathcal{A}_{k, \sigma}}A_{n}(L(u-h))\,dx\] \[\leq c_{B}LJ(h,\sigma)^{\frac{1}{n}}\int_{\mathcal{A}_{h,\sigma}}A_{n} \bigg{(}\frac{u-h}{c_{B}J(h,\sigma)^{\frac{1}{n}}}\bigg{)}\,dx\leq c_{B}LJ(h, \sigma)^{1+\frac{1}{n}}\quad\text{if }k>h. \tag{5.31}\] Owing to assumption (4.9) and chain (5.31), \[\int_{\mathcal{A}_{k,\sigma}}E(k)= E\bigg{(}\frac{k}{k-h}(k-h)\bigg{)}|\mathcal{A}_{k,\sigma}|\leq E\bigg{(}2^{\lfloor\log_{2}\frac{k}{k-h} \rfloor+1}(k-h)\bigg{)}|\mathcal{A}_{k,\sigma}|\] \[\leq L^{\log_{2}(\frac{k}{k-h})+1}E(k-h)|\mathcal{A}_{k,\sigma}|\leq L ^{\log_{2}(\frac{k}{k-h})+1}\int_{\mathcal{A}_{h,\sigma}}E(u-h)\,dx\] \[\leq L^{\log_{2}(\frac{k}{k-h})+1}c_{B}LJ(h,\sigma)^{1+\frac{1}{n}} \quad\text{if }k>h, \tag{5.32}\] where \(\lfloor\,\cdot\,\rfloor\) stands for integer part. Combining inequalities (5.30)-(5.32) yields: \[\int_{\mathcal{A}_{k,\sigma}}E(u)\,dx\leq cL^{\log_{2}(\frac{k}{k-h})}J(h, \sigma)^{\frac{n+1}{n}}\quad\text{if }k>h, \tag{5.33}\] for some constant \(c=c(n,L)\). From this point, the argument slightly differs depending on whether condition (2.8) or (3.26) holds. Assume first that (2.8) is in force. Assumption (5.26) implies that there exists a constant \(c=c(n,q,L)\) such that \[\Phi_{q}(\kappa J(k,\sigma)^{\frac{1}{n-1}})\leq cJ(k,\sigma)^{\frac{1}{n-1}} \quad\text{if }k>h, \tag{5.34}\] where \(\kappa\) is the constant from inequality (3.25). Making use of inequalities (5.28), (5.33) and (5.34) to estimate the right-hand side of (5.20) results in the following bound for its left-hand side: \[\int_{\mathbb{B}_{\rho}}A(|\nabla(u-k)_{+}|)\,dx \leq c\Bigg{(}\frac{J(h,\sigma)^{\frac{n}{n-1}}}{(\sigma-\rho)^{ \frac{n}{n-1}}}+\frac{J(h,\sigma)^{\frac{n+1}{n-1}}}{(k-h)^{\frac{n}{n-1}}}+L^ {\log_{2}(\frac{k}{k-h})}J(h,\sigma)^{\frac{n+1}{n}}\Bigg{)}\] \[\leq c^{\prime}\Bigg{(}\frac{1}{(\sigma-\rho)^{\frac{n}{n-1}}}+ \frac{1}{(k-h)^{\frac{n}{n-1}}}+L^{\log_{2}(\frac{k}{k-h})}\Bigg{)}J(h,\sigma) ^{\frac{n+1}{n}}\quad\text{if }k>h, \tag{5.35}\] for suitable constants \(c=c(n,q,L,Q)\geq 1\) and \(c^{\prime}=c^{\prime}(n,q,L,Q)\geq 1\). From inequality (4.1) we infer that \[\int_{\mathbb{B}_{\rho}}A((u-k)_{+})\,dx\leq\int_{\mathbb{B}_{\rho}}A_{n}((u-k) _{+})\,dx+c|\mathcal{A}_{k,\rho}|\leq\int_{\mathbb{B}_{\sigma}}A_{n}((u-h)_{+} )\,dx+c|\mathcal{A}_{k,\sigma}|\quad\text{if }k>h, \tag{5.36}\] for some constant \(c=c(n,A)\). A combination of the latter inequality with (5.28) and (5.29) tells us that \[\int_{\mathbb{B}_{\rho}}A((u-k)_{+})\,dx\leq cJ(h,\sigma)^{1+\frac{1}{n}}+c \frac{J(h,\sigma)^{1+\frac{1}{n}}}{(k-h)^{\frac{n}{n-1}}}\quad\text{if }k>h, \tag{5.37}\] for some constant \(c=c(n,L,A)\). Coupling inequaliy (5.35) with (5.37) yields (5.27). Assume now that condition (3.26) holds. 
Assumption (5.26) and the inequality \(q>n\) guarantee that there exists a constant \(c=c(n,q,L)\) such that \[J(k,\sigma)^{\frac{q}{n-1}}\leq cJ(k,\sigma)^{\frac{n+1}{n}}\quad\text{if }k>h. \tag{5.38}\] From inequalities (5.28), (5.33) and (5.38) one obtains (5.35) also in this case. Inequality (5.27) again follows via (5.35) and (5.37). _Step 3. Iteration._ Given \(K\geq 1\) and \(\ell\in\mathbb{N}\cup\{0\}\), set \[k_{\ell}=K(1-2^{-(\ell+1)}),\quad\sigma_{\ell}=\frac{1}{2}+\frac{1}{2^{\ell+2}},\quad\text{and}\quad J_{\ell}=J(k_{\ell},\sigma_{\ell}). \tag{5.39}\] Thanks to inequality (5.27), if \(\ell\in\mathbb{N}\) is such that \[c_{B}LJ_{\ell}^{\frac{1}{n}}\leq 1, \tag{5.40}\] then \[J_{\ell+1}\leq c\bigg{(}2^{\ell\frac{q}{n-1}}+K^{-\frac{n}{n-1}}2^{\ell\frac{n}{n -1}}+L^{\ell}\bigg{)}J_{\ell}^{1+\frac{1}{n}} \tag{5.41}\] for a suitable constant \(c=c(n,q,L,Q,A)\geq 1\). Clearly, inequality (5.41) implies that \[J_{\ell+1}\leq c_{2}2^{\gamma\ell}J_{\ell}^{1+\frac{1}{n}} \tag{5.42}\] where \(\gamma=\max\{q\frac{n}{n-1},\log_{2}L\}\) and \(c_{2}=c_{2}(n,q,L,Q,A)\geq 1\) is a suitable constant. Let \(\tau=\tau(n,q,L,Q,A)\in(0,1)\) be such that \[c_{2}2^{\gamma}\tau^{\frac{1}{n}}=1. \tag{5.43}\] Set \[\varepsilon_{0}=\min\{(c_{B}L)^{-n},\tau^{n}\}.\] We claim that, if \[J_{0}\leq\varepsilon_{0}, \tag{5.44}\] then \[J_{\ell}\leq\tau^{\ell}J_{0}\qquad\text{for every $\ell\in\mathbb{N}\cup\{0\}$}. \tag{5.45}\] We prove this claim by induction. The case \(\ell=0\) is trivial. Suppose that inequality (5.45) holds for some \(\ell\in\mathbb{N}\). Assumption (5.44) entails that \[c_{B}LJ_{\ell}^{\frac{1}{n}}\leq c_{B}L(\tau^{\ell}J_{0})^{\frac{1}{n}}\leq c _{B}L\varepsilon_{0}^{\frac{1}{n}}\leq 1.\] Therefore, thanks to equations (5.42), (5.45), and (5.43), \[J_{\ell+1}\leq c_{2}2^{\gamma\ell}J_{\ell}^{1+\frac{1}{n}}\leq c_{2}(2^{\gamma} \tau^{\frac{1}{n}})^{\ell}J_{0}^{\frac{1}{n}}(\tau^{\ell}J_{0})\leq c_{2}^{1- \ell}\varepsilon_{0}^{\frac{1}{n}}\tau^{\ell}J_{0}\leq\tau^{\ell+1}J_{0}. \tag{5.46}\] Notice that the last inequality holds thanks to the inequalities \(c_{2}\geq 1\), \(\ell\geq 1\), and \(\varepsilon_{0}\leq\tau^{n}\). Inequality (5.45), with \(\ell\) replaced with \(\ell+1\), follows from (5.46). _Step 4. Assumption (5.44) holds for large \(K\)._ Since \[J_{0}=J(K/2,\mathbb{B}_{\frac{3}{4}}),\] inequality (5.44) will follow, for sufficiently large \(K\), if we show that \[\lim_{k\to\infty}J(k,\mathbb{B}_{\frac{3}{4}})=0. \tag{5.47}\] Inasmuch as \(u\in V^{1}_{\mathrm{loc}}K^{A}(\Omega)\), from inclusion (3.20) we infer that \(\lim_{k\to\infty}|\mathcal{A}_{k,\frac{3}{4}}|=0\). Hence, the dominated convergence theorem guarantees that \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx=\lim _{k\to\infty}\int_{\mathcal{A}_{k,\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx=0. \tag{5.48}\] It thus suffices to show that \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A(|(u-k)_{+}|)\,dx=0. \tag{5.49}\] To this purpose, note that, by inequality (4.1) and the monotonicity of \(A_{n}\), \[\int_{\mathbb{B}_{\frac{3}{4}}}A(|(u-k)_{+}|)\,dx\leq c|\mathcal{A}_{k,\frac{3 }{4}}|+\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}(|(u-k)_{+}|)\,dx \tag{5.50}\] \[\leq c|\mathcal{A}_{k,\frac{3}{4}}|+\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\! \left(2\bigg{|}(u-k)_{+}-\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}dy\bigg{|} \right)dx+\int_{\mathbb{B}_{\frac{3}{2}}}A_{n}\!\left(2\bigg{|}\fint_{\mathbb{B }_{\frac{3}{4}}}(u-k)_{+}dy\bigg{|}\right)dx\] for some constant \(c=c(n,A)\). 
Moreover, \[\lim_{k\to\infty}\mathcal{A}_{k,\frac{3}{4}}=0,\] and \[\lim_{k\to\infty}\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\!\left(2\bigg{|}\fint_{ \mathbb{B}_{\frac{3}{4}}}(u-k)_{+}\bigg{|}\right)dx\leq\lim_{k\to\infty}| \mathbb{B}_{\frac{3}{4}}|A_{n}\!\left(\frac{2\|(u-k)_{+}\|_{L^{1}(\mathbb{B}_{ \frac{3}{4}})}}{|\mathbb{B}_{\frac{3}{4}}|}\right)=0.\] It remains to prove that the second addend on the rightmost side of chain (5.50) vanishes when \(k\to\infty\). Thanks to the limit in (5.48), for every \(\delta>0\) there exists \(k_{\delta}\in\mathbb{N}\) such that \[\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)\,dx\leq\delta\qquad\text{ if }k\geq k_{\delta}. \tag{5.51}\] Choose \(\delta\) in (5.51) such that \(2c_{B}\delta^{\frac{1}{n}}\leq 1\). Property (3.1) applied to \(A_{n}\), and the Sobolev-Poincare inequality in Orlicz spaces (3.19) applied to the function \((u-k)_{+}\) ensure that, if \(k>k_{\delta}\), then \[\int_{\mathbb{B}_{\frac{3}{4}}}A_{n}\!\left(2\bigg{|}(u-k)_{+}- \fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}\bigg{|}\right)dx\leq 2c_{B}\delta^{\frac{1}{n}}\int_{\mathbb{B}_{\frac{3}{4}}}A_{n} \!\left(\frac{|(u-k)_{+}-\fint_{\mathbb{B}_{\frac{3}{4}}}(u-k)_{+}dy|}{c_{B} \big{(}\int_{\mathbb{B}_{\frac{3}{4}}}A(|\nabla(u-k)_{+}|)dy\big{)}^{\frac{1} {n}}}\right)dx\] \[\leq 2c_{B}\delta^{\frac{1}{n}}\int_{\mathbb{B}_{\frac{3}{4}}}A(| \nabla(u-k)_{+}|)dx.\] Since the last integral tends to \(0\) as \(k\to\infty\), equation (5.49) is establsied. _Step 5. Conclusion._ Inequality (5.45) tells us that \(\inf_{\ell\in\mathbb{N}}J_{\ell}=0\). Hence, from the definitions of \(J_{\ell}\) and \(J(h,\sigma)\) we deduce that \[\int_{\mathbb{B}_{\frac{1}{2}}}A((u-K)_{+})\,dx\leq J(K,\mathbb{B}_{\frac{1}{2 }})\leq\inf_{\ell\in\mathbb{N}}J_{\ell}=0.\] Therefore, \(u\leq K\) a.e. in \(\mathbb{B}_{\frac{1}{2}}\). In order to prove a parallel lower bound for \(u\), observe that the function \(-u\) is a quasiminimizer of the functional defined as in (1.1), with the integrand \(f\) replaced with the integral \(\widetilde{f}\) given by \[\widetilde{f}(x,t,\xi)=f(x,-t,-\xi)\quad\text{ for }(x,t,\xi)\in\Omega\times \mathbb{R}\times\mathbb{R}^{n}.\] The structure conditions (2.11) and (2.12) and the growth condition (2.13) on the function \(f\) are inherited by the function \(\widetilde{f}\). An application of the above argument to the function \(-u\) then tells us that there exists a constant \(K^{\prime}>0\) such that \(-u\leq K^{\prime}\) a.e. in \(\mathbb{B}_{\frac{1}{2}}\). The proof is complete. ## Compliance with Ethical Standards **Funding**. This research was partly funded by: (i) GNAMPA of the Italian INdAM - National Institute of High Mathematics (grant number not available) (A. Cianchi); (ii) Research Project of the Italian Ministry of Education, University and Research (MIUR) Prin 2017 "Direct and inverse problems for partial differential equations: theoretical aspects and applications", grant number 201758MTR2 (A. Cianchi); **Conflict of Interest**. The authors declare that they have no conflict of interest.
2304.00045
PyQBench: a Python library for benchmarking gate-based quantum computers
We introduce PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers. PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use, command line interface (CLI) for running benchmarks using a predefined parametrized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones.
Konrad Jałowiecki, Paulina Lewandowska, Łukasz Pawela
2023-03-31T18:02:43
http://arxiv.org/abs/2304.00045v1
# PyQBench: a Python library for benchmarking gate-based quantum computers ###### Abstract We introduce PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers. PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use, command line interface (CLI) for running benchmarks using a predefined parametrized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones. keywords: Quantum computing, Benchmarking quantum computers, Discrimination of quantum measurements, Discrimination of von Neumann measurements, Open-source, Python programming Pacs: 03.67.-a, 03.67.Lx Msc: 81P68 + Footnote †: journal: SoftwareX ## Current code version ## 1 Motivation and significance Noisy Intermediate-Scale Quantum (NISQ) [1] devices are storming the market, with a wide selection of devices based on different architectures and accompanying software solutions. Among hardware providers offering public access to their gate-based devices, one could mention Rigetti [2], IBM [3], Oxford Quantum Group [4], IonQ [5] or Xanadu [6]. Other vendors offer devices operating in different paradigms. Notably, one could mention D-Wave [7] and their quantum annealers, or QuEra devices [8] based on neural atoms. Most vendors provide their own software stack and application programming interface for accessing their devices. To name a few, Rigetti's computers are available through their Forest SDK [9] and PyQuil library [10] and IBM Q [3] \begin{table} \begin{tabular}{|c|l|l|} \hline C1 & Current code version & 0.1.1 \\ \hline C2 & Permanent link to code/repository used for this code version & [https://github.com/iitis/PyQBench](https://github.com/iitis/PyQBench) \\ \hline C3 & Code Ocean compute capsule & [https://codeocean.com/capsule/89088992-9a27-4712-8525-d92a9b23060f/tree](https://codeocean.com/capsule/89088992-9a27-4712-8525-d92a9b23060f/tree) \\ \hline C4 & Legal Code License & Apache License 2.0 \\ \hline C5 & Code versioning system used & git \\ \hline C6 & Software code languages, tools, and services used & Python, Qiskit, AWS Braket \\ \hline C7 & Compilation requirements, operating environments \& dependencies & Python?= 3.8 \\ & & numpy \textasci{}= 1.22.0 \\ & & scipy \textasci{}= 1.7.0 \\ & & pandas \textasci{}= 1.5.0 \\ & & amazon-braket\textasci{}= 1.11.1 \\ & & pydantic \textasci{}= 1.9.1 \\ & & qiskit \textasci{}= 0.37.2 \\ & & mthree \textasci{}= 1.1.0 \\ & & tqdm \textasci{}= 4.64.1 \\ & & pyyaml \textasci{}= 6.0 \\ & & qiskit-braket\textasci{}-provider \textasci{}= 0.0.3 \\ \hline C8 & If available Link to developer & [https://pyqbench.readthedocs.io/en/latest/](https://pyqbench.readthedocs.io/en/latest/) \\ & documentation/manual & en/latest/ \\ \hline C9 & Support email for questions & dexter2206@gmail.com \\ \hline \end{tabular} \end{table} Table 1: Code metadata computers can be accessed through Qiskit [11] or IBM Quantum Experience web interface [12]. Some cloud services, like Amazon Braket [13], offer access to several quantum devices under a unified API. On top of that, several libraries and frameworks can integrate with multiple hardware vendors. Examples of such frameworks include IBM Q's Qiskit or Zapata Computing's Orquestra [14]. It is well known that NISQ devices have their limitations [15]. The question is to what extent those devices can perform meaningful computations? 
To answer this question, one has to devise a methodology for benchmarking them. For gate-based computers, on which this paper focuses, there already exist several approaches. One could mention randomized benchmarking [16; 17; 18; 19; 20], benchmarks based on the quantum volume [21; 22; 23]. In this paper, we introduce a different approach to benchmarking gate-based devices with a simple operational interpretation. In our method, we test how well the given device is at guessing which of the two known von Neumann measurements were performed during the experiment. We implemented our approach in an open-source Python library called PyQBench. The library supports any device available through the Qiskit library, and thus can be used with providers such as IBM Q or Amazon Braket. Along with the library, the PyQBench package contains a command line tool for running most common benchmarking scenarios. ## 2 Existing benchmarking methodologies and software Unsurprisingly, PyQBench is not the only software package for benchmarking gate-based devices. While we believe that our approach has significant benefits over other benchmarking techniques, for completeness, in this section we discuss some of the currently available similar software. Probably the simplest benchmarking method one could devise is simply running known algorithms and comparing outputs with the expected ones. Analyzing the frequency of the correct outputs, or the deviation between actual and expected outputs distribution provides then a metric of the performance of a given device. Libraries such as Munich Quantum Toolkit (MQT) [24; 25] or SupermarQ [26; 27] contain benchmarks leveraging multiple algorithms, such as Shor's algorithm or Grover's algorithm. Despite being intuitive and easily interpretable, such benchmarks may have some problems. Most importantly, they assess the usefulness of a quantum device only for a very particular algorithm, and it might be hard to extrapolate their results to other algorithms and applications. For instance, the inability of a device to consistently find factorizations using Shor's algorithms does not tell anything about its usefulness in Variational Quantum Algorithm's. Another possible approach to benchmarking quantum computers is randomized benchmarking. In this approach, one samples circuits to be run from some predefined set of gates (e.g. from the Clifford group) and tests how much the output distribution obtained from the device running these circuits differs from the ideal one. It is also common to concatenate randomly chosen circuits with their inverses (which should yield the identity circuit) and run those concatenated circuits on the device. Libraries implementing this approach include Qiskit [28] or PyQuil [29]. Another quantity used for benchmarking NISQ devices is quantum volume. The quantum volume characterizes capacity of a device for solving computational problems. It takes into account multiple factors like number of qubits, connectivity and measurement errors. The Qiskit library allows one to measure quantum volume of a device by using its qiskit.ignis.verifica tion.quantum_volume. Other implementations of Quantum Volume can be found as well, see e.g. [30]. ## 3 Preliminaries and discrimination scheme approach In this section we describe how the benchmarking process in PyQBench works. To do so, we first discuss necessary mathematical preliminaries. 
Then, we present the general form of the discrimination scheme used in PyQBench and practical considerations on how to implement it taking into account limitations of the current NISQ devices. ### Mathematical preliminaries Let us first recall the definition of a von Neumann measurement, which is the only type of measurement used in PyQBench. A von Neumann measurement \(\mathcal{P}\) is a collection of rank-one projectors \(\{|u_{0}\rangle\langle u_{0}|,\ldots,|u_{d-1}\rangle\langle u_{d-1}|\}\), called effects, that sum up to identity, i.e. \(\sum_{i=0}^{d-1}|u_{i}\rangle\langle u_{i}|=\mbox{1l}\). If \(U\) is a unitary matrix of size \(d\), one can construct a von Neumann measurement \(\mathcal{P}_{U}\) by taking projectors onto its columns. In this case we say that \(\mathcal{P}_{U}\) is described by the matrix \(U\). Typically, NISQ devices can only perform measurements in computational \(Z\)-basis, i.e. \(U=\mbox{1l}\). To implement an arbitrary von Neumann measurement \(\mathcal{P}_{U}\), one has to first apply \(U^{\dagger}\) to the measured system and then follow with \(Z\)-basis measurement. This process, depicted in Fig. 1, can be viewed as performing a change of basis in which measurement is performed prior to measurement in the computational basis. ### Discrimination scheme Benchmarks in PyQBench work by experimentally determining the probability of correct discrimination between two von Neumann measurements by the device under test and comparing the result with the ideal, theoretical predictions. Without loss of generality1, we consider discrimination task between single qubit measurements \(\mathcal{P}_{\mathbf{1}}\), performed in the computational Z-basis, and an alternative measurement \(\mathcal{P}_{U}\) performed in the basis \(U\). Note, however, that the discrimination scheme described below can work regardless of dimensionality of the system, see [31] for details. Footnote 1: Explaining why we can consider only discrimination scheme between \(\mathcal{P}_{\mathbf{1}}\) and \(\mathcal{P}_{U}\) is beyond the scope of this paper. See [31] for a in depth explanation. In general, the discrimination scheme presented in Fig. 2, requires an auxiliary qubit. First, the joint system is prepared in some state \(|\psi_{0}\rangle\). Then, one of the measurements, either \(\mathcal{P}_{U}\) or \(\mathcal{P}_{\mathbf{1}}\), is performed on the first part of the system. Based on its outcome \(i\), we choose another POVM \(\mathcal{P}_{V_{i}}\) and perform it on the second qubit, obtaining the output in \(j\). Finally, if \(j=0\), we say that the performed measurement is \(\mathcal{P}_{U}\), otherwise we say that it was \(\mathcal{P}_{\mathbf{1}}\). Naturally, we need to repeat the same procedure multiple times for both measurements to obtain a reliable estimate of the underlying probability distribution. In PyQBench, we assume that the experiment is repeated the same number of times for both \(\mathcal{P}_{U}\) and \(\mathcal{P}_{\mathbf{1}}\). Unsurprisingly, both the \(|\psi_{0}\rangle\) and the final measurements \(\mathcal{P}_{V_{i}}\) have to be chosen specifically for given \(U\) to maximize the probability of a correct guess. The detailed description how these choices are made in [32], and for now we will focus only how this scheme can be implemented on the actual devices, assuming that all the components are known. Figure 1: Implementation of a von Neumann measurement using measurement in computational basis. 
The upper circuit shows a symbolic representation of a von Neumann measurement \(\mathcal{P}_{U}\). The bottom, equivalent circuit depicts its decomposition into a change of basis followed by measurement in the \(Z\) basis. #### 3.2.1 Implementation of discrimination scheme on actual NISQ devices Current NISQ devices are unable to perform conditional measurements, which is the biggest obstacle to implementing our scheme on real hardware. However, we circumvent this problem by slightly adjusting our scheme so that it only uses components available on current devices. For this purpose, we use two possible options: using a postselection or a direct sum \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\). **Scheme 1**.: (Postselection) The first idea uses a postselection scheme. In the original scheme, we measure the first qubit and only then determine which measurement should be performed on the second one. Instead of doing this choice, we can run two circuits, one with \(\mathcal{P}_{V_{0}}\) and one with \(\mathcal{P}_{V_{1}}\) and measure both qubits. We then discard the results of the circuit for which label \(i\) does not match measurement label \(k\). Hence, the circuit for postselection looks as depicted in Fig. 3. To perform the benchmark, one needs to run multiple copies of the postselection circuit, with both \(\mathcal{P}_{U}\) and \(\mathcal{P_{1}}\). Each circuit has to be run in both variants, one with final measurement \(\mathcal{P}_{V_{0}}\) and the second with the final measurement \(\mathcal{P}_{V_{1}}\). The experiments can thus be grouped into classes identified by tuples of the form \((\mathcal{Q},k,i,j)\), where \(\mathcal{Q}\in\{\mathcal{P}_{U},\mathcal{P_{1}}\}\) denotes the chosen measurement, \(k\in\{0,1\}\) designates the final measurement used, and \(i\in\{0,1\}\) and \(j\in\{0,1\}\) being the labels of outcomes as presented in Fig. 3. We Figure 3: A schematic representation of the setup for distinguishing measurements \(\mathcal{P}_{U}\) and \(\mathcal{P_{1}}\) using postselection approach. In postselection scheme, one runs such circuits for both \(k=0,1\) and discards results for cases when there is a mismatch between \(k\) and \(i\). Figure 2: Theoretical scheme of discrimination between von Neumann measurements \(\mathcal{P}_{U}\) and \(\mathcal{P_{1}}\). then discard all the experiments for which \(i\neq k\). The total number of valid experiments is thus: \[N_{\text{total}}=\#\{(\mathcal{Q},k,i,j):k=i\}. \tag{1}\] Finally, we count the valid experiments resulting in successful discrimination. If we have chosen \(\mathcal{P}_{U}\), then we guess correctly iff \(j=0\). Similarly, for \(P_{\mathbf{1}}\), we guess correctly iff \(j=1\). If we define \[N_{\mathcal{P}_{U}} =\#\{(\mathcal{Q},k,i,j):\mathcal{Q}=\mathcal{P}_{U},k=i,j=0\}, \tag{2}\] \[N_{\mathcal{P}_{\mathbf{1}}} =\#\{(\mathcal{Q},k,i,j):\mathcal{Q}=\mathcal{P}_{\mathbf{1}},k= i,j=1\}, \tag{3}\] then the empirical success probability can be computed as \[p_{\text{succ}}(\mathcal{P}_{U},\mathcal{P}_{\mathbf{1}})=\frac{N_{\mathcal{ P}_{U}}+N_{\mathcal{P}_{\mathbf{1}}}}{N_{\text{total}}}. \tag{4}\] The \(p_{\text{succ}}\) is the quantity reported to the user as the result of the benchmark. **Scheme 2.** (Direct sum) The second idea uses the direct sum \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\) implementation. Here, instead of performing a conditional measurement \(\mathcal{P}_{V_{k}}\), where \(k\in\{0,1\}\), we run circuits presented in Fig. 4. 
One can see why such a circuit is equivalent to the original discrimination scheme. If we rewrite the block-diagonal matrix \(V_{0}^{\dagger}\oplus V_{1}^{\dagger}\) as follows: \[V_{0}^{\dagger}\oplus V_{1}^{\dagger}=|0\rangle\langle 0|\otimes V_{0}^{ \dagger}+|1\rangle\langle 1|\otimes V_{1}^{\dagger}, \tag{5}\] we can see that the direct sum in Eq. (5) commutes with the measurement on the first qubit. Thanks to this, we can switch the order of operations to obtain the circuit from Fig. 5. Now, depending on the outcome \(i\), one of the summands in Eq. (5) vanishes, and we end up performing exactly the same operations as in the original scheme. In this scheme, the experiment can be characterized by a pair \((\mathcal{Q},i,j)\), where \(\mathcal{Q}=\{\mathcal{P}_{U},\mathcal{P_{\mathbf{1}}}\}\) and \(i,j\in\{0,1\}\) are the output labels. The number of successful trials for \(U\) and \(1\!\!1\), respectively, can be written as \[N_{\mathcal{P}_{U}} =\#\{(\mathcal{Q},i,j):\mathcal{Q}=\mathcal{P}_{U},j=0\}, \tag{6}\] \[N_{\mathcal{P_{\mathbf{1}}}} =\#\{(\mathcal{Q},i,j):\mathcal{Q}=\mathcal{P_{\mathbf{1}}},j=1\}. \tag{7}\] Then, the probability of correct discrimination between \(\mathcal{P}_{U}\) and \(\mathcal{P_{\mathbf{1}}}\) is given by \[p_{\mathrm{succ}}=\frac{N_{\mathcal{P}_{U}}+N_{\mathcal{P_{\mathbf{1}}}}}{N_{ \mathrm{total}}}, \tag{8}\] where \(N_{\mathrm{total}}\) is the number of trials. #### 3.2.2 Importance of choosing the optimal discrimination scheme In principle, the schemes described in the previous section could be used with any choice of \(|\psi_{0}\rangle\) and final measurements \(\mathcal{P}_{V_{i}}\). However, we argue that it is best to choose those components in such a way that they maximize the probability of correct discrimination. To see that, suppose that some choice of \(|\psi_{0}\rangle,\mathcal{P}_{V_{0}},\mathcal{P}_{V_{1}}\) yields the theoretical upper bound of discriminating between two measurements of one, i.e. on a perfect quantum computer you will always make a correct guess. Then, on real hardware, we might obtain any empirical value in range \(\left[\frac{1}{2},1\right]\). On the other hand, if we choose the components of our scheme such that the successful discrimination probability is only \(\frac{3}{5}\), the possible range of empirically obtainable probabilities is only \(\left[\frac{1}{2},\frac{3}{5}\right]\). Hence, in the second case, the discrepancy between theoretical and empirical results will be less pronounced. #### 3.2.3 Constructing optimal discrimination scheme To construct the optimal discrimination scheme, one starts by calculating the probability of correct discrimination. Using the celebrated result by Helstrom [33], one finds that the optimal probability of correct discrimination between two quantum measurements, \(\mathcal{P}\) and \(\mathcal{Q}\), is \[p_{\mathrm{succ}}(\mathcal{P},\mathcal{Q})=\frac{1}{2}+\frac{1}{4}\|\mathcal{P} -\mathcal{Q}\|_{\diamond}, \tag{9}\] where \[\|\mathcal{P}-\mathcal{Q}\|_{\diamond}=\max_{\|\psi\rangle\|_{1}=1}\|\left(( \mathcal{P}-\mathcal{Q})\otimes\openone\right)(|\psi\rangle\langle\psi|)\|_{ 1}. \tag{10}\] The quantum state \(|\psi_{0}\rangle\) maximizing the diamond norm above is called the _discriminator_, and can be computed e.g. using semidefinite programming (SDP) [32; 34]. Furthermore, using the proof of the Holevo-Helstrom theorem, it is possible to construct corresponding unitaries \(V_{0}\), \(V_{1}\) to create the optimal discrimination strategy. 
For brevity, we do not describe this procedure here. Instead, we refer the interested reader to [32]. ## 4 Discrimination scheme for parameterized Fourier family and implementation So far, we only discussed how the discrimination is performed assuming that all needed components \(|\psi_{0}\rangle\), \(V_{0}\), and \(V_{1}\) are known. In this section, we provide a concrete example using parametrized Fourier family of measurements. The parametrized Fourier family of measurements is defined as a set of the measurements \(\{\mathcal{P}_{U_{\phi}}\colon\phi\in[0,2\pi]\}\), where \[U_{\phi}=H\left(\begin{array}{cc}1&0\\ 0&e^{i\phi}\end{array}\right)H^{\dagger}, \tag{11}\] and \(H\) is the Hadamard matrix of dimension two. For each element of this set, the discriminator is a Bell state: \[|\psi_{0}\rangle=\frac{1}{\sqrt{2}}\left(|00\rangle+|11\rangle\right). \tag{12}\] Observe that \(|\psi_{0}\rangle\) does not depend on the angle \(\phi\). However, the unitaries \(V_{0}\), \(V_{1}\) depend on \(\phi\) and take the following form: \[V_{0}=\left(\begin{array}{cc}i\sin\left(\frac{\pi-\phi}{4}\right)&-i\cos \left(\frac{\pi-\phi}{4}\right)\\ \cos\left(\frac{\pi-\phi}{4}\right)&\sin\left(\frac{\pi-\phi}{4}\right)\end{array} \right), \tag{13}\] \[V_{1}=\left(\begin{array}{cc}-i\cos\left(\frac{\pi-\phi}{4}\right)&i\sin \left(\frac{\pi-\phi}{4}\right)\\ \sin\left(\frac{\pi-\phi}{4}\right)&\cos\left(\frac{\pi-\phi}{4}\right)\end{array} \right). \tag{14}\] Finally, the theoretical probability of correct discrimination between von Neumann measurements \(\mathcal{P}_{U_{\phi}}\) and \(\mathcal{P_{\mathbf{1}}}\) is given by \[p_{\text{succ}}(\mathcal{P}_{U_{\phi}},\mathcal{P_{\mathbf{1}}})=\frac{1}{2}+ \frac{|1-e^{i\phi}|}{4}. \tag{15}\] We explore the construction of \(|\psi_{0}\rangle\), \(V_{0}\) and \(V_{1}\) for parametrized Fourier family of measurements in C. ## 5 Software description This section is divided into two parts. In Section 5.1 we describe functionalities of PyQBench package. Next, in Section 5.2, we give a general overview of the software architecture. ### Software Functionalities The PyQBench can be used in two modes: as a Python library and as a CLI script. When used as a library, PyQBench allows the customization of discrimination scheme. The user provides a unitary matrix \(U\) defining the measurement to be discriminated, the discriminator \(|\psi_{0}\rangle\), and unitaries \(V_{0}\) and \(V_{1}\) describing the final measurement. The PyQBench library provides then the following functionalities. 1. Assembling circuits for both postselection and direct sum-based discrimination schemes. 2. Executing the whole benchmarking scenario on specified backend (either real hardware or software simulator). 3. Interpreting the obtained outputs in terms of discrimination probabilities. Note that the execution of circuits by PyQBench is optional. Instead, the user might want to opt in for fine-grained control over the execution of the circuits. For instance, suppose the user wants to simulate the discrimination experiment on a noisy simulator. In such a case, they can define the necessary components and assemble the circuits using PyQBench. The circuits can then be altered, e.g. to add noise to particular gates, and then run using any Qiskit backend by the user. Finally, PyQBench can be used to interpret the measurements to obtain discrimination probability. 
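To make this interpretation step concrete, the following minimal, standalone sketch shows how bitstring histograms could be turned into the empirical success probability of Eq. (8) for the direct-sum scheme. It is only an illustration of the counting rule, not PyQBench's internal implementation; in particular, the function name and the position of the ancilla outcome in the bitstrings (the ancilla_bit argument) are assumptions of the sketch.

```
from collections import Counter

def empirical_success_probability(counts_u, counts_id, ancilla_bit=0):
    # counts_u / counts_id: bitstring histograms (e.g. {"01": 512, ...}) obtained
    # from the direct-sum circuit containing P_U and P_1, respectively.
    # ancilla_bit: position of the outcome j in the bitstring, counted from the
    # right (Qiskit's little-endian convention). This index is an assumption of
    # the sketch and depends on how the circuits were assembled.
    def j_histogram(counts):
        hist = Counter()
        for bits, n in counts.items():
            hist[bits[::-1][ancilla_bit]] += n
        return hist

    n_u = j_histogram(counts_u)    # Eq. (6): guess "P_U" when j == 0
    n_id = j_histogram(counts_id)  # Eq. (7): guess "P_1" when j == 1
    n_total = sum(counts_u.values()) + sum(counts_id.values())
    return (n_u["0"] + n_id["1"]) / n_total  # Eq. (8)

# Toy histograms: 85% of the shots point to the correct guess in both branches.
print(empirical_success_probability(
    {"00": 400, "10": 450, "01": 80, "11": 70},
    {"01": 420, "11": 430, "00": 75, "10": 75},
))
```

The postselection scheme of Eqs. (1)-(4) is handled analogously, except that shots whose first outcome i does not match the chosen final measurement k are discarded before counting.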
The PyQBench library also contains a readily available implementation of all necessary components needed to run discrimination experiments for parametrized Fourier family of measurements, defined previously in Section 4. However, if one only wishes to use this particular family of measurements in their benchmarks, then using PyQBench as a command line tool might be more straightforward. PyQBench's command line interface allows running the benchmarking process without writing Python code. The configuration of CLI is done by YAML [35] files describing the benchmark to be performed and the description of the backend on which the benchmark should be run. Notably, the YAML configuration files are reusable. The same benchmark can be used with different backends and vice versa. The following section describes important architectural decisions taken when creating PyQBench, and how they affect the end-user experience. ### Software Architecture #### 5.2.1 Overview of the software structure As already described, PyQBench can be used both as a library and a CLI. Both functionalities are implemented as a part of qbench Python package. The exposed CLI tool is also named qbench. For brevity, we do not discuss the exact structure of the package here, and instead refer an interested reader to the source code available at GitHub [36] or at the reference manual [37]. PyQBench can be installed from official Python Package Index (PyPI) by running pip install pyqbench. In a properly configured Python environment the installation process should also make the qbench command available to the user without a need for further configuration. #### 5.2.2 Integration with hardware providers and software simulators PyQBench is built around the Qiskit [11] ecosystem. Hence, both the CLI tool and the qbench library can use any Qiskit-compatible backend. This includes, IBM Q backends (available by default in Qiskit) and Amazon Braket devices and simulators (available through qiskit-braket-provider package [38; 39]). When using PyQBench as library, instances of Qiskit backends can be passed to functions that expect them as parameters. However, in CLI mode, the user has to provide a YAML file describing the backend. An example of such file can be found in Section 6, and the detailed description of the expected format can be found at PyQBench's documentation. #### 5.2.3 Command Line Interface The Command Line Interface (CLI) of PyQBench has nested structure. The general form of the CLI invocation is shown in listing 1. ``` qbench<benchmark-type><command><parameters> ``` Currently, PyQBench's CLI supports only one type of benchmark (discrimination of parametrized Fourier family of measurements), but we decided on structuring the CLI in a hierarchical fashion to allow for future extensions. Thus, the only accepted value of <benchmark-type> is disc-fourier. The qbench disc-fourier command has four subcommands: * benchmark: run benchmarks. This creates either a result YAML file containing the measurements or an intermediate YAML file for asynchronous experiments. * status: query status of experiments submitted for given benchmark. This command is only valid for asynchronous experiments. * resolve: query the results of asynchronously submitted experiments and write the result YAML file. The output of this command is almost identical to the result obtained from synchronous experiments. * tabulate: interpret the results of a benchmark and summarize them in the CSV file. 
We present usage of each of the above commands later in section 6. #### 5.2.4 Asynchronous vs. synchronous execution PyQBench's CLI can be used in synchronous and asynchronous modes. The mode of execution is defined in the YAML file describing the backend (see Section 6 for an example of this configuration). We decided to couple the mode of execution to the backend description because some backends cannot work in asynchronous mode. When running qbench disc-fourier benchmark in asynchronous mode, the PyQBench submits all the circuits needed to perform a benchmark and then writes an intermediate YAML file containing metadata of submitted experiments. In particular, this metadata contains information on correlating submitted job identifiers with particular circuits. The intermediate file can be used to query the status of the submitted jobs or to resolve them, i.e. to wait for their completion and get the measurement outcomes. In synchronous mode, PyQBench first submits all jobs required to run the benchmark and then immediately waits for their completion. The advantage of this approach is that no separate invocation of qbench command is needed to actually download the measurement outcomes. The downside, however, is that if the script is interrupted while the command is running, the intermediate results will be lost. Therefore, we recommend using asynchronous mode whenever possible. ## 6 Illustrative examples In this section, we present two examples demonstrating the usage of PyQBench. In the first example, we show how to implement a discrimination scheme for a user-defined measurement and possible ways of using this scheme with qbench library. The second example demonstrates the usage of the CLI. We show how to prepare the input files for the benchmark and how to run it using the qbench tool. ### Using user-defined measurement with qbench package In this example, we will demonstrate how qbench package can be used with user-defined measurement. For this purpose, we will use \(U=H\) (the Hadamard gate). The detailed calculations that lead to the particular form of the discriminator and final measurements can be found in B. The explicit formula for discriminator in this example reads: \[|\psi_{0}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle), \tag{16}\] with final measurements being equal to \[V_{0}=\left(\begin{array}{cc}\alpha&-\beta\\ \beta&\alpha\end{array}\right), \tag{17}\] and \[V_{1}=\left(\begin{array}{cc}-\beta&\alpha\\ \alpha&\beta\end{array}\right), \tag{18}\] where \[\alpha=\frac{\sqrt{2-\sqrt{2}}}{2}=\cos\left(\frac{3}{8}\pi\right), \tag{19}\] \[\beta=\frac{\sqrt{2+\sqrt{2}}}{2}=\sin\left(\frac{3}{8}\pi\right). \tag{20}\] To use the above benchmarking scheme in PyQBench, we first need to construct circuits that can be executed by actual hardware. To this end, we need to represent each of the unitaries as a sequence of standard gates, keeping in mind that quantum circuits start execution from the \(|00\rangle\) state. The circuit taking \(|00\rangle\) to the Bell state \(|\psi_{0}\rangle\) comprises the Hadamard gate followed by CNOT gate on both qubits (see Fig. 6). For \(V_{0}\) and \(V_{1}\) observe that \(V_{0}=\mathrm{RY}\left(\frac{3}{4}\pi\right)\), where \(\mathrm{RY}\) is rotation gate around the \(Y\) axis defined by \[\mathrm{RY}(\theta)=\left(\begin{array}{cc}\cos\frac{\theta}{2}&-\sin\frac{ \theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{array}\right). \tag{21}\] To obtain \(V_{1}\) we need only to swap the columns, i.e. 
\[V_{1}=\mathrm{RY}\left(\frac{3}{4}\pi\right)\mathrm{X}\,. \tag{22}\]

Finally, the optimal probability of correct discrimination is equal to

\[p_{\mathrm{succ}}(\mathcal{P}_{U},\mathcal{P}_{\mathbf{1}})=\frac{1}{2}+\frac{\sqrt{2}}{4}. \tag{23}\]

Figure 6: Decomposition of the Bell state \(|\psi_{0}\rangle\).

We will now demonstrate how to implement this theoretical scheme in PyQBench. For this example we will use the Qiskit Aer simulator [40]. First, we import the necessary functions and classes from PyQBench and Qiskit. We also import numpy for the definition of the np.pi constant and the exponential function. The exact purpose of the imported functions will be described at the point of their usage.

```
import numpy as np
from qiskit import QuantumCircuit, Aer

from qbench.schemes.postselection import benchmark_using_postselection
from qbench.schemes.direct_sum import benchmark_using_direct_sum
```
Listing 2: Imports needed for running the benchmarking example

To implement the discrimination scheme in PyQBench, we need to define all the necessary components as Qiskit instructions. We can do so by constructing a circuit object acting on qubits 0 and 1 and then converting it using the to_instruction() method.

```
def state_prep():
    circuit = QuantumCircuit(2)
    circuit.h(0)
    circuit.cnot(0, 1)
    return circuit.to_instruction()

def u_dag():
    circuit = QuantumCircuit(1)
    circuit.h(0)
    return circuit.to_instruction()

def v0_dag():
    circuit = QuantumCircuit(1)
    circuit.ry(-np.pi * 3 / 4, 0)
    return circuit.to_instruction()

def v1_dag():
    circuit = QuantumCircuit(1)
    circuit.ry(-np.pi * 3 / 4, 0)
    circuit.x(0)
    return circuit.to_instruction()

def v0_v1_direct_sum_dag():
    circuit = QuantumCircuit(2)
    circuit.ry(-np.pi * 3 / 4, 0)
    circuit.cnot(0, 1)
    return circuit.to_instruction()
```
Listing 3: Defining components of the discrimination scheme

We now construct a backend object, which in this case is an instance of the Aer simulator.

```
simulator = Aer.get_backend("aer_simulator")
```
Listing 4: Defining a backend

In the simplest scenario, when one does not want to tweak execution details and simply wishes to run the experiment on a given backend, all that is required is to run the benchmark_using_postselection or benchmark_using_direct_sum function, depending on the user's preference.

```
postselection_result = benchmark_using_postselection(
    backend=simulator,
    target=0,
    ancilla=1,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_dag=v0_dag(),
    v1_dag=v1_dag(),
    num_shots_per_measurement=10000,
)
```
Listing 5: Simulation benchmark by using postselection

```
direct_sum_result = benchmark_using_direct_sum(
    backend=simulator,
    target=1,
    ancilla=2,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_v1_direct_sum_dag=v0_v1_direct_sum_dag(),
    num_shots_per_measurement=10000,
)
```
Listing 6: Simulation benchmark by using direct sum

The postselection_result and direct_sum_result variables now contain the empirical probabilities of correct discrimination. We can compare them to the theoretical value and compute the absolute error.

```
p_succ = (2 + np.sqrt(2)) / 4
print(f"Analytical p_succ = {p_succ}")
print(
    f"Postselection: p_succ = {postselection_result}, abs. error = {p_succ - postselection_result}"
)
print(
    f"Direct sum: p_succ = {direct_sum_result}, abs. error = {p_succ - direct_sum_result}"
)
```
Listing 7: Examining the benchmark results

In the example presented above we used functions that automate the whole process - from circuit assembly, through running the simulations, to interpreting the results. But what if we want more control over some parts of this process?
One possibility would be to add some additional parameters to the benchmark_using_xyz functions, but this approach is not scalable. Moreover, anticipating all possible use cases is impossible. Therefore, we decided on another approach. PyQBench provides functions performing:

1. Assembly of circuits needed for the experiment, provided the components discussed above.
2. Interpretation of the obtained measurements.

The difference between the two approaches is illustrated on the diagrams in Fig. 7.

Figure 7: Differences between simplified (top) and user-controlled (bottom) execution of benchmarks in PyQBench. Compared to simplified benchmarking, in user-controlled benchmarks the user has direct access to the circuits being run, and hence can alter them (e.g. by adding noise) and/or choose the parameters used for executing them on the backend.

For the rest of this example we focus only on the postselection case, as the direct sum case is analogous. We continue by importing two more functions from PyQBench.

```
from qbench.schemes.postselection import (
    assemble_postselection_circuits,
    compute_probabilities_from_postselection_measurements,
)

circuits = assemble_postselection_circuits(
    target=0,
    ancilla=1,
    state_preparation=state_prep(),
    u_dag=u_dag(),
    v0_dag=v0_dag(),
    v1_dag=v1_dag(),
)
```
Listing 8: Assembling circuits

Recall that for a postselection scheme we have two possible choices of the "unknown" measurement and two possible choices of a final measurement, which gives a total of four circuits needed to run the benchmark. The function assemble_postselection_circuits creates all four circuits and places them in a dictionary with keys "id_v0", "id_v1", "u_v0", "u_v1". We will now run our circuits using noisy and noiseless simulation. We start by creating a noise model using Qiskit.

```
from qiskit.providers.aer import noise

error = noise.ReadoutError([[0.75, 0.25], [0.8, 0.2]])
noise_model = noise.NoiseModel()
noise_model.add_readout_error(error, [0])
noise_model.add_readout_error(error, [1])
```
Listing 9: Adding noise model

Once we have our noise model ready, we can execute the circuits with and without noise. To this end, we will use Qiskit's execute function. One caveat is that we have to keep track of which measurements correspond to which circuit. We do so by fixing an ordering on the keys in the circuits dictionary.

```
from qiskit import execute

keys_ordering = ["id_v0", "id_v1", "u_v0", "u_v1"]
all_circuits = [circuits[key] for key in keys_ordering]

counts_noisy = execute(
    all_circuits,
    backend=simulator,
    noise_model=noise_model,
    shots=10000,
).result().get_counts()

counts_noiseless = execute(
    all_circuits,
    backend=simulator,
    shots=10000,
).result().get_counts()
```
Listing 10: Running circuits

Finally, we use the measurement counts to compute discrimination probabilities using the compute_probabilities_from_postselection_measurements function.

```
prob_succ_noiseless = compute_probabilities_from_postselection_measurements(
    id_v0_counts=counts_noiseless[0],
    id_v1_counts=counts_noiseless[1],
    u_v0_counts=counts_noiseless[2],
    u_v1_counts=counts_noiseless[3],
)

prob_succ_noisy = compute_probabilities_from_postselection_measurements(
    id_v0_counts=counts_noisy[0],
    id_v1_counts=counts_noisy[1],
    u_v0_counts=counts_noisy[2],
    u_v1_counts=counts_noisy[3],
)
```
Listing 11: Computing probabilities

We can now examine the results. As an example, in one of our runs, we obtained prob_succ_noiseless = 0.8524401115559386 and prob_succ_noisy = 0.5017958400693446.
As expected, for noisy simulations, the result lies further away from the target value of 0.8535533905932737. This concludes our example. In the next section, we will show how to use PyQBench's CLI.

### Using qbench CLI

Using PyQBench as a library allows one to conduct a two-qubit benchmark with an arbitrary von Neumann measurement. However, as discussed in the previous guide, it requires writing some amount of code. For a Fourier parametrized family of measurements, PyQBench offers a simplified way of conducting benchmarks using a Command Line Interface (CLI). The workflow with PyQBench's CLI can be summarized as the following list of steps:

1. Preparing configuration files describing the backend and the experiment scenario.
2. Submitting/running experiments. Depending on the experiment scenario, execution can be synchronous or asynchronous.
3. (optional) Checking the status of the submitted jobs if the execution is asynchronous.
4. Resolving asynchronous jobs into the actual measurement outcomes.
5. Converting obtained measurement outcomes into tabulated form.

#### 6.2.1 Preparing configuration files

The configuration of PyQBench CLI is driven by YAML files. The first configuration file describes the experiment scenario to be executed. The second file describes the backend. Typically, this backend will correspond to the physical device to be benchmarked, but for testing purposes one might as well use any other Qiskit-compatible backend, including simulators. Let us first describe the experiment configuration file, which might look as follows.

```
type: discrimination-fourier
qubits:
  - target: 0
    ancilla: 1
  - target: 1
    ancilla: 2
angles:
  start: 0
  stop: 2 * pi
  num_steps: 3
gateset: ibmq
method: direct_sum
num_shots: 100
```
Listing 12: An example experiment file

The experiment file contains the following fields:

* type: a string describing the type of the experiment. Currently, the only option of type is discrimination-fourier.
* qubits: a list enumerating pairs of qubits on which the experiment should be run. For the configuration in listing 12, the benchmark will run on two pairs of qubits. The first pair is 0 and 1, and the second one is 1 and 2. We decided to describe a pair by using target and ancilla keys rather than using a plain list to emphasize that the roles of the qubits in the experiment are not symmetric.
* angles: an object describing the range of angles for the Fourier parametrized family. The described range is always uniform; it starts at start, ends at stop and contains num_steps points, including both start and stop. The start and stop values can be arithmetic expressions using the pi literal. For instance, the range defined in listing 12 contains three points: 0, \(\pi\) and \(2\pi\).
* gateset: a string describing the set of gates used in the decomposition of circuits in the experiment. PyQBench contains explicit implementations of the benchmark circuits for several native gate sets. The possible options are [ibmq, lucy, rigetti], corresponding to decompositions compatible with IBM Q devices, the OQC Lucy device, and Rigetti devices. Alternatively, one might wish to turn off the decomposition by using the special value generic. However, for this to work, the backend used for the experiment must natively implement all the gates needed for the experiment, as described in Section 4.
* method: a string, either postselection or direct_sum, determining which implementation of the conditional measurement is used.
* num_shots: an integer defining how many shots are performed in the experiment for a particular angle, qubit pair and circuit.
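As a quick sanity check of such a configuration, the ideal probabilities predicted by Eq. (15) for the angle grid above can be tabulated with a few lines of NumPy. This is a standalone illustration, independent of the qbench CLI:

```
import numpy as np

# Angle grid from the experiment file above: start=0, stop=2*pi, num_steps=3.
angles = np.linspace(0, 2 * np.pi, 3)

# Ideal discrimination probability from Eq. (15).
p_ideal = 0.5 + np.abs(1 - np.exp(1j * angles)) / 4

for phi, p in zip(angles, p_ideal):
    print(f"phi = {phi:.4f}  ->  ideal p_succ = {p:.4f}")
```

In the absence of noise, the probabilities reported by the benchmark should therefore approach 0.5, 1.0 and 0.5 for these three angles; deviations from these values quantify the imperfections of the benchmarked device.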
Note that if one wishes to compute the total number of shots in the experiment, it is necessary to take into account that the postselection method uses twice as many circuits as the direct_sum method. The second configuration file describes the backend. We decided to decouple the experiment and the backend files because it facilitates their reuse. For instance, the same experiment file can be used to run benchmarks on multiple backends, and the same backend description file can be used with multiple experiments. Different Qiskit backends typically require different data for their initialization. Hence, there are multiple possible formats of the backend configuration files understood by PyQBench. We refer the interested reader to PyQBench's documentation. Below we describe an example YAML file describing the IBM Q backend named ibmq_quito.

```
name: ibmq_quito
asynchronous: false
provider:
  hub: ibm-q
  group: open
  project: main
```
Listing 13: IBMQ backend

IBMQ backends typically require an access token to IBM Quantum Experience. Since it would be unsafe to store it in plain text, the token has to be configured separately in the IBMQ_TOKEN environment variable.

#### 6.2.2 Remarks on using the asynchronous flag

For backends supporting asynchronous execution, the asynchronous setting can be toggled. For asynchronous execution to work, the following conditions have to be met:

* Jobs returned by the backend have a unique job_id.
* Jobs are retrievable from the backend using the backend.retrieve_job method, even from another process (e.g. if the original process running the experiment has finished).

Since PyQBench cannot determine if job retrieval works for a given backend, it is the user's responsibility to ensure that this is the case before setting asynchronous to true.

#### 6.2.3 Running the experiment and collecting measurements data

After preparing the YAML files defining the experiment and the backend, the benchmark can be launched using the following command line invocation:

```
qbench disc-fourier benchmark experiment_file.yml backend_file.yml
```

The output will be printed to stdout. Optionally, the --output OUTPUT parameter might be provided to write the output to the OUTPUT file instead.

```
qbench disc-fourier benchmark experiment_file.yml backend_file.yml --output async_results.yml
```

The result of running the above command can be twofold:

* If the backend is asynchronous, the output will contain intermediate data containing, amongst others, job_ids correlated with the circuits they correspond to.
* If the backend is synchronous, the output will contain measurement outcomes (bitstrings) for each of the circuits run.

For a synchronous experiment, part of the output looks similar to the one below. The whole YAML file can be seen in Appendix E.

```
data:
  - target: 0
    ancilla: 1
    phi: 0.0
    results_per_circuit:
      - name: id
        histogram: {'00': 28, '01': 26, '10': 21, '11': 25}
        mitigation_info:
          target: {prob_meas0_prep1: 0.052200000000000024, prob_meas1_prep0: 0.0172}
          ancilla: {prob_meas0_prep1: 0.05900000000000005, prob_meas1_prep0: 0.0202}
        mitigated_histogram: {'00': 0.2637212373658018, '01': 0.25865061319892463, '10': 0.2067279352110304, '11': 0.2709002142242433}
```

The data includes target, ancilla, phi, and results_per_circuit. The first three pieces of information have already been described.
The last entry, results_per_circuit, gives us the following additional information:

* name: the information about which measurement was used during the experiment, either the string "u" for \(\mathcal{P}_{U}\) or the string "id" for \(\mathcal{P}_{\mathbf{1}}\). In this example we consider \(\mathcal{P}_{\mathbf{1}}\).
* histogram: the dictionary with measurement outcomes. The keys represent the possible bitstrings, whereas the values are the numbers of occurrences.
* mitigation_info: for some backends (notably for backends corresponding to IBM Q devices), backend.properties().qubits contains information that might be used for error mitigation using the MThree method [41; 42]. If this info is available, it will be stored in the mitigation_info field, otherwise this field will be absent.
* mitigated_histogram: the histogram of measurement outcomes after error mitigation.

#### 6.2.4 (Optional) Getting status of asynchronous jobs

PyQBench also provides a helper command that will fetch the statuses of asynchronous jobs. The command is:

```
qbench disc-fourier status async_results.yml
```

and it will display a dictionary with a histogram of the job statuses.

#### 6.2.5 Resolving asynchronous jobs

For asynchronous experiments, the stored intermediate data has to be resolved into the actual measurement outcomes. The following command will wait until all jobs are completed and then write a result file.

```
qbench disc-fourier resolve async_results.yml resolved.yml
```

The resolved results, stored in resolved.yml, look just as if the experiment had been run synchronously. Therefore, the final results will look the same no matter in which mode the benchmark was run, and hence in both cases the final output file is suitable as an input for the command computing the discrimination probabilities.

#### 6.2.6 Computing probabilities

As the last step in the processing workflow, the results file has to be passed to the tabulate command:

```
qbench disc-fourier tabulate results.yml results.csv
```

A sample of the resulting CSV file is provided below.

\begin{table}
\begin{tabular}{|c c c c c c|}
\hline
target & ancilla & phi & ideal\_prob & disc\_prob & mit\_disc\_prob \\
\hline \hline
0 & 1 & 0 & 0.5 & 0.46 & 0.45 \\
\hline
0 & 1 & 3.14 & 1 & 0.95 & 0.98 \\
\hline
0 & 1 & 6.28 & 0.5 & 0.57 & 0.58 \\
\hline
1 & 2 & 0 & 0.5 & 0.57 & 0.57 \\
\hline
1 & 2 & 3.14 & 1 & 0.88 & 0.94 \\
\hline
1 & 2 & 6.28 & 0.5 & 0.55 & 0.56 \\
\hline
\end{tabular}
\end{table}
Table 2: The resulting CSV file contains a table with columns target, ancilla, phi, ideal_prob, disc_prob and, optionally, mit_disc_prob. Each row in the table describes results for a tuple (target, ancilla, phi). The reference optimal value of the discrimination probability is present in the ideal_prob column, whereas the obtained, empirical discrimination probability can be found in the disc_prob column. The mit_disc_prob column contains the empirical discrimination probability after applying the MThree error mitigation [41; 42], if it was applied.

## 7 Impact

With the surge in availability of quantum computing architectures in recent years, it has become increasingly difficult to keep track of their relative performance. To make matters even more difficult, various providers give access to different figures of merit for their architectures. Our package allows the user to test various architectures, available through Qiskit and Amazon Braket, using problems with a simple operational interpretation. We provide one example built into the package. Furthermore, we provide a powerful tool for users to extend the range of available problems in a way that suits their needs. Thanks to this possibility of extension, users are able to test specific aspects of the architecture of interest. For example, if their problem is related to the amount of coherence (the sum of the absolute values of the off-diagonal elements) of the states present during computation, they are able to quickly prepare a custom experiment, launch it on the desired architectures, and gather the results, based on which they can decide which specific architecture they should use. Finally, we provide the source code of PyQBench on GitHub [36] under an open-source license, which will allow users to utilize and extend our package in their specific applications.

## 8 Conclusions

In this study, we develop a Python library, PyQBench, an innovative open-source framework for benchmarking gate-based quantum computers.
PyQBench can benchmark NISQ devices by verifying their capability of discriminating between two von Neumann measurements. PyQBench offers a simplified, ready-to-use command line interface (CLI) for running benchmarks using a predefined parameterized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones.

## 9 Conflict of Interest

We wish to confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.

## Acknowledgements

This work is supported by the project "Near-term quantum computers Challenges, optimal implementations and applications" under Grant Number POIR.04.04.00-00-17C1/18-00, which is carried out within the Team-Net programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. PL is also a holder of a European Union scholarship through the European Social Fund, grant InterPOWER (POWR.03.05.00-00-Z305).
PyQBench is an innovative open-source framework for benchmarking gate-based quantum computers. PyQBench can be used to benchmark NISQ devices by verifying their ability to discriminate between two von Neumann measurements. PyQBench offers a simplified, ready-to-use command line interface for running benchmarks using a predefined parameterized Fourier family of measurements. For more advanced scenarios, PyQBench offers a way of employing user-defined measurements instead of predefined ones.
2307.16377
JOTR: 3D Joint Contrastive Learning with Transformers for Occluded Human Mesh Recovery
In this study, we focus on the problem of 3D human mesh recovery from a single image under obscured conditions. Most state-of-the-art methods aim to improve 2D alignment technologies, such as spatial averaging and 2D joint sampling. However, they tend to neglect the crucial aspect of 3D alignment by improving 3D representations. Furthermore, recent methods struggle to separate the target human from occlusion or background in crowded scenes as they optimize the 3D space of target human with 3D joint coordinates as local supervision. To address these issues, a desirable method would involve a framework for fusing 2D and 3D features and a strategy for optimizing the 3D space globally. Therefore, this paper presents 3D JOint contrastive learning with TRansformers (JOTR) framework for handling occluded 3D human mesh recovery. Our method includes an encoder-decoder transformer architecture to fuse 2D and 3D representations for achieving 2D$\&$3D aligned results in a coarse-to-fine manner and a novel 3D joint contrastive learning approach for adding explicitly global supervision for the 3D feature space. The contrastive learning approach includes two contrastive losses: joint-to-joint contrast for enhancing the similarity of semantically similar voxels (i.e., human joints), and joint-to-non-joint contrast for ensuring discrimination from others (e.g., occlusions and background). Qualitative and quantitative analyses demonstrate that our method outperforms state-of-the-art competitors on both occlusion-specific and standard benchmarks, significantly improving the reconstruction of occluded humans.
Jiahao Li, Zongxin Yang, Xiaohan Wang, Jianxin Ma, Chang Zhou, Yi Yang
2023-07-31T02:58:58
http://arxiv.org/abs/2307.16377v2
# JOTR: 3D Joint Contrastive Learning with Transformers for ###### Abstract In this study, we focus on the problem of 3D human mesh recovery from a single image under obscured conditions. Most state-of-the-art methods aim to improve 2D alignment technologies, such as spatial averaging and 2D joint sampling. However, they tend to neglect the crucial aspect of 3D alignment by improving 3D representations. Furthermore, recent methods struggle to separate the target human from occlusion or background in crowded scenes as they optimize the 3D space of target human with 3D joint coordinates as local supervision. To address these issues, a desirable method would involve a framework for fusing 2D and 3D features and a strategy for optimizing the 3D space globally. Therefore, this paper presents 3D JOint contrastive learning with TRansformers (**JOTR**) framework for handling occluded 3D human mesh recovery. Our method includes an encoder-decoder transformer architecture to fuse 2D and 3D representations for achieving 2D\(\&\)3D aligned results in a coarse-to-fine manner and a novel 3D joint contrastive learning approach for adding explicitly global supervision for the 3D feature space. The contrastive learning approach includes two contrastive losses: joint-to-joint contrast for enhancing the similarity of semantically similar voxels (i.e., human joints), and joint-to-non-joint contrast for ensuring discrimination from others (e.g., occlusions and background). Qualitative and quantitative analyses demonstrate that our method outperforms state-of-the-art competitors on both occlusion-specific and standard benchmarks, significantly improving the reconstruction of occluded humans. Code is available at [https://github.com/x1jh0520/JOTR](https://github.com/x1jh0520/JOTR). + Footnote †: ddagger}\) Yi Yang is the corresponding author. + Footnote †: ddagger}\) Yi Yang is the corresponding author. ## 1 Introduction The estimation of 3D human meshes from single RGB images is an active area of research in computer vision with a broad range of applications in robotics, AR/VR, and human behavior analysis. In contrast to estimating the pose of general objects [69], human mesh recovery is more challenging due to the complex and deformable structure of the human body. Nevertheless, enhancing human-centric tasks can be achieved by combining visual features and prior knowledge about human anatomy through constructing multi-knowledge representations [66]. Generally, the human mesh recovery task takes a single image as input and regresses human model parameters such as SMPL [46] as output. Driven by deep neural networks, this task has achieved rapid progress [10, 21, 25, 28, 31, 32, 33, 34, 41, 42, 56, 57, 72, 76]. Recent studies have focused on regressing accurate human meshes despite occlusions. To achieve this, most of them employ 2D prior knowledge (_e.g._, UV maps [76], part segmentation masks [31] and 2D human key points [28]) to focus the model on visible human body parts for enhancing the 2D alignment of the predicted mesh. Additionally, some methods [10, 57] introduce 3D representations to locate 3D joints and extract 2D features from the corresponding regions of the 2D image. Even though the above methods have achieved significant progress in occluded human mesh recovery, they still remain constrained to these two aspects: the pursuit of **2D alignment** and **local supervision** for 3D joints. (i) As shown in Fig. 
0(a), the above methods employing 2D prior knowledge mainly focus on **2D alignment** technologies, including spatial averaging and 2D joint sampling. However, in crowded or occluded scenarios, solely focusing on 2D alignment may acquire ambiguous features for the entire mesh due to the lack of estimation of hidden parts. Accordingly, the invisible human body parts would be aligned based on prior knowledge of the standard SMPL template, resulting in misalignment with visible parts and leading to inaccurate 3D reconstructions. (ii) Furthermore, creating a comprehensive and precise 3D representation from a single RGB image is an ill-posed problem as the inherently limited information. As illustrated in Fig. 0(b), some methods that use 3D representations rely on localized 3D joints as **local supervision**, ignoring the rich semantic relations between voxels across different scenes. These "local" contents (_i.e._, human joints) occupy only a small portion of the 3D space, while most voxels are often occupied by occlusions and background. Consequently, the lack of explicit supervision for the entire 3D space makes it difficult to differentiate target humans from other semantically similar voxels, resulting in ambiguous 3D representations. Therefore, to improve occluded human mesh recovery, we consider investigating a fusion framework that integrates 2D and 3D features for **2D\(\&\)3D alignment**, along with a **global supervision** strategy to obtain a semantically clear 3D feature space. By leveraging the complementary information from both 2D and 3D representations, the network could overcome the limitations of using only a single 2D representation, enabling obscured human parts to be detected in 3D representations and achieving 2D\(\&\)3D alignment. Given a global supervision strategy, we could explicitly supervise the entire 3D space to highlight the representation of target humans and distinguish them from other semantically similar voxels, resulting in a semantically clear 3D feature space. Based on the above motivation, this paper proposes a novel framework, 3D JOint contrastive learning with TRansformers (JOTR), for recovering occluded human mesh using a fusion of multiple representations as shown in Fig. 0(a). Unlike existing methods such as 3DcrowdNet [10] and BEV [57] that employ 3D-aware 2D sampling techniques, JOTR integrates 2D and 3D features through transformers [60] with attention mechanisms. Specifically, JOTR utilizes an encoder-decoder transformer architecture to combine 3D local features (_i.e._, sampled 3D joint features) and 2D global features (_i.e._, flatten 2D features), enhancing both 2D and 3D alignment. Besides, to obtain semantically clear 3D representations, the main objective is to strengthen and highlight the human representation while minimizing the impact of irrelevant features (_e.g._, occlusions and background). Accordingly, we propose a new approach, 3D joint contrastive learning (in Fig. 0(b)), that provides global and explicit supervision for 3D space to improve the similarity of semantically similar voxels (_i.e._, human joints), while maintaining discrimination from other voxels (_e.g._, occlusions). By carefully designing 3D joint contrast for 3D representations, JOTR can mitigate the effects of occlusion and acquire semantically meaningful 3D representations, resulting in accurate localization of 3D human joints and acquisition of meaningful 3D joint features. 
We conduct extensive experiments on both standard 3DPW benchmark [61] and occlusion benchmarks such as 3DPW-PC [56, 61], 3DPW-OC [61, 76], 3DPW-Crowd [10, 61], 3DMH [76] and CMU Panoptic [22], and JOTR achieves state-of-the-art performance on these datasets. Especially, JOTR outperforms the prior state-of-the-art method 3DCrowdNet [10] by **6.1** (PA-MPJPE), **4.9** (PA-MPJPE), and **5.3** (MPJPE) on 3DPW-PC, 3DPW-OC, and 3DPW respectively. Moreover, we carry out comprehensive ablation experiments to demonstrate the effectiveness of our framework and 3D joint contrastive learning strategy. Our contributions are summarized as follows: * We propose JOTR, a novel method for recovering occluded human mesh using a fusion of 2D global and 3D local features, which overcomes limitations caused by person-person and person-object occlusions and achieves 2D\(\&\)3D aligned results. JOTR achieves state-of-the-art results on both standard and occluded datasets, including 3DPW, 3DPW-PC, 3DPW-OC, 3DMH, CMU Panoptic, and 3DPW-Crowd. * We develop a 3D joint contrastive learning strategy that supervises the 3D space explicitly and globally to obtain semantically clear 3D representations, minimizing the impact of occlusions and adapting to more challenging scenarios with the help of cross-image contrast. ## 2 Related Work Based on the incorporation of a human body model [46, 53], Deep Neural Network-based 3D Human Mesh Recovery methods [9, 10, 12, 15, 21, 25, 28, 29, 30, 31, 32, 33, 36, 41, 42, 50, 55, 56, 57, 59, 72, 76] can be divided into two categories. The first, SMPL-based approaches [21, 25, 55, 56, 57, 30, 31, 57, 72], maps input pixels to SMPL parameters [46] such as pose and shape and reconstructs meshes by SMPL models, while the second, SMPL-free methods [33, 41, 42], directly maps raw pixels to 3D mesh vertices without the assistance of SMPL models. In this paper, we mainly consider the first method as the implementation approach. **Human Mesh Recovery.** Usually, human mesh recovery methods estimate 3D human mesh of a single person within a person bounding box, which is scaled to the same size. This allows us to assume that the distance between each individual and the camera is roughly equivalent in the cropped image patch. Early works [25, 32] employ spatial averaging on CNN features for obtaining global features and utilize Multi-Layer Perceptrons (MLPs) to regress SMPL parameters. However, global pooling is not suitable for achieving pixel-aligned results, leading to subpar performance in real-world scenarios. PARE [31] proposes using part segmentation masks to enhance pixel alignments. Zhang [76] make use of occlusion segmentation masks to allow the model to attend to the visible human body parts, which also helps to reconstruct complete human mesh. OCHMR [28] employs load-global center maps to make the model regress the mesh of the referred person. While these methods make progress in occluded human mesh recovery by enhancing the ability to represent 2D information, they overlook the 3D structural information. 3DcrowdNet [10] and BEV [57] introduce 3D representations to locate human joints in 3D space. However, these approaches also have limitations since they extract 2D CNN features in the corresponding region of located 3D joints, thereby overlooking the full potential of 3D representations. Therefore, we design a fusion framework to integrate 2D and 3D features for mutual complementation. 
**Multi-Modality Transformers.** Following the success of vision transformers in processing image [3, 6, 11, 45] or video [67, 68, 2], multi-modality transformers [7, 37, 38, 39, 74, 34, 37, 39, 75] are capable of processing input data from multiple modalities, such as text, image, audio, or video, in a single model. The attention mechanism is a key component of transformers, which enables them to selectively focus on relevant parts of the input sequence when generating the output. Returning to the present task, it is worth noting that although there is only one visual modality, two distinct representations (, 2D and 3D representations) are available. Thus, we propose using transformers with attention mechanisms to integrate multi-representation features, rather than relying on global average pooling or joint feature sampling in CNN features. **Contrastive Learning.** Contrastive learning [4, 14, 17, 5] is a type of unsupervised learning that aims to learn a similarity metric between data samples. The goal of contrastive learning is to bring similar examples closer together in feature space while pushing dissimilar examples farther apart. With regards to the current objective, in 3D space, human body joints occupy a relatively small proportion, with the majority of voxels in the space being occupied by other objects (, another person's body, background elements, and empty elements). This poses a significant challenge in learning a similarity metric between data samples in 3D space. To address this issue, we propose a novel joint-based contrastive learning strategy inspired by the recent success of pixel contrastive learning in semantic segmentation [1, 19, 62, 77], which enables the network to learn a clear similarity metric in 3D space. ## 3 Method **Human Body Model.** SMPL [46] represents a 3D human mesh by 3 low-dimensional vectors (, pose, shape, and camera parameters ). Following previous methods [10, 25, 31, 32, 56], we use the gender-neutral shape model. The SMPL model generates a 3D mesh through a differentiable function. By applying a pretrained linear regressor, we obtain the 3D joint coordinates, where, conveniently. Additionally, we obtain the 2D joints by projection. **Overview.** We propose a method called JOTR, which utilizes transformers to fuse 2D and 3D features for 2D\(\&\)3D alignment and a novel contrastive learning strategy to globally supervise the 3D space for target humans. Our pipeline is depicted in Fig. 1(a) and explained in Sec. 3.1, where JOTR regresses SMPL parameters by fusing 2D and 3D features obtained from a cropped image patch. The proposed 3D joint contrastive learning is illustrated in Fig. 3 and explained in Sec. 3.2, including two contrastive losses: joint-to-non-joint contrast and joint-to-joint contrast. ### Fusing 2D and 3D Features with Transformers As analyzed in Sec. 1, relying solely on 2D features for achieving 2D alignment to reconstruct the human mesh in occluded scenarios may result in suboptimal performance. To overcome this limitation, we propose integrating both 2D and 3D features with transformers in the reconstruction process. Drawing inspiration from the success of transformer models in multi-modality fusion [24, 34], we propose an encoder-decoder transformer architecture that enables the mutual complementation of 2D and 3D features for 2D\(\&\)3D alignment. 
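Recalling the Human Body Model paragraph above, the data flow from the SMPL mesh to 2D joints can be sketched in a few lines of NumPy. The shapes and the weak-perspective camera below are illustrative assumptions rather than the authors' implementation; the sketch only mirrors the relations \(J_{3D}=W\mathcal{M}\) and \(J_{2D}=\mathbf{\Pi}(J_{3D},\pi)\) used later in the loss function.

```
import numpy as np

# Illustrative shapes only: SMPL produces 6890 mesh vertices, and a
# pretrained linear regressor W maps them to N 3D joints (N assumed 24 here).
N = 24
M = np.random.rand(6890, 3)      # mesh vertices from SMPL(theta, beta)
W = np.random.rand(N, 6890)      # pretrained joint regressor

J_3D = W @ M                     # 3D joints, J_3D = W M, shape (N, 3)

# Projection J_2D = Pi(J_3D, pi); a weak-perspective camera (s, tx, ty)
# is assumed here purely for illustration.
s, tx, ty = 1.0, 0.0, 0.0
J_2D = s * J_3D[:, :2] + np.array([tx, ty])   # shape (N, 2)
```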
**Lifting Module.** Unlike previous method [10] that lifts 2D features to 3D features via MLPs without integrating inductive bias or prior knowledge, we draw inspiration from bird's eye view representations [8, 40, 54, 57, 63, 65] in 3D space. As analyzed in BEV [57], the farther a voxel is from the camera, the less information it carries. To put this hypothesis into practice, BEV employs pre-defined 3D camera anchor maps to impact the 3D feature. Similar to BEV, we design learnable Rescaled Relative 3D Coordinates (RRC) \(C_{3D}\in\mathbb{R}^{D\times H\times W\times 3}\) in range \((0,1)\) to provide 3D spatial prior knowledge. In this representation, \(C_{ijk}\in\mathbb{R}^{3}\) represents the relative location of voxel \((x_{k},y_{j},z_{i})\) and the \(x\) and \(y\) coordinates are uniformly distributed with equal intervals. For \(Z\) axis, we utilize a monotonically increasing convex function \(\psi\) to rescale \(z\) coordinates unevenly as \(z^{\prime}=\psi(z)\). In practice, we employ \(\psi(z)=z^{\lambda},\lambda>1\) as rescaling function and \(\lambda\) is a learnable parameter with initial value of \(3.0\). The whole pipeline can be written as: \[\tilde{F_{\textit{3D}}} =MLP(F_{\textit{2D}}),\] \[\tilde{F_{\textit{3D}}} =CNN(Concat(\tilde{F_{\textit{3D}}},C_{\textit{3D}})),\] \[H_{\textit{3D}} =TransformerEncoder(\tilde{F_{\textit{3D}}}).\] JOTR first lifts pose-guided 2D feature \(F_{\textit{2D}}\in\mathbb{R}^{H\times W\times C}\) which is obtained from image and joint heatmap through CNN encoder to coarse 3D feature \(F_{\textit{3D}}^{\textit{c}}\in\mathbb{R}^{D\times H\times W\times C}\) via MLPs without any inductive bias or prior knowledge. Then, JOTR concatenates \(\tilde{F_{\textit{3D}}}\) and \(C_{\textit{3D}}\) in channel dimension. Following CoordConv [44], we apply a convolutional block to refine the concatenated feature to achieve space-aware 3D feature \(\tilde{F_{\textit{3D}}}\in\mathbb{R}^{D\times H\times W\times C}\). Finally, we utilize a transformer encoder (_i.e._, 3D transformer encoder in Fig. 2a) to enhance the global interaction of 3D space via self-attention mechanism, \[Attention(Q,K,V)=softmax\left(\frac{QK}{\sqrt{C}}\right)V, \tag{1}\] achieving the hidden state \(H_{\textit{3D}}\in\mathbb{R}^{D\times H\times W\times C}\). For the sake of simplicity, we omit the positional encoding and rearrangement of tensor in Eq. (1). **Fusion Transformer.** In contrast to prior 2D alignment technologies such as spatial averaging and 2D joint feature sampling, we propose the use of attention mechanisms to selectively focus on semantically distinct areas (_i.e._, visible human parts). Moreover, to estimate hidden information for achieving 3D alignment, we extend 2D features with 3D joint feature sampling. Drawing inspiration from the successful fusion of image and text representations in MDETR [24] and Moment-DETR [34], we design a transformer decoder-based fusion transformer to integrate 2D and 3D features and regress SMPL parameters in a coarse-to-fine manner leading to 2D \(\&\) 3D alignment. **SMPL/Joint Query.** Instead of concatenating or pooling on 2D and 3D features, JOTR decouples the SMPL parameters and 2D/3D joint features into separate query tokens, \(Query\in\mathbb{R}^{N_{q}\times C}\), comprising two distinct parts. The \(N_{s}\) tokens belong to SMPL token, where \(N_{s}=3\), and are responsible for regressing pose, shape and camera parameters \(\{\theta,\beta,\pi\}\) respectively. 
The remaining \(N_{j}=N_{q}-N_{s}\) tokens are responsible for locating 3D joints of the human and extracting corresponding 3D joint features, which refine the SMPL parameters and provide auxiliary supervision for the 3D space. **2D-Based Initial Regression.** As shown in Fig. 2c, we initially have no prior knowledge about the 3D joint loca Figure 2: (a) The overview of our method. JOTR achieves 2D and 3D features from a cropped image patch and fuses them with a fusion transformer for 2D and 3D alignment. (b) The detail of the lifting module, which is responsible for lifting pose-guided 2D features to space-aware 3D features. (c) The fusion transformer is applied for fusing 2D and 3D features with attention mechanisms. (d) The refining layer combine sampled 3D joint features and 2D global features to refine the regression. tions. We regress the SMPL parameters and initial 3D joints with a transformer decoder reasoning in 2D hidden state \(H_{2D}\in\mathbb{R}^{H\times W\times C}\) which is obtained by a transformer encoder (_i.e_., 2D transformer encoder in Fig. 1(a)) working on \(F_{2D}\). \(H_{2D}\) is set as \(K\) and \(V\), and \(Query\) tokens are treated as \(Q\) in Eq. (1). Subsequently, we obtain initial predictions for pose, shape, camera parameters, and 3D joint coordinates via MLPs working on the output of transformer decoder. **Refining with 3D Features.** To conserve computing resources, we avoid directly concatenating the hidden states (_i.e_., \(H_{2D}\) and \(H_{3D}\)). Instead, we use the initial prediction of 3D joints \(J^{\prime}_{3D}\in\mathbb{R}^{N_{j}\times 3}\) as reference points to sample "local" 3D joint features \(H_{J_{3D}}=\mathcal{F}\left(H_{3D},J^{\prime}_{3D}\right)\in\mathbb{R}^{N_{j} \times C}\) like [78] in Fig. 1(d), where \(\mathcal{F}(\cdot)\) denotes feature sampling and trilinear interpolation. We then concatenate \(H_{2D}\) with \(H_{J_{3D}}\) and feed them into another transformer decoder (_i.e_., a stack of refining layers in Fig. 1(c)) as \(K\) and \(V\) in Eq. (1). Note that \(Z\) axis is not uniform in our 3D space. When sampling "local" 3D joint features, we also apply \(\psi\) to rescale \(z\) in \(J^{\prime}_{3D}\) as mentioned earlier. Since the refining process consists of several identical transformer decoder layers, we naturally consider utilizing the outputs of each layer \(H_{d}\in\mathbb{R}^{L\times N\times C}\) as a cascade refinement, \[\beta^{l+1}=\beta^{l}+MLP\left(\beta^{l},H^{l}_{d}\right), \tag{2}\] where \(l\) denotes the \(l\)-th refining layer and \(MLP\left(\beta^{l},H^{l}_{d}\right)\) is responsible for learning the residual for correcting parameters via MLPs. Besides, we also regress and \(J^{\prime}_{3D}\) and the input of Vposer [53] with cascaded refinement as shown above. ### 3D Joint Contrastive Learning As analyzed in Sec. 1, due to the lack of explicit "global" supervision for 3D representations, the "local" 3D joint coordinates may not provide accurate enough supervision for the 3D features. Especially when the target person is obstructed by other individuals, similarities in their semantic appearances could result in confusion. To address this challenge, we propose a 3D joint contrastive learning strategy inspired by the success of pixel contrastive learning in semantic segmentation [62]. This approach enhances the representation of the target person while distinguishing them from other objects (_e.g_., other people, occlusions, and background). 
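Both losses introduced in the remainder of this subsection follow the InfoNCE pattern of Eq. (3). As an informal illustration (our own naming and shapes, not the authors' code), a minimal PyTorch sketch of this computation for a single anchor joint embedding could look as follows.

```
import torch
import torch.nn.functional as F

def joint_info_nce(anchor, positives, negatives, tau=0.07):
    """InfoNCE-style loss for one anchor joint embedding.

    anchor: (C,) tensor; positives: (P, C); negatives: (N, C).
    All embeddings are l2-normalized, as in the paper.
    """
    anchor = F.normalize(anchor, dim=0)
    positives = F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)

    pos_logits = positives @ anchor / tau            # (P,)
    neg_logits = negatives @ anchor / tau            # (N,)
    denom = pos_logits.exp() + neg_logits.exp().sum()
    # Average of -log(exp(pos) / (exp(pos) + sum exp(neg))) over positives.
    return (-(pos_logits.exp() / denom).log()).mean()

# Example call with illustrative counts (1024 positives, 2048 negatives, C=256).
loss = joint_info_nce(torch.randn(256), torch.randn(1024, 256), torch.randn(2048, 256))
```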
**Vanilla Contrastive Learning.** In computer vision, contrastive learning was originally applied for unsupervised representation learning, where the goal is to minimize the distance between similar images (_i.e_., an image with its augmented version) while maximizing the distance between dissimilar images (_i.e_., an image with another image in training set) in an embedding space. Usually, InfoNCE [16, 51] is used as the loss function for contrastive learning, \[\mathcal{L}^{\text{NCE}}_{I}\!=\!-\log\frac{\exp(\boldsymbol{i}\!\cdot\! \boldsymbol{i}^{+}/\tau)}{\exp(\boldsymbol{i}\!\cdot\!\boldsymbol{i}^{+}/\tau) \!+\!\sum_{\boldsymbol{i}^{-}\in\mathcal{N}_{I}}\exp(\boldsymbol{i}\!\cdot\! \boldsymbol{i}^{-}/\tau)}, \tag{3}\] where \(I\) is the anchor image, \(\boldsymbol{i}\in\mathbb{R}^{C}\) is the representation embedding of \(I\), \(\boldsymbol{i}^{+}\) is an embedding of a positive for \(I\), \(\mathcal{N}_{I}\) contains embeddings of negatives, '\(\cdot\)' denotes the inner (dot) product, and \(\tau\!>\!0\) is a temperature hyper-parameter. Note that all the embeddings in the loss function are \(\ell_{2}\)-normalized. **Joint-to-Non-Joint Contrast.** As shown in Fig. 2(a), to better distinguish occlusion cases, we consider _joint-to-non-joint contrast_ between the \(n\)-th round predicted joints (in Fig. 1(c)) and the entire 3D space, as there are many voxels outside the joints. We augment Eq. (3) in our joint-to-non-joint contrast setting. Since we employ trilinear interpolation to acquire the joint embedding from \(H_{3D}\), the joint embedding is a weighted sum of the 8 voxel embeddings in the 3D space. As a result, for an anchor joint \(j\), the positive samples are other predicted joints (not restricted to belonging to the same class), and the negative samples are the voxels that have no contribution to any joint embeddings. The joint-to-non-joint contrastive loss is defined as: \[\mathcal{L}^{\text{NCE}}_{j2n}\!=\!\frac{1}{|\mathcal{P}_{j}|}\!\!\sum_{ \boldsymbol{j}^{+}\in\mathcal{P}_{j}}\!\!-\!\log\frac{\exp(\boldsymbol{j}\! \cdot\!\boldsymbol{j}^{+}\!\!/\tau)}{\exp(\boldsymbol{j}\!\cdot\!\boldsymbol{j} ^{+}\!\!/\tau)\!+\!\sum_{\boldsymbol{n}^{-}\in\mathcal{N}_{n}}\!\exp( \boldsymbol{j}\!\cdot\!\boldsymbol{n}^{-}\!\!/\tau)}, \tag{4}\] where \(\mathcal{P}_{j}\) is joint embedding collections of positive samples and \(\mathcal{N}_{n}\) denote non-joint voxel embedding collections of negative samples, for joint \(j\). **Joint-to-Joint Contrast.** As shown in Fig. 2(b), to strengthen the internal connections among joints of the same category, we consider _joint-to-joint contrast_ among human joints. We extend Eq. (3) for applying to our joint-to-joint contrast setting. Essentially, the data samples in our contrastive loss computation are the \(n\)-th round predicted joints (in Fig. 1(c)) and ground truth 3D joints. For an anchor joint \(j\) from predicted joints with its corresponding semantic label \(\bar{c}\) (_e.g_., head, right hand, and neck), the positive samples are ground truth joints that also belong to the class \(\bar{c}\), and the negative samples are the \(n\)-th round predicted joints belonging to the other classes \(\mathcal{C}\setminus\{c_{j}\}\). As a result, the joint-to-joint Figure 3: (a) The detail of joint-to-non-joint contrastive learning. (b) The detail of joint-to-joint contrastive learning. 
contrastive loss is defined as: \[\mathcal{L}_{j2j}^{\text{NCE}}=\frac{1}{|\mathcal{P}_{j}|}\!\!\!\sum_{j^{+}\in \mathcal{P}_{j}}\!\!\!\!-\!\!\log\frac{\exp(\mathbf{j}\!\cdot\!\mathbf{j}^{+}\!\!/\tau)} {\exp(\mathbf{j}\!\cdot\!\mathbf{j}^{+}\!\!/\tau)+\sum_{\mathbf{j}\!-\!\mathbf{j}\in\mathcal{N} _{j}^{\text{NCE}}}\!\!\exp(\mathbf{j}\!\cdot\!\mathbf{j}^{-}\!\!/\tau)}, \tag{5}\] where \(\mathcal{P}_{j}\) and \(\mathcal{N}_{j}\) denote joint embedding collections of the positive and negative samples, respectively, for joint \(j\). Note that the positive and negative samples, as well as the anchor joint \(j\) in both _joint-to-non-joint_ and _joint-to-joint_ contrast are not necessarily limited to the same 3D space. The joint-to-non-joint contrastive loss in Eq. (4) and joint-to-joint contrastive loss in Eq. (5) are complementary to each other; the former enables the network to learn discriminative joint features that are distinctly different from those of other non-joint voxels (_e.g._, occlusions), while the latter helps to regularize the joint embedding space by improving intra-class compactness and inter-class separability. ### Loss Function. Finally, we obtain refined SMPL parameters \(\{\theta,\beta,\pi\}\). We can achieve mesh vertices \(M=\mathcal{M}(\theta,\beta)\in\mathbb{R}^{6890\times 3}\) and 3D joints from mesh \(J_{3D}=W\mathcal{M}\in\mathbb{R}^{N\times 3}\) accordingly. We follow common practices [10, 25, 32] to project 3D joints on 2D space \(J_{2D}=\mathbf{\Pi}(J_{3D},\pi)\in\mathbb{R}^{N\times 2}\) and add supervisions with 2D keypoints. Meanwhile, when 3D annotations are available, we also add 3D supervision on SMPL parameters and 3D joint coordinates. Overall, the loss function can be written as follows: \[\mathcal{L}= \lambda_{3D}\mathcal{L}_{3D}+\lambda_{2D}\mathcal{L}_{2D}+ \lambda_{SMPL}\mathcal{L}_{SMPL} \tag{6}\] \[+\lambda_{j2n}\sum\nolimits_{j}\mathcal{L}_{j2n}^{\text{NCE}}+ \lambda_{j2j}\sum\nolimits_{j}\mathcal{L}_{j2j}^{\text{NCE}},\] where \(j\) is the sampled anchor joints and the first three is calculated as: \[\mathcal{L}_{\text{3D}} =\|J_{3D}\ -\ \hat{J_{3D}}\|,\] \[\mathcal{L}_{\text{2D}} =\|J_{2D}\ -\ \hat{J_{2D}}\|,\] \[\mathcal{L}_{\text{SMPL}} =\|\theta\ -\ \hat{\theta}\|+\|\beta\ -\ \hat{\beta}\|,\] where \(\|\cdot\|\) denotes L1 norm. \(\hat{J_{2D}}\), \(\hat{J_{3D}}\), \(\hat{\theta}\), and \(\hat{\beta}\) denote the ground truth 2D keypoints, 3D joints, pose parameters and shape parameters, respectively. ## 4 Experiments **Implementation Detail.** This proposed JORR is validated on the ResNet-50 [18] backbone. Following 3DCRwdNet [10], we initialize ResNet from Xiao _et al_. [64] for fast convergence. We use AdamW optimizer [47] with a batch size of 256 and weight decay of \(10^{-4}\). The initial learning rate is \(10^{-4}\). The ResNet-50 backbone takes a \(256\times 256\) image as input and produces image features with size of \(2048\times 8\times 8\). We build the 3D features with size of \(256\times 8\times 8\) and 2D features with size of \(256\times 8\times 8\). As for weights for multiple different losses, we follow [27] to adjust them dynamically using learnable parameters. For joint-to-non-joint contrast, we sample \(100\) anchor joints per GPU in each mini-batch, which are paired with \(1024\) positive and \(2048\) and negative samples. For joint-to-joint contrast, we sample \(100\) anchor joints per GPU in each mini-batch, which are paired with \(128\) positive and \(256\) and negative samples. 
Both contrastive losses are set to a temperature of \(0.07\). More details can be found in the supplementary material. **Training.** Following the settings of previous work [10, 25, 32], our approach is trained on a mixture of data from several datasets with 3D and 2D annotations, including Human3.6M [20], MuCo-3DHP [48], MSCOCO [43], and CrowdPPose [35]. Only the training sets are used, following the standard split protocols. For the 2D datasets, we also utilize their pseudo ground-truth SMPL parameters [49] for training. **Evaluation.** The 3DPW [61] test split, 3DOH [76] test split, 3DPW-PC [56, 61], 3DPW-OC [61, 76], 3DPW-Crowd [10, 61] and CMU-Panoptic [22] datasets are used for evaluation. 3DPW-PC and 3DPW-Crowd are the _person-person_ occlusion subset of 3DPW, 3DPW-OC is the _person-object_ occlusion subset of 3DPW and 3DOH is another _person-object_ occlusion specific dataset. We adopt per-vertex error (PVE) in mm to evaluate the 3D mesh error. We employ Procrustes-aligned mean per joint position error (PA-MPJPE) in mm and mean per joint position error (MPJPE) in mm to evaluate the 3D pose accuracy. As for CMU-Panoptic, we only report mean per joint position error (MPJPE) in mm following previous work [10, 21, 56]. \begin{table} \begin{tabular}{l|c c c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**3DPW-OC**} & \multicolumn{3}{c|}{**3DOH**} & \multicolumn{3}{c|}{**3DPW-PC**} & \multicolumn{3}{c}{**3DPW-Crowd**} \\ & **MPJPE\(\downarrow\)**PA-MPJPE\(\downarrow\)**PVE\(\downarrow\)** & **MPJPE\(\downarrow\)**PA-MPJPE\(\downarrow\)** & **PVE\(\downarrow\)** & **MPJPE\(\downarrow\)**PA-MPJPE\(\downarrow\)** & **PVE\(\downarrow\)** & **MPJPE\(\downarrow\)** & **PA-MPJPE\(\downarrow\)** & **PVE\(\downarrow\)** \\ \hline 2L-MeshNet [50] & 92.0 & 61.4 & 129.5 & - & - & - & 117.3 & 80.0 & 160.2 & 115.7 & 73.5 & 162.0 \\ SPIN [32] & 95.5 & 60.7 & 121.4 & 110.5 & 71.6 & 124.2 & 122.1 & 77.5 & 159.8 & 121.2 & 69.9 & 144.1 \\ PyMAF [72] & 89.6 & 59.1 & 113.7 & 101.6 & 67.7 & 116.6 & 117.5 & 74.5 & 154.6 & 115.7 & 66.4 & 147.5 \\ ROMP [56] & 91.0 & 62.0 & - & - & - & - & 98.7 & 69.0 & - & 104.8 & 63.9 & 127.8 \\ OCHMR [28] & 112.2 & 75.2 & 145.9 & - & - & - & - & - & - & - & - & - & - \\ PARE\({}^{\text{e}}\)[31] & 83.5 & 57.0 & 101.5 & 109.0 & 63.8 & 117.4 & 96.8 & 64.5 & 122.4 & 94.9 & 57.5 & 117.6 \\ 3DcrowdNet [10] & 83.5 & 57.1 & 101.5 & 102.8 & 61.6 & 111.8 & 90.9 & 64.4 & 114.8 & 85.8 & 55.8 & 108.5 \\ \hline Ours & **75.7** & **52.2** & **92.6** & **98.7** & **59.3** & **104.8** & **86.5** & **58.3** & **109.7** & **82.4** & **52.0** & **103.4** \\ \hline \end{tabular} \end{table} Table 1: Comparisons to the state-of-the-art methods under severe occlusion. The units for mean joint and vertex errors are in mm. PARE* use a HRNet-32 backbone, others are with ResNet-50. ### Comparison to the State-of-the-Art on Occlusion Benchmark **3DPW-OC [61, 76]** is a person-object occlusion subset of 3DPW and contains 20243 persons. Tab. 1 shows our method achieve a new state-of-the-art performance on 3DPW-OC. **3DOH [76]** is a person-object occlusion-specific dataset and contains 1290 persons in testing set, which incorporates a greater extent of occlusions than 3DPW-OC. For a fair comparison, we initialize PARE with weights that are not trained on the 3DOH training set, resulting in different performances from the results reported in [31]. Tab. 1 shows our method surpasses all the competitors with \(59.3\) (PA-MPJPE). 
**3DPW-PC [56, 61]** is a multi-person subset of 3DPW and contains 2218 persons' annotations under person-person occlusion. Tab. 1 shows our method surpasses all the competitors with \(58.8\) (PA-MPJPE). **3DPW-Crowd [10, 61]** is a person crowded subset of 3DPW and contains 1923 persons. We slightly surpass previous state-of-the-art as shown in Tab. 1. **CMU-Panoptic [22]** is a dataset with multi-person indoor scenes. We follow previous methods [10, 21] applying 4 scenes for evaluation without using any data from training set. Tab. 3, shows that our method outperforms previous 3D human pose estimation methods on CMU-Panoptic, which means our model also works well for indoor and daily life scenes. ery performance. For the utilization of 2D features, flatting shows better performance than sampling, which supports our hypothesis that sampling joint features in obscured regions could have a negative impact. For 3D features, we do not conduct experiments for flatting 3D features due to memory limitations. Moreover, we believe that 3D joint feature sampling is adequate for alleviating occlusion problems by attending to the accurate depth. Fig. 4 shows the attention weights in the last refining layer. The query tokens significantly pay more attention to 3D features, which validates the usefulness of our fusion framework. **Validation of coarse-to-fine regression:** We validate the accuracy of intermediate predictions of fusion transformer in Tab. 5, which shows the coarse-to-fine regression process in JOTR. **Decoupling SMPL query:** JOTR performance improvement is observed in Tab. 6 by decoupling SMPL query from joint query. In the experiment without decoupling, we employ mean pooling on the decoder's output and regress SMPL parameters through MPLs. Decoupling SMPL query is presumed to enhance performance by reducing interference in executing other tasks (_e.g_., joint localization) during SMPL parameter regression. **3D Joint contrastive learning:** The impact of 3D joint contrastive learning on the performance of JOTR is presented in Tab. 7. Both joint-to-non-joint and joint-to-joint contrastive losses result in improved performance, with the former being more effective as it incorporates global supervision for the entire 3D space. Our contrastive losses also lead to more compact and well-separated learned joint embeddings, as shown in Fig. 5. This indicates that our network can generate more discriminative 3D features, producing semantically clear 3D spaces and promising results. using an encoder-decoder transformer architecture to achieve 2D\(\&\)3D alignment. Furthermore, we introduce two noevel 3D joint contrastive losses that enable global supervision of the 3D space of target persons, producing meaningful 3D representations. Extensive experiments on 3DPW benchmarks show that JOTR achieves the new state of the art. **Limitations and Broader Impact.** 1) JOTR relies on the human pose predictor to detect 2D keypoints, leading to long inference times. 2) In the future, JOTR has the potential to be integrated with bottom-up 3D human mesh recovery methods for real-time applications. **Acknowledgements.** This work was supported by the Natural Science Foundation of Zhejiang Province (DT23F020008) and the Fundamental Research Funds for the Central Universities (226-2023-00051).
In this study, we focus on the problem of recovering a 3D human mesh from a single image under occluded conditions. Most state-of-the-art methods aim to improve 2D alignment technologies, such as spatial averaging and 2D joint sampling. However, they tend to neglect the crucial aspect of 3D alignment achieved by improving 3D representations. Furthermore, recent methods struggle to separate the target human from occlusions or background in crowded scenes, as they optimize the 3D space of the target human using only 3D joint coordinates as local supervision. To address these issues, a desirable method would involve a framework for fusing 2D and 3D features and a strategy for optimizing the 3D space globally. Therefore, this paper presents the 3D JOint contrastive learning with TRansformers (JOTR) framework for handling occluded 3D human mesh recovery. Our method includes an encoder-decoder transformer architecture that fuses 2D and 3D representations in a coarse-to-fine manner and a novel 3D joint contrastive learning approach that adds explicit global supervision to the 3D feature space.
2309.09937
TransientViT: A novel CNN - Vision Transformer hybrid real/bogus transient classifier for the Kilodegree Automatic Transient Survey
The detection and analysis of transient astronomical sources is of great importance to understand their time evolution. Traditional pipelines identify transient sources from difference (D) images derived by subtracting prior-observed reference images (R) from new science images (N), a process that involves extensive manual inspection. In this study, we present TransientViT, a hybrid convolutional neural network (CNN) - vision transformer (ViT) model to differentiate between transients and image artifacts for the Kilodegree Automatic Transient Survey (KATS). TransientViT utilizes CNNs to reduce the image resolution and a hierarchical attention mechanism to model features globally. We propose a novel KATS-T 200K dataset that combines the difference images with both long- and short-term images, providing a temporally continuous, multidimensional dataset. Using this dataset as the input, TransientViT achieved a superior performance in comparison to other transformer- and CNN-based models, with an overall area under the curve (AUC) of 0.97 and an accuracy of 99.44%. Ablation studies demonstrated the impact of different input channels, multi-input fusion methods, and cross-inference strategies on the model performance. As a final step, a voting-based ensemble to combine the inference results of three NRD images further improved the model's prediction reliability and robustness. This hybrid model will act as a crucial reference for future studies on real/bogus transient classification.
Zhuoyang Chen, Wenjie Zhou, Guoyou Sun, Mi Zhang, Jiangao Ruan, Jingyuan Zhao
2023-09-18T16:55:41
http://arxiv.org/abs/2309.09937v1
TransientViT: A novel CNN - Vision Transformer hybrid real/bogus transient classifier for the Kilodegree Automatic Transient Survey ###### Abstract The detection and analysis of transient astronomical sources is of great importance to understand their time evolution. Traditional pipelines identify transient sources from difference (\(D\)) images derived by subtracting prior-observed reference images (\(R\)) from new science images (\(N\)), a process that involves extensive manual inspection. In this study, we present TransientViT, a hybrid convolutional neural network (CNN) - vision transformer (ViT) model to differentiate between transients and image artifacts for the Kilodegree Automatic Transient Survey (KATS). TransientViT utilizes CNNs to reduce the image resolution and a hierarchical attention mechanism to model features globally. We propose a novel KATS-T 200K dataset that combines the difference images with both long- and short-term images, providing a temporally continuous, multidimensional dataset. Using this dataset as the input, TransientViT achieved a superior performance in comparison to other transformer- and CNN-based models, with an overall area under the curve (AUC) of 0.97 and an accuracy of 99.44%. Ablation studies demonstrated the impact of different input channels, multi-input fusion methods, and cross-inference strategies on the model performance. As a final step, a voting-based ensemble to combine the inference results of three _NRD_ images further improved the model prediction reliability and robustness. This hybrid model will act as a crucial reference for future studies on real/bogus transient classification. keywords: data analysis - techniques: image processing - surveys ## 1 Introduction Time-domain astronomy refers to the study of astronomical objects that change with time. In the quest for finding transient astronomical events, a multitude of large-scale optical sky surveys have been conducted, including the All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al. (2014)), Dark Energy Survey (DES; Collaboration: et al. (2016)), Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Kaiser (2004)), Sloan Digital Sky Survey (SDSS; York et al. (2000)), and Zwicky Transient Facility (ZTF; Bellm et al. (2018)). These comprehensive all-sky surveys have enabled a thorough exploration of the optical sky by generating an overwhelming amount of image data, propelling astronomy into the currently emerging big data era (Zhang & Zhao (2015)). Conventional image processing pipelines locate and extract potential transient candidates, i.e. point sources, from the difference images constructed by subtracting a template image (taken at the time of observation) from the science image (Cabrera-Vives et al. (2017)). Subsequently, candidates are classified as either true transients with real astrophysical significance or 'bogus' detections that are discarded as artifacts. This process typically requires extensive manual inspection. Given the ephemeral nature of transient events, their prompt and near real-time detection is crucial. A single night of observation can yield a plethora of bogus detections, rendering manual inspection of each candidate unfeasible. This has led to an increasing demand for rapid and accurate algorithms to distinguish true transients from artifacts. Extensive efforts have been dedicated to integrating machine learning (ML) methods into image processing pipelines to facilitate the detection of transients. 
The pioneering work of Romano et al. (2006) allowed the application of support vector machines (SVMs; Hearst et al. (1998)) towards supernovae detection. Bloom et al. (2012) and Brink et al. (2013) employed random forest (RF; Breiman (2001)) classification algorithms to distinguish real and bogus transient candidates. Goldstein et al. (2015), Du Buisson et al. (2015), and Wright et al. (2015) further compared several well-known traditional ML algorithms for real/bogus classification tasks and found RFs to exhibit superior performance compared to other mainstream approaches. Deep learning (DL) methods involving convolutional neural networks (CNNs; Fukushima (1980)), frequently applied to diverse computer vision recognition tasks, are known to outperform conventional ML approaches (Wang et al. (2019)). Wright et al. (2017) proposed an approach combining manual classification with CNN-based recognition for transient search. Furthermore, several flavors of CNN-based real/bogus classifiers have been put forward by Andreoni et al. (2017), Cabrera-Vives et al. (2017), Duey et al. (2019), Hosenie et al. (2021a), Takahashi et al. (2022), and Acero-Cuellar et al. (2022). Yin et al. (2021) extended the framework by developing a fully convolutional one-stage (FCOS; Tian et al. (2019)) algorithm for supernova detection. Despite their effectiveness, CNNs face limitations in describing low-level features beyond the effective receptive fields. Therefore, it is not conducive to making full use of the context information to capture the features of images. Stacking deeper convolutional layers aids in extracting higher levels of image features, but substantially increases the computational costs (Chen et al. (2022)). To address the aforementioned issue, we propose a hybrid CNN - vision transformer (ViT; Dosovitskiy et al. (2020)) model, named TransientViT, for real/bogus transient classification. ViTs can facilitate classification as they utilize a self-attention mechanism that enables global information integration, rather than being limited to local information specific to individual transients. CNN-ViT hybrid models are equipped with the locality of CNNs as well as the global connectivity of ViTs (Manzari et al. (2023)). Additionally, we emphasized the reduction of computational and inference time costs for the proposed real/bogus classifier, considering the high computational demand of the original ViT model. The remainder of this paper is structured as follows: In Section 2, we introduce our dataset obtained from the Kilodegree Automatic Transient Survey (KATS) telescope array. Section 3 describes the overall architecture of the proposed TransientViT model. In Section 4, we present our experimental results and compare them to other mainstream ViT- and CNN-based models. Finally, we summarize and present our conclusions in Section 5. ## 2 Dataset ### Kats-T 200k We propose a novel KATS-T 200K dataset, consisting of nine images per set of observation data, encompassing both long- and short-term images. The images in our dataset were acquired from KATS conducted at the Xingming Observatory, Xinjiang, China. KATS comprises an array of six 0.28 m Rowe-Ackermann Schmidt Astrograph (RASA) telescopes, with a field of view of 6.7 \(\times\) 6.6 square degrees. For the transient survey, 30 s exposure images are taken without the use of filters, yielding a typical limiting magnitude of 19 mag. The dataset consists of transient candidate detections captured by KATS from April 1, 2023 to July 29, 2023. 
From a total of 201,358 samples, 561 confirmed transients were reported in the Transient Name Server (TNS)1, along with 200,797 bogus detections. The dataset was split into training, validation, and test sets as shown in Table 1. Footnote 1: [https://www.wis-tns.org](https://www.wis-tns.org) Transients are identified within difference images derived by subtracting the new image, taken at the time of observation, from the reference image, captured on a prior date. The proposed KATS-T 200K dataset incorporates these commonly used difference images as well as long-term images (observed over larger intervals) and short-term images (observed over shorter intervals). In Fig. 1, the long-term images are presented in a vertical sequence (top-to-bottom: difference, new, and reference image), and the short-term images, captured on the same day at varying intervals, are presented in a horizontal sequence. The combination of long- and short-term images allows for a more continuous temporal sequence, providing multidimensional data for real/bogus classification. ### Data preprocessing The preprocessing procedure for the KATS-T 200K dataset is shown in Fig. 2. The vertical axis represents the long-term difference (\(D\)), new (\(N\)), and reference (\(R\)) images, while the horizontal axis represents the images at varying times within the same day with shorter intervals. The image is first divided into three short-term data segments horizontally. For each segment, the \(N\), \(R\), and \(D\) images are stacked (in that order) to generate an _NRD_ three-channel image (Hossenie et al. (2021)). In other words, each sample is processed into three _NRD_ images with temporal information. ### TransientVIT Components #### 3.2.1 Stem An input image is divided into non-overlapping patches using two consecutive 3x3 convolutions with a stride of 2. These patches are then transformed into \(D\)-dimensional embeddings. After each convolution, batch normalization (BN) and rectified linear unit (ReLU) activation (Agarap (2019)) are applied to the embeddings. #### 3.2.2 Downsample Blocks TransientViT utilizes a hierarchical structure, which implements downsampling before the stage layer to decrease the feature map resolution by a factor of 2. #### 3.2.3 Convolutional Blocks The _Conv_ blocks (Fig. 3) constitute stages 1 and 2, both comprised of residual convolutional blocks. The output of the _Conv_ block can be expressed as (Hendrycks and Gimpel (2023)) \[\hat{x}=\mathrm{GELU}[\mathrm{BN}(Conv_{3x3}(x))], \tag{1}\] \[x=\mathrm{BN}(Conv_{3x3}(\hat{x}))+x. \tag{2}\] #### 3.2.4 Hierarchical Attention The hierarchical attention structure is incorporated in stages 3 and 4, which was first proposed for the FasterViT model (Hatamizadeh et al. (2023)). It decomposes the quadratic time complexity of global self-attention into multiple simpler attention mechanisms, effectively mitigating computational overhead. The approach initiates by adopting the use of local windows, as employed for the Swin Transformer (Liu et al. (2021)). Subsequently, carrier tokens (CTs) are introduced to summarize elements for the entire local window. Global information is summarized and propagated by applying the first attention block to the CTs. To ensure localized access, the local window tokens and CTs are concatenated, allowing each local window to exclusively access its corresponding set of CTs. 
By applying self-attention to the concatenated tokens, efficient exchange of both local and global information is enabled while minimizing computational costs. The concept of hierarchical attention was formulated by alternating between sub-global (CTs) and local (windowed) self-attention. Conceptually, CTs can be further grouped into windows, featuring a higher order of CTs. Figure 3: Architecture of the TransientViT model. Figure 4: Structure of the hierarchical attention block. #### 3.2.5 Adaptive Cross-Attention Head The feature information extracted from stage 4 is directed into the adaptive cross-attention head to facilitate feature fusion and classification (Fig. 5). By processing three _NRD_ images, three corresponding feature maps are generated. The adaptive cross-attention head dynamically selects two feature maps to perform cross-attention computation. Subsequently, the selected feature maps are concatenated and subjected to layer normalization (LN), expressed as (Ba et al. (2016)) \[\hat{x}_{tc}=\gamma_{c}\frac{x_{tc}-\mu_{t}^{In}}{\sqrt{(\sigma_{t}^{In})^{2}+ \epsilon}}+\beta_{c}, \tag{3}\] where \(t\) and \(c\) are the token and embedding-channel indices, respectively, \(\epsilon\) is a small positive constant to avoid a zero denominator, \(\gamma_{c}\) and \(\beta_{c}\) are two learnable parameters in the affine transformation, and the LN normalization constants, \(\mu_{t}^{In}\) and \(\sigma_{t}^{In}\), can be expressed as \[\mu_{t}^{In}=\frac{1}{C}\sum_{c=1}^{C}x_{tc}, \tag{4}\] \[\sigma_{t}^{In}=\sqrt{\frac{1}{C}\sum_{c=1}^{C}(x_{tc}-\mu_{t}^{In})^{2}}. \tag{5}\] Finally, we apply the multi-layer perceptron (MLP; Pinkus (1999)) and fully connected (FC) layers. TransientViT effectively models the spatial information in the _NRD_ images and captures temporal information from the short-term data segments, amplifying the capability of the model to distinguish between real and bogus transient candidate detections. #### 3.2.6 Cross Inference After preprocessing, the three _NRD_ segments from the KATS-T 200K dataset were directed into the TransientViT model for inference, generating individual prediction results. Ultimately, a voting-based ensemble was applied to the individual inferences to obtain the final classification result (Fig. 6). Figure 5: Adaptive cross-attention head architecture. Figure 6: Cross-inference process. ## 4 Results ### Implementation details TransientViT was implemented using PyTorch 1.12.0 (Paszke et al. (2019)) on Python 3.7. To train the TransientViT model, we leveraged 8 Nvidia GeForce RTX 3090 GPUs, each equipped with 24 GB of VRAM. During training, we employed the AdamW (Loshchilov and Hutter (2019)) optimizer with a low learning rate (lr = 0.0001) and a batch size of 32. The learning rate scheduler followed a cosine decay strategy (Loshchilov and Hutter (2017)). Our model utilized the cross-entropy loss function expressed as \[\text{CE}=-(y\log(p)+(1-y)\log(1-p)), \tag{6}\] where \(y\) is the binary indicator for a class and \(p\) denotes the probability assigned to that class. We employed offline data augmentation to mitigate the risk of over-fitting, thus avoiding poor generalization of the model to the test set. Our primary data augmentation techniques encompassed color jitter, RandAugment (magnitude 9; Cubuk et al. (2019)), random horizontal flip, and random vertical flip. These offline data augmentation strategies noticeably amplify the variations in the training dataset. 
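The training configuration described above maps onto standard PyTorch components. The following is a minimal sketch under stated assumptions, not the released implementation: a torchvision `resnet50` stands in for TransientViT (whose reference code is linked at the end of the paper), and the dataset path and `ImageFolder` layout are hypothetical.

```python
# Minimal training-setup sketch: AdamW (lr = 1e-4), cosine decay, cross-entropy,
# and the augmentations listed above (color jitter, RandAugment m=9, random flips).
# Assumptions: stand-in backbone, hypothetical dataset path, 2 classes (real/bogus).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.ColorJitter(0.4, 0.4, 0.4),           # color jitter
    transforms.RandAugment(num_ops=2, magnitude=9),  # RandAugment, magnitude 9
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("kats_t_200k/train", transform=train_tf)  # hypothetical path
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(num_classes=2).to(device)    # stand-in backbone, not TransientViT itself

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # 2-class form of Eq. (6), smoothing per Table 2
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)  # cosine decay

for epoch in range(200):                              # 200 epochs per Table 2
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The warmup schedule listed in Table 2 is omitted here for brevity; it would wrap the scheduler above with a short linear ramp over the first epochs.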
### Metrics We considered real transients as positives (\(p\)) and bogus detections as negatives (\(n\)). The probability generated by TransientViT to indicate whether the source at the center is to be classified as a bogus or real transient is denoted as \(P\). In order to arrive at a definitive decision, it is imperative to establish a probability threshold. Our primary objective was to minimize false positives (FP), while maintaining the lowest feasible level for false negatives (FN; ideally below 5% of the total number of candidates). Beyond assessing the classification performance, we also incorporated additional metrics as explained below: * Precision (Prec): This metric calculates the number of real transients among all the objects classified as transients by TransientViT. A high precision score indicates that the model is consistently accurate in predicting the positive class representing real transients. However, the dataset used in this study was highly imbalanced. Thus, precision might not serve as a reliable indicator of model performance within this context. The metric is defined as \[\text{Prec}=\frac{\text{TP}}{(\text{TP}+\text{FP})}.\] (7) * Recall: This metric gauges the quantity of the accurately classified transients within the dataset. A high recall score signifies the adeptness of the model in detecting a substantial proportion of transients. This metric is defined as \[\text{Recall}=\frac{\text{TP}}{(\text{TP}+\text{FN})}.\] (8) * Receiver Operating Characteristic (ROC) Curve: This metric measures the relationship of the true positive rate (TPR) with the false positive rate (FPR). An ideal model would exhibit a vertical line at \(x=\text{FPR}=0\) and a horizontal line at \(y=\text{TPR}=1\). This metric provides a visual representation of the overall performance of the model, allowing a comprehensive evaluation of its efficacy. Here, FPR and TPR are defined as \[\text{FPR}=\frac{\text{FP}}{\text{TN}+\text{FP}}\] (9) \[\text{TPR}=\frac{\text{TP}}{\text{TP}+\text{FN}} \tag{10}\] * Precision-Recall (P-R) curve: This metric demonstrates the model performance for classifying the \(p\) class (real transients). * Area Under the Curve (AUC): This metric measures the overall performance of a binary classifier. The AUC value is within the range [0.5-1.0], where the minimum value represents the performance of a random classifier and the maximum value corresponds to a perfect classifier (i.e., with a classification error rate of zero). (Melo (2013)) \[\text{AUC}=\int\text{TPR}d(\text{FPR}) \tag{11}\] ### Experiment results Based on the predicted and actual classes, we constructed a confusion matrix to have an overview of the classification results (Fig. 7). The confusion matrix displays the fraction of correctly classified candidates as the diagonal elements (TP and TN). The off-diagonal values show the misclassified examples (FP and FN). The ROC curve is shown in Fig. 9b, with an area under the curve (AUC) of 0.97. The loss and accuracy curves during the training and validation processes are shown in Fig. 10. The model converged at the 50th epoch during training. The validation loss demonstrated a tendency to stabilize in the later phases of training, implying that the model did not significantly overfit. 
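For reference, the metrics defined above (precision, recall, ROC, P-R, and AUC) can be computed directly from predicted probabilities. The snippet below is a small illustrative sketch using scikit-learn; `y_true` and `y_score` are toy placeholders for the test labels and TransientViT output probabilities, not actual results.

```python
# Sketch of the evaluation metrics defined above, computed with scikit-learn.
# y_true (0 = bogus, 1 = real) and y_score (probability of "real") are toy placeholders.
import numpy as np
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             roc_curve, precision_recall_curve, roc_auc_score)

y_true = np.array([0, 0, 1, 1, 0, 1])                 # toy labels
y_score = np.array([0.1, 0.6, 0.8, 0.9, 0.2, 0.4])    # toy probabilities

y_pred = (y_score >= 0.5).astype(int)                 # same 0.5 threshold as in Fig. 7

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
prec = precision_score(y_true, y_pred)                # Eq. (7): TP / (TP + FP)
rec = recall_score(y_true, y_pred)                    # Eq. (8): TP / (TP + FN)

fpr, tpr, _ = roc_curve(y_true, y_score)              # Eqs. (9)-(10), ROC curve
auc = roc_auc_score(y_true, y_score)                  # Eq. (11), area under the ROC curve
p, r, _ = precision_recall_curve(y_true, y_score)     # P-R curve

print(f"precision={prec:.2f} recall={rec:.2f} AUC={auc:.2f}")
```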
\begin{table} \begin{tabular}{l c} \hline \hline Configuration & Parameters \\ \hline pretrain & ImageNet-1k (224\(\times\)224) \\ optimizer & AdamW \\ base learning rate & 1e-4 \\ warmup learning rate & 1e-5 \\ weight decay & 0.1 \\ batch size & 256 \\ training epochs & 200 \\ learning rate schedule & cosine decay \\ warmup epochs & 20 \\ randaugment & (9, 0.5) \\ mixup & None \\ cutmix & None \\ random erasing & 0.2 \\ label smoothing & 0.1 \\ \hline \hline \end{tabular} \end{table} Table 2: TransientViT training settings. Figure 7: Confusion matrix for TransientViT. The predicted labels for all the classifiers are obtained using a threshold of 0.5. The values within each row are normalized. The numbers presented outside the parentheses represent the raw counts. ### Ablation study #### 4.4.1 Backbone We conducted a performance evaluation of multiple classification models trained on the KATS-T 200K dataset, and compared them with the proposed TransientViT model (Table 3). For a similar parameter count, TransientViT exhibited superior performance compared to other transformer-based models, such as EfficientViT and Swin Transformer, as well as CNN-based models, such as ConvNeXt and ResNet, for transient classification. It is important to note that, due to the distinct characteristics of TransientViT, we replaced the adaptive cross-attention head with a conventional MLP head during the backbone comparison, facilitating a standard single-image input. #### 4.4.2 Image channel We conducted experiments to assess the training performance of different image channels using TransientViT (Table 4). Training with the _NRD_ images yielded superior metrics in comparison to using a single Diff channel. #### 4.4.3 Multi-input fusion We conducted several sets of experiments for multiple input images and explored various fusion methods for TransientViT (Table 5), namely: 1. **SuperImage:** We directly utilized grayscale images with a patch size of 3\(\times\)3 as input for the model. 2. **Feature Concatenation:** We concatenated the features extracted from two image segments using TransientViT, along the channel dimension, resulting in an increase in feature depth from 1\(C\) to 2\(C\). Subsequently, the concatenated feature representation was utilized as the input for the conventional MLP head. 3. **Feature Addition:** We performed element-wise addition of the features extracted from two image segments, while preserving the original feature depth. 4. **Adaptive cross-attention:** See Section 3.2.5 for details; a minimal sketch contrasting these fusion variants is given below. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Channel & Test Accuracy & Test AUC \\ \hline \multirow{2}{*}{TransientViT} & Diff & 99.23 & 97.36 \\ & NRD & **99.24** & **97.90** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy and AUC for different channel images employed in TransientViT. Figure 8: Real (red) / bogus (blue) probability distribution. 
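To make the fusion variants above concrete, the following is an illustrative PyTorch sketch of feature concatenation, feature addition, and a simplified cross-attention between two segment feature maps. The tensor shapes, head count, and the use of `nn.MultiheadAttention` are assumptions for illustration only and do not reproduce the exact TransientViT adaptive cross-attention head.

```python
# Illustrative sketch of the multi-input fusion variants compared above
# (feature concatenation, feature addition, simplified cross-attention).
import torch
import torch.nn as nn

B, N, C = 8, 49, 512                  # assumed batch, tokens, channels of a stage-4 feature map
feat_a = torch.randn(B, N, C)         # features of NRD segment A
feat_b = torch.randn(B, N, C)         # features of NRD segment B

# (2) Feature concatenation: depth grows from 1C to 2C before the MLP head.
fused_cat = torch.cat([feat_a, feat_b], dim=-1)        # (B, N, 2C)

# (3) Feature addition: element-wise sum keeps the original depth.
fused_add = feat_a + feat_b                            # (B, N, C)

# (4) Simplified cross-attention: queries from one segment attend to the other,
#     followed by concatenation and layer normalization (cf. Eq. 3).
cross_attn = nn.MultiheadAttention(embed_dim=C, num_heads=8, batch_first=True)
attended, _ = cross_attn(query=feat_a, key=feat_b, value=feat_b)
fused_xattn = nn.LayerNorm(2 * C)(torch.cat([attended, feat_b], dim=-1))

# Each fused representation would then be pooled and passed to an MLP/FC head.
pooled = fused_xattn.mean(dim=1)                       # (B, 2C)
logits = nn.Linear(2 * C, 2)(pooled)                   # real vs. bogus
```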
\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline Model & Image Size & Parameters (M) & Validation Accuracy & Validation AUC & Test Accuracy & Test AUC \\ \hline ResNet-50 & 224 & 23.51 & 97.93 & 96.05 & 98.81 & 97.84 \\ ResNet-101 & 224 & 42.5 & 95.20 & 95.08 & 97.95 & 94.07 \\ ConvNeXt v2 nano & 224 & 14.98 & 97.87 & 95.99 & 99.19 & 96.77 \\ ConvNeXt v2 tiny & 224 & 27.87 & 98.40 & 96.39 & 99.35 & 90.68 \\ Swin Transformer v2 tiny & 256 & 28.35 & 98.52 & 98.10 & 99.34 & 96.46 \\ Swin Transformer v2 small & 224 & 49.73 & 98.28 & 97.92 & 99.14 & 95.89 \\ EfficientViT b1 & 224 & 7.50 & 95.50 & 96.28 & 98.56 & 93.86 \\ EfficientViT b2 & 224 & 21.77 & 95.56 & 95.51 & 98.60 & 94.27 \\ EfficientViT b3 & 224 & 46.09 & 96.21 & 96.14 & 98.82 & 95.76 \\ **TransientViT (ours)** & 224 & 30.89 & **98.58** & **98.72** & 99.44 & **97.88** \\ \hline \hline \end{tabular} \end{table} Table 3: Detailed comparison of different classification models. Figure 10: (a) Loss for the training and validation sets and (b) accuracy for the validation for TransientViT. Figure 9: (a) P-R and (b) ROC curves for TransientViT. #### 4.4.4 Cross-inference We evaluated the performance of cross-inference on the test set with random and unique sampling (Table. 6). Random sampling refers to the approach of randomly selecting three _NRD_ segments for a given instance, which may result in duplicate samples. On the contrary, unique sampling ensures that no _NRD_ segment across the whole image is sampled more than once, thereby guaranteeing uniqueness. Cross-inference outperformed regular inference in terms of both AUC and accuracy. ## 5 Conclusion In this study, we introduced TransientViT, a novel CNN-ViT hybrid real/bogus transient classification algorithm for KATS. We highlighted the limitations of existing CNN-based methods in describing low-level features and the need to improve the efficiency in capturing both local and global features. TransientViT employs convolutional layers to decrease image resolution and utilizes a hierarchical attention mechanism to model the global feature map. It overcomes the aforementioned limitations by combining the locality of CNNs and the global connectivity of ViTs. Extensive experiments showcased the superiority of TransientViT over various ViT- and CNN-based backbone architectures. TransientViT achieves an AUC of 0.97 and an accuracy of 99.44% for the KATS-T 200K test dataset. The ablation studies provided insights into the impact of different input channels, multi-input fusion methods, and cross-inference strategies on model performance. Our proposed adaptive cross-attention mechanism played a crucial role in achieving this impressive performance, effectively fusing spatial and temporal features. The utilization of a voting-based ensemble to combine three inference results further contributed to the reliability and robustness of the TransientViT model predictions. Upon the incorporation of TransientViT into the KATS pipeline, the number of transient candidates generated each night was reduced to approximately 1/10, thereby dramatically reducing the requirement for manual inspection in the real/bogus transient classification process. Moreover, the reduction in the number of transient candidates did not compromise the system's ability to accurately detect real transient events. We will continue our work to further reduce FPR in the future. There are defects in most of the images in the dataset (e.g., streaks caused by equatorial mount failure). 
To some extent, image quality limits the performance of TransientViT. We also plan to further reduce the number of parameters of TransientViT in future studies so as to boost its classification efficiency. The TransientViT code and pre-trained model are open source and available at [https://github.com/TimeDevBlocker/TransientViT](https://github.com/TimeDevBlocker/TransientViT). ## 6 Acknowledgements We wish to thank the Xinjiang Astronomical Observatory for providing data storage and hardware support for the Kilodegree Automatic Transient Survey. We also wish to acknowledge Quanzhi Ye for his advice on the development of the classifier and Xing Gao for his contributions to the Kilodegree Automatic Transient Survey. This study employed Astropy:2 a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al., 2013, 2018, 2022). Footnote 2: [http://www.astropy.org](http://www.astropy.org)
The detection and analysis of transient astronomical sources are crucial for understanding their temporal evolution. Traditional pipelines identify transient sources in difference images (D), obtained by subtracting a previously observed reference image (R) from a new science image (N), a process that involves extensive manual inspection. In this work, we propose TransientViT, a hybrid convolutional neural network (CNN) - vision transformer (ViT) model applied to the Kilodegree Automatic Transient Survey (KATS). TransientViT reduces the image resolution with CNNs and models global features with a hierarchical attention mechanism. We propose KATS-T 200K, a new KATS dataset that combines difference images with long- and short-term images. Taking this dataset as input, TransientViT
2301.03386
Need of 6G for the Metaverse Realization
The concept of the Metaverse aims to bring a fully-fledged extended reality environment to provide next generation applications and services. Development of the Metaverse is backed by many technologies, including 5G, artificial intelligence, edge computing and extended reality. The advent of 6G is envisaged to mark a significant milestone in the development of the Metaverse, facilitating near-zero-latency, a plethora of new services and upgraded real-world infrastructure. This paper establishes the advantages of providing Metaverse services over 6G along with an overview of the associated technical requirements. The paper provides an insight into the concepts of the Metaverse and the envisaged technical capabilities of 6G mobile networks. Then, the technical aspects covering 6G for the development of the Metaverse, ranging from validating digital assets, interoperability, and efficient user interaction in the Metaverse to related security and privacy aspects, are elaborated. Subsequently, the role of 6G technologies towards enabling the Metaverse is discussed, including artificial intelligence, blockchain, open radio access networks, edge computing, cloudification and the internet of everything. The paper also presents 6G integration challenges and outlines ongoing projects towards developing the Metaverse technologies to facilitate the Metaverse applications and services.
Bartlomiej Siniarski, Chamitha De Alwis, Gokul Yenduri, Thien Huynh-The, Gürkan Gür, Thippa Reddy Gadekallu, Madhusanka Liyanage
2022-12-28T05:13:09
http://arxiv.org/abs/2301.03386v1
# Need of 6G for the Metaverse Realization ###### Abstract The concept of the Metaverse aims to bring a fully-fledged extended reality environment to provide next generation applications and services. Development of the Metaverse is backed by many technologies, including 5G, artificial intelligence, edge computing and extended reality. The advent of 6G is envisaged to mark a significant milestone in the development of the Metaverse, facilitating near-zero-latency, a plethora of new services and upgraded real-world infrastructure. This paper establishes the advantages of providing Metaverse services over 6G along with an overview of the associated technical requirements. The paper provides an insight into the concepts of the Metaverse and the envisaged technical capabilities of 6G mobile networks. Then, the technical aspects covering 6G for the development of the Metaverse, ranging from validating digital assets, interoperability, and efficient user interaction in the Metaverse to related security and privacy aspects, are elaborated. Subsequently, the role of 6G technologies towards enabling the Metaverse is discussed, including artificial intelligence, blockchain, open radio access networks, edge computing, cloudification and the internet of everything. The paper also presents 6G integration challenges and outlines ongoing projects towards developing the Metaverse technologies to facilitate the Metaverse applications and services. Metaverse, 6G, AI, Blockchain, Edge Computing, Security, Privacy, vertical applications. ## I Introduction The term 'Metaverse' has been coined to further facilitate the digital transformation in every aspect of our physical lives [1]. The Metaverse is a virtual world where you can live a synchronous life through your avatar. The concept is similar to an online game; however, instead of shooting targets or driving cars, users will be engaged in real-life activities. These activities could include attending meetings, catching up with friends, attending music festivals, going door to door selling digital collectables, or buying and selling land, apartments or assets. Virtual interactive worlds or early Metaverses have already been introduced primarily in video games with releases such as Fortnite, Minecraft, Decentraland, and Ifland. This list is not exhaustive, and users are gravitating toward other Metaverse ecosystems that are emerging today. The Metaverse embraces social interaction accelerated through a virtual environment and driven by novel technologies such as Web 3.0, 5G, Artificial Intelligence (AI) and Extended Reality (XR). XR - which includes everything from Virtual Reality (VR) to Mixed Reality (MR) to Augmented Reality (AR) and haptics - has enormous potential to transform both industry and society. The widespread adoption of XR has recently been slowed by a number of issues, including the limited processing power, storage and battery life of small head-mounted displays (HMDs). 5G made it possible to overcome some of these challenges by offloading a portion of XR processing to the mobile network edge. In addition to this, the 5G QoS framework makes it possible to establish QoS flows that provide optimized network treatment for specific traffic flows, in addition to the default QoS flow used for mobile broadband (MBB). 
Such additional QoS flows can be established either using 5GC QoS-exposure application programming interfaces to communicate service requirements or by traffic detection together with pre-provisioned service requirements, such as relying on standardized 5G QoS identifier characteristics. Although the Metaverses have the potential to be transformational for both business and society, widespread adoption has previously been hindered by issues such as heat generation and the limited processing power, storage, and battery life of small form factor head-mounted devices. The time-critical communication capabilities in 5G make it possible to overcome only some of these challenges by offloading XR processing to the mobile network edge. By evolving the already existing 5G or B5G networks, mobile network operators are in an excellent position to enable the realization of the Metaverse on a large scale. The 6G aims to achieve high spectrum and energy efficiency, low latency, and massive connection due to the exponential growth of the Internet of Things (IoT) devices. 6G will also effectively link the physical, and digital worlds by providing seamless and ubiquitous services such as extreme-scale environmental monitoring and control, virtual reality/virtual navigation, telemedicine, digital sensing, and robotics. This will result in a network that connects us to one another, to information, to knowledge, and to purpose. As a result, 6G networks will enhance the efficiency of technologies such as computer vision, blockchain, AI, the IoT, robotics, and user interfaces which are critical for the metaverse realization. In summary, 6G will enhance every feature of the 5G network that benefits the user to improve areas such as smart cities, farming, manufacturing, and robots. 6G will provide enhanced productivity, capabilities, and better user experiences. The main use of 6G in the Metaverse is summarized below: **Near-zero-latency**: In virtual interaction, 6G will continuously provide users with a near-zero-latency sensory interconnection experience, such as the user's virtual movement in the Metaverse, virtual meetings, virtual paintings, and other interactive, immersive holographic experiences. **New services**: 6G is the main driver of multiple new service models. For example, 6G communication technology provides users with precise service models in autonomous driving, industrial control, e-health, Internet robots, and autonomous systems, bringing a more convenient lifestyle. **Upgraded real-world infrastructure available for use in the Metaverses**: 6G infrastructure mainly includes information infrastructure, fusion infrastructure, and innovation infrastructure. In particular, the 6G communication system integrates infrastructure such as ground, UAV, and satellite Internet. 6G also features high bandwidth, low latency, strong reliability, and global coverage. ### _Motivation_ The main motivation of this paper is to realize if mobile network operators can enable large-scale XR and at the same time further development of Metaverses by introducing time-critical communication capabilities in 6G networks. The 5G networks already contribute to considerable improvement in data rate, latency, and packet loss since the last network generation (4G) and users already enjoy comfortable viewing experiences. However, as the resolution of video increases from 4K to 8K and 12K/24K for 3D video and the number of worldwide users increases, the 5G network will not be sufficient to support many use cases. 
Some of the main cloud and network providers are defining the evolution of the service experience into the fair-experience, comfortable-experience, and ideal-experience phases [2], each with its own network KPI requirements to be met. Table 1 summarizes those KPI requirements based on different use cases envisaged to be a part of future Metaverses. In this work, we aim to establish and explain the main advantages of providing Metaverse services over 6G and provide an overview of the technical requirements. Furthermore, we aim to establish what role 6G will play in the operation of the Metaverse and whether the envisaged 6G architecture will be capable of supporting the upcoming technology portrayed by the tech industry. ### _Related Surveys and Contributions_ Our work is exclusively focused on the networking aspects of the Metaverse and the role that 6G will play in the Metaverse deployment. Though there are some Metaverse-focused surveys, we found that they lack a comprehensive and detailed discussion of the role of B5G/6G technologies, as indicated by Table 2. The table also includes the limitations of the related works in the context of technical challenges, security and privacy, and research directions, which we address in this paper. The surveys [3] and [4] investigate technical aspects and security threats comprehensively. However, those papers do not focus on future networks and the role of 6G in the Metaverse specifically. The surveys [5], [1] and [6] include an interesting view on the potential use of the Metaverse in different domains and clearly define network requirements. The limitations of [5], [1] and [6] include the lack of coverage of future network aspects and a weak discussion of security and privacy issues. Surveys [7] and [8] discuss implementation challenges and analyze the fusion of 6G-enabled edge with the Metaverse; however, security issues and research directions are only partially covered. Therefore, we address this gap with a comprehensive discussion of 6G for the Metaverse. ### _Paper Organization_ The rest of this paper is organized as follows. Introduction and discussion of the role of 6G networks in the Metaverse are presented in Section I. Section II covers the expected improvements from 5G to 6G and the impact they will have on the Metaverses. Section III investigates the state-of-the-art solutions provided by 6G for the Metaverse from a technical perspective, followed by Section IV that discusses in detail how different 6G technologies will help to achieve the Metaverse aims. Section V identifies expected 6G challenges that would have to be approached before the introduction of Metaverses to the wider community. Finally, Section VI provides an overview of related research projects. ## II 6G and the Metaverse: Preliminaries The preliminary introduction to 6G and the Metaverse is presented in this section, followed by the role of 6G in the Metaverse. ### _Preliminary to 6G_ Since the middle of 2019, commercial 5G mobile networks have been standardized and deployed globally, with significant coverage in some countries. Numerous new applications and use cases are being developed, putting existing networks' capabilities to the test. The capacity of current 5G networks to handle the Internet of Everything (IoE), holographic telepresence, collaborative robotics, and deep-sea and space tourism is limited [9]. 
This has prompted researchers to reconsider and work toward the development of the next generation of mobile communication networks, the sixth generation (6G). Each time mobile communication technology is upgraded and iterated, its performance metrics improve by a factor of ten to a hundred over the preceding generation [10]. Researchers from all over the world propose AI/machine learning (ML), quantum communication/quantum machine learning (QML), blockchain, terahertz and millimetre wave communication, tactile Internet, non-orthogonal multiple access (NOMA), small cell communication, fog/edge computing, etc. as the key technologies for the realisation of 6G communications. 6G aims to achieve high spectrum and energy efficiency, low latency, and massive connectivity to accommodate the exponential growth of IoT devices. 6G will make feasible intelligent traffic, environmental monitoring and control, virtual reality/virtual navigation, telemedicine, digital sensing, and high definition (HD) and full HD video transmission in connected drones and robotics. 6G will also effectively link the physical and digital worlds. This will result in a network that connects us to one another, to information, to knowledge, and to purpose. 6G wireless networks operate in the terahertz band, with a peak rate of 1 Tb/s and ultra-reliable and low-latency communication (URLLC) of less than 1 ms, considerably improving the overall quality of experience (QoE) for consumers [11]. 6G has a high positioning accuracy of 1 m outdoors and 10 cm indoors [12], which also improves positioning for deep-sea and space tourism. 6G utilises endogenous security technology to increase its resistance to unknown security threats [13]. As a result, 6G networks can enhance the efficiency of technologies such as computer vision, blockchain, AI, the IoT, robotics, and user interfaces [14]. To summarize, 6G will enhance every feature of the 5G network that benefits the user. 6G will improve areas such as smart cities, farming, manufacturing, and robots. 6G will provide enhanced productivity, capabilities, and better user experiences. Improved and expanded functionality is an inevitability over successive generations. TABLE I: Network KPI requirements for Metaverse use cases across the fair-experience, comfortable-experience, and ideal-experience phases.
Even with 6G, this will be the case. 6G will improve upon 5G by optimising and lowering costs to increase adoption. Information management and consumption will be simplified with the advent of 6G's new human-machine interfaces. Instead of touchscreens, the interfaces of the future will be controlled by voice instructions, gestures, or even brain signals. The comparison of the features of 5G and 6G is depicted in Table 3. ### _Preliminary to the Metaverse_ The Metaverse is a network of three-dimensional virtual environments dedicated to social interaction. It is frequently depicted in futurism and science fiction films. The worldwide virtual environment will be made possible by the usage of VR and AR devices [15]. The term "Metaverse" is not entirely unfamiliar in the technological world. Neal Stephenson coined the term "Metaverse" in 1992. His science fiction novel Snow Crash envisioned an online universe in which people may explore and escape the physical world using digital avatars [16]. Decades later, major technology firms like Meta, Axie Infinity, The Sandbox, and Decentraland have begun developing their versions of a futuristic Metaverse. The overview of the enabling technologies, services, and technical requirements is depicted in Fig. 2. \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Ref & \begin{tabular}{c} Technical \\ aspects \\ challenges \\ \end{tabular} & \begin{tabular}{c} Security \\ and \\ privacy \\ issues \\ \end{tabular} & \begin{tabular}{c} The role \\ of 6G in \\ Metaverse \\ \end{tabular} & \begin{tabular}{c} Research \\ directions \\ (6G) \\ \end{tabular} & Remarks & Limitations \\ \hline [5] & **H** & **M** & **H** & **M** & This paper aims to show the roadmap to the Metaverse in terms of communication and networking in 6G, including requirements (limited) and challenges for 6G to realize the Metaverse, and discussing the fundamental technologies to be integrated in 6G to drive the implementation of the Metaverse. & The paper investigates AI-based methods concerning six technical aspects that have potentials for the Metaverse: natural language processing, machine vision, blockchain, networking, digital twin, and neural interface, and being potential for the Metaverse. & The paper is missing some important references and the requirements are not discussed in detail except for what is depicted in Fig. 2. \\ \hline [3] & **H** & **M** & **L** & **M** & The security threats to the Metaverse are explained comprehensively. 
The paper includes the countermeasures to those threats. From the security and privacy perspective, it is a comprehensive survey. & The future networks enablers are not discussed in this paper. The paper provides good state-of-the-art survey, however it is lacking future directions especially in the networking aspect. & The paper does not cover any security or privacy issues or the importance of future networks in designing meta worlds. \\ \hline [4] & **M** & **H** & **L** & **M** & The security threats to the Metaverse are explained comprehensively. The paper includes the countermeasures to those threats. From the security and privacy perspective, it is a comprehensive survey. & The future networks enablers are not discussed in this paper. The paper provides good state-of-the-art survey, however it is lacking future directions especially in the networking aspect. & The paper does not cover any security or privacy issues or the importance of future networks in designing meta worlds. \\ \hline [7] & **H** & **M** & **L** & **M** & This survey changes how enablers of the Metaverse can be implemented at a scale at mobile edge networks. The implementation challenges are also discussed in this survey. & The survey mentions the role of B5G/6G, but it doesn’t cover how 6G will enable the Metaverse. \\ \hline [8] & **M** & **L** & **M** & **M** & This survey analyzes the fusion of 6G-enabled edge AI and the Metaverse, and introduces the features, architecture, technologies, and applications of the Metaverse. & The paper only partially covers privacy and security issues. The 6G requirements are discussed only to certain level and the 6G enablers are not covered in full. \\ \hline Our - #### 1.1.1 Enabling Technologies of the Metaverse The immersive experience of the Metaverse will be enabled by cutting-edge technologies such as blockchain, AR and XR, AI, and the IoT. * Blockchain: Blockchain technology enables decentralised and transparent digital proofs of ownership, collectibility, value transfer, governance, accessibility, and interoperability in the Metaverse [17]. Blockchain also enables individuals to exchange assets while working and interacting in the Metaverse. * Extended reality: XR enables the creation of 3D computer-rendered virtual environments in the Metaverse. XR allows users to interact with these virtual goods through head tracking devices or physical controls [18]. As XR technology matures, it will be able to broaden the Metaverse experience by including physical simulations using XR equipment. Users will then have the ability to sense, hear, and interact with people from all around the world. * Artificial intelligence: AI will allow users of the Metaverse to construct incredibly realistic avatars and have multilingual accessibility. AI will help people make better decisions in the Metaverse. A better human-computer interface will also be provided by AI [19]. AI can also help detect, prevent, and recover from cyber attacks in the Metaverse. * Internet of things: IoT will enable the Metaverse to map data from real life and emerge it into virtual reality [20]. The data provided from the IoT devices to the Metaverse will help professionals to solve real-world problems. The Metaverse with the help of IoT will support the users to collect real-time data-driven decisions with minimum need for training and computation. * Edge Computing: Edge computing enables mobility and the border-less Metaverse for the users [21]. 
Edge computing improves the availability of data in the Metaverse by bringing data closer to end consumers for retrieving and storing data at remote data centre locations. Edge computing will help data transferring with ultra-reduced latency in the Metaverse which will help the users to make quick and effective decisions. #### 1.2.2 Applications of the Metaverse The Metaverse has made its way into many sectors, capturing the enthusiasm of entrepreneurs across the world. The Metaverse will have a huge impact on applications like healthcare, real estate, manufacturing, tourism, Entertainment, and shopping. * Healthcare: Smart healthcare has contributed to resolving several healthcare difficulties, including linking patients to doctors situated throughout the world during the COVID-19 epidemic. This prepared the door for the application of the Metaverse in healthcare, which is facilitated by medical IoT, VR, and AR [22]. The Metaverse gives users control over how the virtual and physical worlds interact. This enhances doctors' ability to provide consistent and customised patient care. Through the use of VR technology, the Metaverse can aid in remote health monitoring, clinical data collection, and improved robotic surgery, while 3D immersive technology will enable surgeons to practise through \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Features** & **5G** & **6G** \\ \hline Data Rate & 1 Gbps to 20 Gbps. & 1 Tbps. \\ \hline Application Types & Enhanced Mobile Broadband, Ultra-Reliable Low Latency Communications.Massive Machine Type Communications. & Massive Broadband Reliable Low Latency Communication. Massive-URLLC Human-Centric Services. & Broadband Reliable Low Latency Communication. Massive-URLLC Human-Centric Services. \\ \hline Device Types & Smartphones, Drones, and Sensors. & Sensors & DLT devices,BCI and XR equipment, CRAS, and Smart implants. \\ \hline Frequency Band & Sub 6 GHz and mm wave for fixed access. & Sub 6 GHz mm wave for mobile access exploration of THz bands. Non-RF bands. \\ \hline Latency & 5 ms & \(<\)1 ms \\ \hline Architecture & Dense sub 6 GHz smaller BS5 with umbrella macro BSs. & Cell free smart surfaces at high frequencies. Temporary hotspots provided by drone-mounted BSs. Trials of tiny THz cells. \\ \hline Spectral and Energy Efficiency & 10 x in bps/Hz/m\({}^{2}\). & 1000 x in bps/Hz/m\({}^{2}\). \\ \hline Traffic Capacity & 10 Mbps/m\({}^{2}\). & 1 to 10 Gbps/m\({}^{2}\). \\ \hline Reliability & 10\({}^{-9}\). & 10\({}^{-9}\). \\ \hline Localization Precision & 10cm. & 1cm in 3D. \\ \hline User Experience & 50 Mbps. & 10 Gbps. \\ \hline Mobility & 500 km/h & 1000 km/h \\ \hline Connection density & 10\({}^{6}\) devices/km\({}^{2}\) & 10\({}^{7}\) devices/km\({}^{2}\) \\ \hline \end{tabular} \end{table} TABLE III: The comparison of 5G and 6G Features Figure 2: Preliminary to the Metaverse simulations that will raise their success rate in the real world. * Real Estate: The Metaverse allows organisations to establish retail and experience centres on its virtual land [23]. Rather than downloading many applications, users can access the Metaverse, which contains all currently available applications. As a result, the value of virtual land will increase. Property ownership in the Metaverse is limitless, and owners are free to use their virtual holdings. Digital property owners can make, run, lease, and build billboards for advertising. 
* Manufacturing: Manufacturers can create digital factories in the Metaverse to assist in organising production and effective usage of machinery. This allows simulation of human-machine interaction throughout the manufacturing process [24]. As a result, firms can use virtual production systems to train new employees and staff on how to use them in the real world, which would boost the manufacturing of products with a very low error rate. The metaverse also allows mass personalization of the product and allows the user to track the product from its development to delivery, which will improve the trust of the users in the organization. * Tourism: The Metaverse has the potential to create the most immersive experiences ever seen in the tourism sector. The Metaverse allows hotel chains, tourism boards, online travel agencies, and other businesses to advertise their services [25]. Users can virtually visit those locations and decide whether or not to visit them in person. They can go through two distinct locations without leaving their homes, comparing and evaluating places through the use of 3D imagery. The Metaverse will give users an experience that will be better than any kind of communication that exists in the present day, including video and audio interaction. * Entertainment: The Metaverse will completely revolutionise entertainment with its rich 3D environment and avatars. Entertainment's growth is highly linked to the development of VR in the Metaverse. The Metaverse-based entertainment, including movies and video games, can be enjoyed in a virtual world that users can access from the comfort and privacy of their home [26]. It also allows users to attend virtual concerts and sporting events from first-row seats or to ride virtual roller coasters at theme parks. * Shopping: Customer experiences in the Metaverse will evolve constantly as a result of XR technologies, and organisations selling products in metamalls will have more creative freedom to express themselves and attract customers than they do in traditional shopping. These spaces will encompass much more than the basic services seen on the majority of e-commerce sites today, including user engagement, avatar customization, event attendance, and skill acquisition [27]. Furthermore, the products sold in the Metaverse will include both physical and virtual items. Consumers may feel and touch the object with the use of sensors, which will completely alter the traditional buying experience. Additionally, customers can purchase things on the go while engaged in real-world activities. #### 3.2.3 Technical Requirements of the Metaverse Privacy, security, storage, and interoperability are the important technical requirements of the Metaverse. * Privacy: The Metaverse is a social platform that employs interactive technology such as VR, AR, and AI that requires sensitive user data. Since behavioral-learning and recommender systems collect vast quantities of personal information, they pose a threat to the privacy of the Metaverse users [28]. Therefore, the use of such technologies poses a substantial risk to the privacy of users' data. The Metaverse must guarantee privacy protection for such sensitive information, and users must have complete control over their data, which will increase their trust in the Metaverse. Even though blockchain technology can help protect privacy in the Metaverse, there are no specific rules designed for privacy protection in the Metaverse, which makes it a critical requirement. 
* Security: In the Metaverse, attackers and AI bots can and will emerge from any location and at any time. The Metaverse networks should have a high level of security, and related protocols to incorporate continuous awareness into these networks [5]. In addition to existing passwords, multi-factor authentication, enhanced firewalls, and advanced threat detection technologies, the Metaverse must be incorporated with higher transparency and analysis to detect anomalies and uncover malicious activities to maintain user security. Data must be secured and safeguarded during transmission and storage. To assure the security of the Metaverse in the future, it is vital to draw on and build upon information from the past. * Storage: The Metaverse is a collection of technologies. It is a huge concept which involves the simultaneous integration of multiple technologies. The list includes high-performance networks, sophisticated computing and sensors, hardware, AR/VR, AI, 3D modelling, blockchain, IoT, and many other technologies. The data produced from these technologies and their related application will be enormous [29]. The formation of the Metaverse itself necessitates voluminous data storage. Decentralised storage based on blockchain technology can be used to store this massive amount of data. This storage distributes data to numerous independent network nodes using open-source applications and algorithms. It also improves the privacy of data, the redundancy of data backups, and the transparency of data in the Metaverse. * Interoperability: Interoperability across services, technology, and virtual worlds is a crucial aspect of the Metaverse [30]. A cross-chain protocol is an optimal approach for maintaining interoperability between diverse Metaverse services, technologies, and configurations. Among other protocols, this one permits the exchange of assets like avatars, non-fungible tokens, and currency. To make the Metaverse more interoperable, different devices that use the same technology need to follow the same rules and standards. ## III 6G for the Metaverse: Technical Perspective This section investigates the state-of-the-art solutions provided by 6G for the Metaverse from the technical perspectives, including validation of digital assets, cross-platform integration, efficient support of AI, privacy and security, high-speed data connection, content creation and storage, user interactivity, low latency communication, and computer vision. ### 6G for Validation of Digital assets in the Metaverse Non-fungible Token (NFT) is one of the key components of a digital asset in the Metaverse. A visual asset, such as a virtual building, can be represented by an NFT that can be represented as a digital asset with unique data coding. When a purchaser buys an NFT, a private digital key password will be generated, that can certify that the purchaser owns a particular digital asset. Through this private key, the owner of the NFT can sell or transfer the digital asset to others [31]. The blockchain associated with the specific Metaverse will record the NFT for a digital asset in the Metaverse. For instance, the Ethereum blockchain stores "Decentraland Metaverse", which is highly popular. Ethereum records the digital asset transactions of the NFT for Decentraland [32]. Digital assets in the Metaverse can be created in the form of NFTs by the users. 
These digital assets can be anything ranging from virtual goods to digital art to virtual real estate, which is minted into the NFTs that are securely stored on the blockchain. The owners of these digital assets can see these digital assets which are in the form of NFTs in the form of crypto for purchasing other digital assets in the Metaverse [33]. Content creators and artists can afford to have an opportunity to monetize their assets uniquely due to NFTs and blockchain. For instance, artists need not depend on auction houses or galleries for selling their art. The artists can sell their art directly to the customer in the form of an NFT that allows them to keep the majority of the profits. The artists can even receive royalties whenever their art is sold to a new customer. Like art, the other digital assets also can be traded through NFTs in the Metaverse [34]. The process of creating NFTs and transferring them from one virtual world to another requires a network that is highly reliable and secure. Digital assets in the Metaverse, represented by NFTs are verified and tracked on the blockchain. Every NFT will have a unique transaction hash that makes it non-replicable. All the transactional data related to the NFTs collected by the blockchain and are stored in blocks, that forms a blockchain. The information stored in the blockchain is stored forever and can be viewed and verified by the public. Verification of the digital assets in the Metaverse that has AR and other MR technologies incorporated, needs significant amount of bandwidth to create a more immersive experience add also to reduce the load times. The validation and verification of the digital assets in the blockchain incurs heavy computation in the blockchain, which needs significant bandwidth so that the users can see the results in near real-time, as depicted in Fig. 4. The transactions between the different entities in the Metaverse are also powered by the consensus mechanism of the blockchain, which requires huge amounts of data transfer between nodes. This creates a requirement for a network that is both transparent and capable of real-time communication. These challenges faced during the creation, transfer, and validation of digital assets in the Metaverse can be solved by 6G due to its low latency, reliability, transparency, and high-speed connectivity [35]. ### 6G for Cross Platform Integration/interoperability in the Metaverse One of the hurdles in realizing the full potential of the Metaverse is the lack of interoperability. Lack of interoperability [36, 37] is a major hurdle in a mass adaption of the Metaverse that makes the navigation of the users free from one Metaverse application to the other challenging. The Metaverse should mimic the interoperability that is experienced in the physical world. For instance, in the real/physical world, we can take physical assets/objects from one place to another easily. The users in the Metaverse too should be able to navigate seamlessly and freely to other Metaverses. This is possible through interoperability that can form a global interconnected Metaverse where various Metaverses are integrated across the platforms as experienced in the real world [38]. Realization of interoperability in the Metaverse is a significant challenge as heavy objects such as digital avatars, 3D holograms etc. have to be navigated across in feature-rich Metaverse in near real-time. It requires a communication infrastructure with high bandwidth and low latency. 
6G network with its high bandwidth and ultra-reliable low latency communication infrastructure can solve the issue of seamless communication in the Metaverse, as depicted in Fig. 5. With the help of supporting technologies like ORAN and ZSM, the 6G network can be the common platform that provides an interoperable infrastructure for multiple Metaverses. Network slicing, software-defined networking, symbiotic radio, and network function virtualization are the 6G techniques that promote network interoperability and agility in the Metaverse. Intelligent collaboration between diverse wireless signals is supported by symbiotic radio. The SDN/NFV offers open interfaces that facilitate interoperability between several Metaverses and assist produce network slices for any vertical application such as gaming and shopping over the common physical infrastructure among different Metaverses [39]. ### 66 for Efficient Support of AI in the Metaverse The Metaverse is a virtual world where the users will play games, interact with each other and the 3D objects in the virtual world, and build things in the virtual world. VR and AR along with blockchain and AI are the key enabling technologies in realizing the Metaverse. The applications of AI in Figure 4: 6G for Validation of Digital Assets in the Metaverse Figure 5: 6G for Cross Platform Integration/interoperability in the Metaverse Figure 3: 6G for the Metaverse:Technical Perspective the Metaverse include speech processing, content analysis, computer vision, etc [3]. These applications of AI can be used to help build important components of the Metaverse as discussed below: **Avatars:** Avatars is one of the important and interesting concepts of the Metaverse, where people in the physical world will create a digital avatar in the virtual world. People would like to get creative in the virtual world and they would like to see themselves in a different way which may not be possible in the physical world. They can change their clothing, hairstyle, and body language, which is not their regular norm in the real world. AI plays a major role in the users designing their avatars in the virtual world. AI can be used to analyze 3D scans or user images to create accurate, realistic, and innovative avatars [40]. Some organizations such as Ready Player Me are making use of AI to create avatars for the Metaverse. **Digital Humans:** In the Metaverse, 3D chatbots are termed as digital humans. The digital humans respond and react to the actions of humans in the virtual world. They are usually non-playing characters that can be a character in a game of virtual reality whose actions and responses depend on a set of rules or automated scripts. They try to understand what the users are communicating by listening and observing them. Human-like interactions and conversations can happen in the Metaverse between humans and digital humans through body language and speech recognition [41]. AI plays a significant role in the successful implementation of digital humans in the Metaverse. Some of the key functionalities of digital humans like speech recognition, body language identification, object detection, etc. can be realized through AI [6]. **Language Processing:** In the Metaverse users from across the globe can communicate and interact easily without language barriers. This is made possible with the help of AI. AI can break the human language such as English into a format that can be read by machines. 
The AI can then analyze the sentences, and respond to the users in their language [42]. From the above discussion, it is obvious that AI plays a significant role in the realization of some of the key features of the Metaverse. In the Metaverse, huge volumes of heterogeneous big data will be generated at a very fast rate. 6G, with its characteristics such as fast communication infrastructure, and near real-time processing, can help in processing/analyzing this big data to uncover the patterns existing in the data that trains the AI/ML algorithms in near real-time to make quick decisions/predictions through which several components of the Metaverse can communicate easily. ### 6G for High Speed Data Connection in the Metaverse The wide adaption of AR and VR technologies is the key to the transition to the Metaverse. It is expected that data usage to be increased by 20 times to what is being used today due to the revolution of the Metaverse by 2022. To realize the full potential of the Metaverse with real-time experience of AR and VR technologies, truly immersive 3D experiences. The end-users should be able to access high-speed data connections that can deliver the data at speeds of approximately 1 Gbps [43]. Some of the key requirements that will be needed to realize the true potential of the Metaverse are as follows: * To create virtual reality worlds in real-time, high-speed data connection is required. * The communication infrastructure should high-speed transmission in near real-time with very low latency, typically, below 10 milliseconds. * The existing 4K video resolution may not be sufficient to convey the pixels for creating immersive worlds. Higher-resolution videos have to be supported by the data carriers. * Next-generation video compression techniques that can compress and decompress huge data files in the Metaverse in real time are the need of the hour. The key features of 6G with high bandwidth and URLLC [44] promise is a key enabling technology to realize the high bandwidth requirement of the Metaverse. The use of Edge AI-enabled 6G can also help applications and the Metaverse devices address these issues. Edge AI is the combination of edge computing and AI to run machine learning tasks directly on connected edge devices. Edge AI computes and processes data locally, which helps the Metaverse devices be efficient and responsive in their communication. This also reduces the amount of data sent from the Metaverse devices to the cloud, thereby saving a huge amount of bandwidth. ### 6G for Efficient User Interaction in the Metaverse The Metaverse enables the interaction between real-world entities and virtual objects. It is a digital environment that incorporates social networking, real estate, online gaming, AR, VR, and cryptocurrencies. In the Metaverse, with the help of virtual objects, sounds, and other sensory input, AR tries to enhance the user's perception of the real world [5]. Each time a user enters the Metaverse, the objects around them undergo a dynamic transformation based on the requirements. Figure 6: 6G for Efficient User Interaction in the Metaverse Everything in the Metaverse is constantly changing, which indicates the dynamic nature of the Metaverse. Changes to a physical object are mirrored in its virtual counterpart in the Metaverse because of their digital twins, which are linked to real-world objects. People can also change objects by interacting with them. 
Instead of just looking at digital objects in the Metaverse, users will be able to experience a place and interact with them [45]. The creation of new objects will require complex inputs and will demand high-quality user interaction with the objects in the Metaverse. The Metaverse poses three crucial challenges for effective user interaction, as depicted in Fig. 6:

**Interacting with existing objects:** Users' physical interactions with these virtual worlds are an important consideration [46]. For the Metaverse to persist, this is a fundamental challenge that must be overcome. When the user is unable to control the interaction, they will stop using it immediately. When a user is completely immersed in a virtual world and finds themselves unable to perform a task that they could do in the real world, they become frustrated and annoyed.

**Modifying existing objects:** As technology gets better and the real world keeps changing, the Metaverse objects will need to be changed to make them seem more real [47]. Realistic objects need more precise modelling algorithms, just like real faces. Even in the Metaverse, where scenes and avatars are always changing and interacting, objects have to be changed all the time to reach this level of realism.

**Creation of new virtual objects:** The Metaverse is a virtual 3D universe composed of virtual 3D objects [48]. The Metaverse requires the creation of immersive experiences based on real-world artefacts to accomplish its objective of combining the digital and physical worlds. In the Metaverse, a lot of digital objects will need constant sensor inputs from their physical counterparts to produce this realistic immersive experience for the users. The Metaverse will also enable its users to create virtual objects by providing them with various tools and applications. As a result, it creates a huge requirement for bandwidth, which is a challenge to achieve with the present technology.

From the above discussion, it is obvious that efficient user interaction plays a significant role in the creation, interaction, and modification of digital objects in the Metaverse. This requires massive input from real-world objects. 6G's URLLC and real-time processing abilities will aid in the building of a highly immersive 3D environment in the Metaverse.

### 6G for Low Latency Communication in the Metaverse

Low latency communication is the capability of the communication network to deliver large quantities of data with minimal delay and high accuracy. These networks are designed to support operations that require access to rapidly changing data in real time. Advanced technologies like self-driving cars, holographic telepresence, remote surgery, deep-sea and space tourism, and other AR and VR innovations are becoming part of the Metaverse [49]. For instance, we have become accustomed to virtual communication using Zoom, Skype, Microsoft Teams, and other platforms. Future developments in VR and AR are well on their way to making an office where people can talk to each other in a fully immersive way. This integration of advanced technologies into the Metaverse creates a huge demand for next-generation networks with enhanced bandwidth and reduced latency. The present network infrastructure cannot provide the bandwidth and latency required for the Metaverse and its applications. The capacity of current 5G networks to handle the IoE, holographic telepresence, collaborative robotics, and deep-sea and space tourism is limited.
These applications require multiple terabytes of bandwidth as they depend on real-time inputs from the real world. From the discussion, it is clear that the Metaverse necessitates the highest network positioning accuracy and multiple terabytes of bandwidth. The 6G network, with its advancements like greater use of the distributed radio access network (RAN) and the terahertz (THz) spectrum to increase capacity and improve spectrum sharing, will provide the effective and low-latency communication required for the Metaverse [50].

### 6G for Computer Vision in the Metaverse

Figure 7: 6G for Computer Vision in the Metaverse

Computer vision is the study of how computers perceive and interpret digital images and videos. Computer vision encompasses all activities done by biological vision systems, including seeing or sensing a visual signal, interpreting what is being seen, and extracting complicated information in a form usable by other processes [51]. Using sensors, computers, and machine learning algorithms, this multidisciplinary field replicates and automates essential parts of human vision systems. The objective behind computer vision is to develop artificial intelligence systems that can see and understand their surroundings. In the Metaverse, computer vision plays an important role in enabling humans to experience the virtual environment, as depicted in Fig. 7. Through the use of digital avatars, VR and computer vision provide a near-to-lifelike experience in the Metaverse [52]. In order to connect to this virtual world, the user needs to use XR devices, which are built on the foundation of computer vision. XR applications rely heavily on computer vision. Visual information in the form of digital images or videos is often processed, analyzed, and interpreted with the help of computer vision. This helps people make effective decisions in the Metaverse. As a result of computer vision, VR and AR environments can be built that are more accurate, trustworthy, and user-friendly than their real-world counterparts. Human position tracking is a computer vision challenge that tries to figure out where people are located in an environment that is constantly changing. In the Metaverse, the healthcare, military, construction, manufacturing, education, and retail sectors will rely largely on computer vision. For example, doctors can improve surgical processes and study data from 3D scans in real time using computer vision. Computer vision will assist doctors in detecting, diagnosing, and treating potential diseases and enable them to examine patients from anywhere in the world [53]. Computer vision in the Metaverse will evolve at an accelerated rate, and even 5G cannot compete with the rapidly evolving technological requirements of the Metaverse's computer vision capabilities. Computer vision requires the continual collaboration of heterogeneous devices to provide immersive experiences for the users; this requires uninterrupted network service and symmetric uploading and downloading speeds, so that users can quickly upload all their content while concurrently downloading the content of others. 6G supports a higher number of device connections, which is crucial for computer vision in the Metaverse to deliver its fully immersive services to customers [54]. The independent frequency, higher data transmission rates, and large coverage of 6G will enhance the QoS of computer vision in the Metaverse.
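To make the computer-vision pipeline above more concrete, the following minimal Python sketch shows how a pretrained object detector could be applied to a single frame captured by an XR headset. This is an illustrative example only, not a method from the cited works; the model choice, the file name `headset_frame.jpg`, and the confidence threshold are assumptions, and the pretrained-weights flag may differ across torchvision versions.

```python
# A minimal sketch (illustrative, not from the surveyed works) of running a
# pretrained object detector on one video frame from an XR headset.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

def detect_objects(frame_path: str, score_threshold: float = 0.7):
    # Load a COCO-pretrained Faster R-CNN (the weights flag name varies by torchvision version).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open(frame_path).convert("RGB")
    with torch.no_grad():
        predictions = model([to_tensor(image)])[0]

    # Keep only confident detections; each entry is (label id, score, bounding box).
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(
            predictions["labels"], predictions["scores"], predictions["boxes"]
        )
        if score >= score_threshold
    ]

if __name__ == "__main__":
    for label, score, box in detect_objects("headset_frame.jpg"):  # hypothetical frame file
        print(f"class={label} score={score:.2f} box={box}")
```

In a Metaverse setting, the detections returned by such a model could feed higher-level tasks such as scene understanding or avatar interaction, as discussed above.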
### _6G for high transaction integration scalability_ To date, the Metaverse implementations used a centralized cloud-based approach for avatar physics emulation and graphical rendering. The centralized design is unfavourable as it suffers from several drawbacks caused by the long latency required for cloud access. Further deployments of Metaverses will also bring scalability issues to the physical layer due to the increased number of computing tasks mainly generated by extremely demanding applications. The traditionally deployed centralized architectures are unlikely to support a large number of Metaverses and users, so the introduction of de-centralized Metaverse systems including frameworks and protocols is inevitable. There are several approaches that can be taken, starting with leveraging Mobile Edge Computing (MEC) technology. For example, [55] proposed the blockchain-based MEC architecture, where base stations allocate their computation and communication resources for providing video streaming and the use of a series of smart contracts enables a self-organized video transcoding and delivery service without a centralized controller. Using the MEC more efficiently will not fulfil the requirements in full, so the decentralized architecture will have to further distribute the communication and computational cost among different nodes present in the virtual space. The concept of de-centralizing the Metaverse applications was presented by authors of Solipsis [56] - a system that allows the adaptive streaming of 3D models including avatars, sites and objects in a completely decentralized fashion. In general, to overcome challenges related to a high number of transactions, massive resource demands and scalability concerns a novel framework should be proposed to address those emerging challenges for the development of future Metaverses. In such framework, the Metaverse Service Provider (MSP), which is a service provider that offers applications or services such as games, conferences or concepts should be able to get paid for provided services and in addition to this, the MSP should be allowed to negotiate with the Metaverse User (MU) to use MUs computational resources in return for discounts or rewards. The blockchain, which is provided by the MSP can contain all interactions between the MSP and MU in terms of transactions. The MetaChain [57] describes a similar concept that could serve basis for future deployments. In this particular proposal, the blockchain shards are used to allow the MUs to contribute their computational resources to the Metaverse application provided by the MSP. This is done in exchange for digital assets such as tokens or service access. With this approach, more users can be attracted to a particular Metaverse. However, attracting users to contribute resources is going to be particularly challenging. The reason is that the service provider will not be able to directly allocate the user resources to the shards. Sharding is so far one of the most practical solutions to achieve a scale-out system where the processing, storage, and computing can be conducted in parallel. As such, the capacity and throughput being linearly proportional to the number of participating nodes or the number of shards become possible, while preserving decentralization and security. The consideration has to be taken when creating shard-based systems as users (by human nature) will aim to maximize their profits and concentrate resources on the shards that pay more. 
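As a toy illustration of the shard-based scale-out idea discussed above, the sketch below deterministically maps users (or transactions) to shards by hashing their identifiers and shows why aggregate capacity grows roughly linearly with the number of shards. The shard count and per-shard throughput figures are assumed values for illustration, not measurements from MetaChain.

```python
# A toy sketch of hash-based shard assignment and linear capacity scaling.
import hashlib

def assign_shard(user_id: str, num_shards: int) -> int:
    # Deterministically map a user (or transaction) to a shard by hashing its id.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % num_shards

def estimated_capacity(num_shards: int, tx_per_shard_per_sec: int = 1_000) -> int:
    # Shards process transactions in parallel, so aggregate throughput scales with shard count.
    return num_shards * tx_per_shard_per_sec

if __name__ == "__main__":
    for user in (f"avatar-{i}" for i in range(6)):
        print(user, "-> shard", assign_shard(user, num_shards=4))
    for shards in (1, 4, 16):
        print(f"{shards} shard(s): ~{estimated_capacity(shards):,} tx/s")
```

The incentive problem noted above remains: in a real deployment the assignment policy, not the users, must decide which shard receives each contribution of resources.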
Nevertheless, whichever form such a framework takes, a pay-per-share protocol is required to off-load computational workload onto the Metaverse user devices.

### 6G for Security and Privacy Protection in the Metaverse

Metaverses should offer their users an extraordinary immersive experience in virtual environments, such as entertainment games and smart cities, using their enabling technologies. The Metaverse can track users' physical actions and physiological responses and may expose confidential information about their habits and physiological nature to third parties. If hackers get their hands on such sensitive information, it could lead to harassment and theft of digital assets, which could make users lose faith in the security and privacy of the Metaverse. These issues can be addressed by utilizing privacy-protection technologies like "Private Copy" and "Clone Cloud", as depicted in Fig. 8. The creation of private copies and clone clouds is dependent on high connectivity and continuous integration with the Metaverse environment. The edge intelligence facilitated by 6G can support the needs of these technologies in the Metaverse. The use of a blockchain-based digital twin wireless network and an edge computing federated learning architecture can further enhance the users' privacy and data security [8]. Together with 6G, AI can optimize connectivity while also enabling traffic prediction and improving security in the Metaverse. To avoid information breaches, physical layer communication may use a machine learning-based antenna design. Machine learning and quantum encryption can also be used to protect the security of communication devices in the Metaverse. The Metaverse's security may be increased by using early warning systems and AI-enabled 6G to identify network anomalies. The use of distributed and federated AI in a 6G network also eliminates the necessity for data sharing across the Metaverse devices, which preserves the privacy of the users.

## IV. Role of 6G Technologies for the Metaverse

6G will play a key role in the Metaverse operation since such an environment requires pervasive connectivity for full-fledged and omnipresent Metaverse immersion. Essentially, very high bitrates and ultra-low delay are crucial for a satisfactory Metaverse experience. An important factor in this performance is the smart management of connectivity resources/services, scalable infrastructure, and very low latency communications. Therefore, Edge AI and cloud infrastructure are necessary for efficient and performant handling of relevant use cases in the Metaverse. Edge AI is an important enabler since it facilitates AI-driven optimized network management and minimizes delay with distributed and close-to-the-user computing paradigms. This technology will be compounded with the AI-native design of 6G, which will be embedded in numerous functions ranging from physical layer control to service management. Furthermore, the flexibility and scalability required of the network and service environment require moving towards cloud-native technologies, which can also form telco clouds for a more efficient and scalable Metaverse infrastructure in the backend. In the cyber-physical domain, which is another aspect of the Metaverse concerning 6G, IoE and robotics will play a key role. Additionally, 6G will have the essential toolbox to enable AR/VR, which is critical since the Metaverse will be the main vessel for the AR/VR experience. An appropriate immersive experience in the Metaverse will be possible with those technologies enabled by 6G communication and computation functions.
As a transversal technology similar to AI, blockchain can also support the distributed and open nature of the Metaverse and enable the transferability of digital assets, which will be an important capability for the Metaverse use cases. A depiction of these technologies and their roles is provided in Fig. 9, and the summary of all related works is presented in Table 4.

### _AI_

#### 1) Introduction

Based on the combination of many advanced technologies, the Metaverse should be built to convey a groundbreaking immersive experience to users, in which AI has played a vital role in the foundation and development of the Metaverse regarding numerous aspects, including core services and applications. Besides the responsibility of ensuring the reliability of the Metaverse's infrastructure, AI can help developers in designing and building a smarter and more beautiful virtual world, and further allows users to acquire hyperreal creations using built-in tools. In 6G systems, numerous challenging tasks and applications can be solved and enabled by advanced ML algorithms with different learning mechanisms (i.e., supervised learning, unsupervised learning, and reinforcement learning) to achieve high performance and low latency. Especially, DL, with the advantage of effectively learning complex patterns from large and messy practical datasets, will be the key technology to polish many aspects of the Metaverse, from the intelligence of AI agents and virtual assistants (a.k.a. chatbots) to the visual quality of 3D worlds [58]. Indeed, the presence of AI in the Metaverse can be realized in the interactions between a user (represented by an avatar) and other objects (e.g., non-player characters) by automatically analyzing sensory data for multiple tasks, such as speech recognition and understanding, facial expression analysis, body movement tracking, and gesture recognition.

Figure 8: 6G for Security and Privacy Protection

Besides, AI can be applied to preserve users' identities and their digital assets from cyberattacks, especially in the scenario in which the Metaverse is built with a cross-chain bridge.

#### 2) How 6G AI can help, on which features

In the Metaverse, natural language processing (NLP) plays an important role in deploying intelligent virtual assistants (including chatbots) [59], which helps the Metaverse comprehensively understand what users are typing and saying, from simple sentences to long and complicated conversations over unlimited topics, to smooth user interaction accordingly. Empowered by AI with ML and DL algorithms, chatbots can immediately respond to the users and adapt to an environment with reinforcement learning to consolidate operation and improve the performance of the overall virtual assistant system [60]. In the NLP domain, language modelling aims to predict linguistic components in sentences and paragraphs by mining syntactic and semantic relations of words and phrases, and is commonly developed for machine translation and text recommendation. Several advanced language modelling methods have exploited DL with RNN, LSTM, and CNN architectures to improve overall system efficiency and have addressed many fundamental tasks [61], such as identifying long-term dependency in long sentences in complicated scenarios, and recognizing hyphenated words, misspelt words, suffixes, and prefixes.
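The following minimal PyTorch sketch illustrates the kind of next-token language model referred to above. It is a generic example rather than any of the specific architectures in the cited works; the vocabulary size, embedding dimension, and hidden dimension are placeholder assumptions.

```python
# A minimal next-token language model sketch (generic illustration only).
import torch
import torch.nn as nn

class TinyLanguageModel(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, sequence_length) integer word indices.
        hidden_states, _ = self.lstm(self.embed(token_ids))
        return self.head(hidden_states)  # logits over the vocabulary at each position

if __name__ == "__main__":
    model = TinyLanguageModel()
    batch = torch.randint(0, 10_000, (2, 12))     # two toy sequences of 12 tokens
    logits = model(batch)                         # shape: (2, 12, vocab_size)
    targets = torch.randint(0, 10_000, (2, 12))   # next-token labels (random here)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, 10_000), targets.reshape(-1))
    print("example cross-entropy loss:", float(loss))
```

A production virtual assistant would replace the random tensors with tokenized dialogue data and typically use larger, bidirectional, or attention-based variants such as those surveyed above.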
Especially, language modelling should be taken into consideration for different popular languages, such as English, Chinese, Spanish, and French [62, 63], to attract as many users as possible from all over the world to join the Metaverse. Some advanced structures in deep networks, such as the bidirectional LSTM, the bidirectional gated recurrent unit (GRU), and channel-wise attention connections, have been leveraged to deal with challenging problems such as sentiment analysis, question type classification, and answer identification over multiple sentences [64, 65], which accordingly improved the readability, interpretation, and rationality of virtual assistant agents. Some other specific AI-based NLP tasks (e.g., context retrieval, semantic notation, and named entity recognition) can be considered to uplift text-based and speech-based user interactive experiences in the Metaverse.

Commercial headset devices with VR/XR technology have been designed to bring 3D viewing experiences to users, including high video quality (i.e., high resolution and high frame rate) and wonderful wearing comfort, thanks to the advancement of AI. In [66], an eye fixation prediction method was introduced for gaze-based applications (e.g., video rendering and content visualization), in which a DL framework with hierarchical CNN architectures was exploited to process different data types, including VR images, gaze data, and head data collected by wearable sensors. Some recent works have studied advanced ML and DL algorithms to precisely identify periodic behaviours of users of VR gear (e.g., gaming controllers and head-mounted displays) for automatic identity authentication and health issue detection [67].

Figure 9: 6G key technologies and their roles for the Metaverse.

Some deep networks with CNN architectures have been designed to assess the quality of images/videos (e.g., color saturation, brightness, resolution, and frame rate) displayed on the screen of VR devices, and then automatically adjust screen settings to optimize the visualization based on video contents and user conditions [68]. Along with the VR/XR technologies, computer vision is one of the most important sectors for building a beautiful virtual world in the Metaverse and enabling it to be more intelligent from the user's viewpoint, thanks to the adoption of AI, especially DL, in recent years [69, 70, 71]. Many sophisticated CNN architectures have been designed for different fundamental image processing and computer vision tasks, such as object detection, semantic segmentation, and scene understanding [72, 73]. For instance, atrous convolution was introduced by DeepLab [74] for semantic segmentation to capture more meaningful features by enlarging the receptive field of kernels, enhancing the learning efficiency of a deep network while keeping a small network size. Static and dynamic objects can be detected and located in the virtual worlds accurately by several recently advanced DL models to provide useful information to users and be synthesized for higher-level tasks like scene understanding and detailed captioning. Some image/video quality distortion problems, such as blurring, noise, and frame corruption, can be addressed effectively by AI technology to guarantee the high-class visual perception of a user when experiencing the Metaverse [75].
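As a concrete, deliberately simplified illustration of such quality-assessment networks, the sketch below defines a small CNN that regresses a perceptual-quality score in [0, 1] for a rendered VR frame. The layer sizes and the score scale are assumptions for illustration and are not the models of [68].

```python
# A minimal CNN sketch that scores the perceptual quality of a rendered VR frame.
import torch
import torch.nn as nn

class FrameQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Sequential(nn.Flatten(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, frames):
        # frames: (batch, 3, H, W) RGB frames; output: quality score in [0, 1].
        return self.score(self.features(frames))

if __name__ == "__main__":
    net = FrameQualityNet()
    fake_frames = torch.rand(4, 3, 128, 128)   # four random 128x128 frames
    scores = net(fake_frames).squeeze(1)
    print(scores)  # low scores could trigger a brightness/resolution adjustment on the headset
```

In practice such a regressor would be trained on frames labelled with subjective quality ratings before being used to drive automatic screen-setting adjustments of the kind described above.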
In addition, the activities, including single actions and interactions, of users and non-player characters in the Metaverse can be automatically detected and recognized by AI-powered human pose estimation and action recognition methods [76]. Some convolutional networks have exploited cutting-edge layer structures, such as dense connections, skip connections, and attention connections, to estimate complex human poses and classify grouped activities while handling other challenges like varying viewpoints, object occlusion, and complicated actions in practice. For example, generative statistical models and hybrid LSTM-CNN architectures are suggested in [77] to precisely examine pose transitions in the spatiotemporal domain, thus increasing the accuracy of action recognition and activity classification.

To protect the Metaverse from different cyberattacks, and especially to protect users' digital goods and cryptocurrency assets, many advanced ML algorithms and DL models can be deployed in multiple layers (e.g., the network and services layers) of the Metaverse's platform for intrusion detection [78, 79, 80], in which various malicious attacks can be automatically and accurately detected and classified to immediately provide an efficient security solution. In [81], a holistic security method with sufficient assurability and explainability was proposed to quickly and sequentially detect abnormalities and time-dependent abnormal events in IoT systems and software-defined networks, in which zero-bias neural networks are transformed into performance-assured binary abnormality detectors to increase detection accuracy while presenting the lowest latency based on false alarm constraints. In the effort to exploit the DL denoising autoencoder (DAE) for many fusion security problems, many variants of the DAE, including the stacked autoencoder, stacked sparse autoencoder, stacked noise autoencoder, stacked contractive autoencoder, and deep belief network, were benchmarked in [82] for performance comparison on different practical intrusion detection datasets. Reinforcement learning (RL), with the capability of learning environmental factors to adapt learnable parameters, was also exploited to deal with different types of cyberattacks (such as malware, denial-of-service attacks, and man-in-the-middle attacks) [83].

Figure 10: The roles of AI for the development and advancement of the Metaverse.

In addition, privacy preservation in the Metaverse should be uplifted comprehensively with the help of AI to ensure that there are no leakable risks and threats to users' big data in the virtual world. For instance, a privacy-aware and asynchronous DL-based method was introduced in [84] to maintain the confidentiality of data among different collaborative data collection sites. In [85], an optimal centralized privacy-preserving aggregate mobility data release mechanism was proposed to minimize data and information leakage, in which deep RL models and the Asynchronous Advantage Actor-Critic algorithm are combined to optimize the privacy-preserving method. The above-mentioned privacy-preserving DL-based methods can be recommended for the Metaverse to combat information leakage threats and adversarial attacks effectively.

#### 3) Summary

In the Metaverse, AI has provided a plentiful foundation and driven development in numerous aspects, helping to construct a more beautiful virtual world with intelligent and secure services, thus bringing a wonderful experience to users.
Several advanced ML algorithms and DL architectures have been deployed to take care of the comfortableness of VR users, and the interaction between users with virtual assistants, and automatically provide useful information about the virtual worlds to users. Besides some popular domains like NLP and computer vision, AI has great potential for deployment in other sectors: protecting users' digital assets from hackers, early detecting intrusions for data security and privacy preservation, improving the performance of URLLC with intelligent MEC, enhancing the intelligence of AI agents in real-time strategy and fighting games, and analyzing mental state with the brain-computer interface as illustrated in Fig. 10. Although some advanced ML and DL models can conduct a high performance in many detection and classification tasks, they represent black boxes that lack the capability of explainability and interpretability. Therefore, there remains room for AI research and development in the Metaverse. ### _Blockchain_ 1) Introduction In the Metaverse, data privacy and virtual asset (e.g., cryptocurrency and NFT) security of users should be guaranteed as the top priority. In this context, blockchain technology represents a promising solution with many unique features at once, for example, decentralization, transparency, and immutability. Fundamentally, blockchain is an innovative technology that permanently records transactions in a decentralized and public database so-called a ledger [86]. Although all transactions are transparent (i.e., being available to check by anyone), the decentralized recording system of blockchain is very difficult to fool or control. Some blockchains like Ethereum and Solana are programmable through smart contracts with different consensus mechanisms, such as proof-of-work and proof-of-stake, which can meet high-security requirements of e-commerce platforms and enable the revolution of the digital ecosystem in the Metaverse, especially supporting virtual asset payment and trading activities. A smart contract on the blockchain could be used to establish the ownership of any digital object, such as artwork and music, over NFT specialized by unique and nonreplaceable (i.e., no one else can claim the ownership of that digital product on the blockchain even if they have a copy version on computers). The role of blockchain in the Metaverse relies on ensuring data privacy and security, enabling seamless and secured data sharing, data interoperability and integrity with some common applications and services, such as decentralized finance and NFT market [87]. Besides that, blockchain allows digital goods to be tradable safely in a virtual world and enables the connection of physical objects to the Metaverse over NFTs. Notably, if two virtual worlds are interoperable, the blockchain has to authenticate the proof of ownership of digital goods in both virtual worlds. Indeed, blockchain bridges the real world and the virtual world besides playing as the gateway for users to access the Metaverse. #### 3.2.2 How 6G BC can help, on which features Data acquisition is one of the most fundamental processes to build the virtual world in the Metaverse, which collects big data from different modalities. Notably, the sensitive data collected from users to train AI models for several special modules (such as decision-making of virtual assistant, recommendation system, digital product development, and automated market maker) in the Metaverse should be secure. 
For secure and large-scale environment data acquisition, the work in [88] proposed a blockchain-based system which is specialized by one valuation layer to assess the quality of acquired data, one consensus layer to encourage and incentivize high-quality data acquisition, and one ledger layer to record transactions and qualified environmental data. In [89], a blockchain-based efficient data collection and secure data sharing mechanism was introduced for reliable industrial IoT systems. This mechanism has exploited the Ethereum blockchain to maximize the amount of acquired data and the deep reinforcement learning algorithm to obtain highly secure and reliable shared data. To guarantee the users' privacy in crowdsourcing systems, Li _et al._[90] designed a blockchain-based decentralized framework for data collection and sharing. There were three standard smart contracts on blockchain executed for the whole process of data acquisition to achieve such crowdsourcing information as task posting, receiving, and assignment. The proposed method was implemented and verified on an Ethereum test network with real-world data, which demonstrated usability, feasibility, and scalability to be suitable for distributed crowdsourcing systems. Although blockchain technology can ensure highly secure and reliable data supplied to the Metaverse, its drawback is low latency due to the complicated and distributed nature of processing transactions with smart contracts and consensus mechanisms like PoW. Besides, the high transaction fee is also a realistic barrier for a low-income user to experience the Metaverse. In a large-scale Metaverse platform, data storage should be taken into consideration seriously because of the high velocity, big volume, and complicated variety of big data from a plentiful number of applications and services deployed in virtual worlds [91]. There exist many underlying risks, such as leakage, tampering, and loss if the Metaverse is built on a platform with centralized storage systems. Some sensitive data like biometric login data of the user (e.g., face and touch identification on iPhone) can become the target of cyberattacks to steal virtual assets. To overcome the above-mentioned issues of centralized systems, the work in [92] proposed a large-scale secured IoT data storage scheme by exploiting blockchain miners to manipulate IoT data stored in distributed hash tables (DHTs). In the blockchain system, a certifiateless cryptography scheme was applied to reduce redundancy in traditional public key infrastructure and authenticate IoT devices, where the generated public key pairs are broadcasted to all devices with verification done by the blockchain miners. In [93], the time-series data in the Metaverse was stored in a locality-aware auditable decentralized storage ecosystem that was designed and managed thanks to the advancement of blockchain technology. Some data storage systems with recovery functionality have been developed to effectively address multiple problems, such as low integrity, high cost, and easy tempering. Liang et al. [94] introduced a secure blockchain-based data storage scheme, wherein the incoming data packets are verified with smart contract and consensus mechanism, and then checked to early detect any threats before being stored on a decentralized system. Notably, when distortion occurs to the stored data, multiple nodes in the blockchain network can repair it successfully. 
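A toy hash-chained ledger helps illustrate the tamper-evidence property that such blockchain-based storage schemes rely on: changing any stored record breaks the chain of hashes and is detected on verification. The sketch below is illustrative only and omits consensus, networking, and the repair mechanisms discussed above.

```python
# A toy hash-chained ledger showing tamper detection (illustrative only).
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": record_hash(record, prev_hash)})

def verify_chain(chain: list) -> bool:
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash or block["hash"] != record_hash(block["record"], prev_hash):
            return False
        prev_hash = block["hash"]
    return True

if __name__ == "__main__":
    chain: list = []
    append_block(chain, {"sensor": "hmd-42", "reading": 0.93})   # hypothetical IoT records
    append_block(chain, {"sensor": "glove-7", "reading": 0.18})
    print("valid before tampering:", verify_chain(chain))        # True
    chain[0]["record"]["reading"] = 0.99                          # simulate tampering with stored data
    print("valid after tampering:", verify_chain(chain))         # False
```

Real systems such as those cited above add distributed storage (e.g., DHTs), smart-contract verification, and node-level repair on top of this basic integrity check.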
As the mixture of numerous digital realms, the Metaverse demands manipulating and processing big data that is acquired from incompatible infrastructures for different purposes, in which the standardizations of data for different applications and services in the virtual worlds are dissimilar. This reveals a serious concern about data interoperability when expanding the Metaverse with an interconnection capability among different virtual worlds. To ensure interoperability between different virtual worlds in the Metaverse, building a cross-chain protocol or an inter-blockchain bridge becomes a promising solution in many specific domains like healthcare and e-commerce [95, 96, 97]. A blockchain bridge is a protocol connecting two economically and technologically separate blockchains (such as Bitcoin, Ethereum, Avalanche, Solana, and Polygon) for interactions, and it acts like a physical bridge linking the ecosystem of one blockchain with another. As a result, blockchain bridges enable what is called interoperability, meaning that digital assets and data hosted in Metaverses built on different chains can interact with each other [98]. Besides, blockchain bridges allow users to access new protocols on other chains and encourage collaboration between developers from different blockchains, thus promoting a virtual economy in the Metaverse.

Figure 11: The roles of blockchain for ensuring the security and privacy of data acquisition, data sharing, data storage, and data interoperability in the Metaverse.

A novel blockchain framework, namely BiiMED, was introduced in [95] to uplift the data interoperability and integrity in electronic health records (EHR) sharing systems. The proposed framework facilitated the exchange of medical data on EHR systems between different medical providers and healthcare institutions with a decentralized trusted third-party auditor for interoperation validation. Some recent cross-chain protocols [99, 100] have been introduced to interconnect multiple blockchains for secure data utilization and management while obtaining full interoperability. In [99], a cross-chain interactive decentralized access model was designed with a gateway to read the information on multiple chains and route cross-chain transactions, and a notary network with an interplanetary file system and BigchainDB to verify and confirm each transaction based on a voting mechanism. Such cross-chain protocols allow users to easily buy, sell, and trade virtual assets among different digital worlds without any intermediate tools, and consequently encourage the adoption of the Metaverse. Along with interoperability, data integrity has also received much attention in the Metaverse, in which blockchain technology has been considered to verify and protect data integrity in decentralized cloud computing systems [101, 102]. In the Metaverse, a user can freely interact and trade virtual goods (including cryptocurrency and other virtual assets like NFTs) with a virtual assistant and other users via decentralized exchanges (DEXs) integrated into the Metaverse to promote the development of decentralized finance (DeFi). As an ecosystem of financial applications built on blockchain networks, DeFi enables easy access to financial services, facilitates traditional financial systems, and has a modular framework with interoperability with public blockchains.
Recently, GameFi, a fusion of the words game and finance, refers to play-to-earn blockchain-based games with economic incentives for players, which is being developed and integrated into the Metaverse. A GameFi ecosystem uses cryptocurrency, NFTs, and blockchain technology to create a virtual gaming environment, where various GameFi ecosystems built on different chains can be involved in the Metaverse owing to chain bridges. In this context, many privacy issues arise (e.g., the leakage of user identity and other personal information that can be stolen for illegal purposes), which can be effectively handled by blockchain technology with its immutability [103]. In a blockchain-powered Metaverse, third-party intermediaries are not permitted to manipulate the data of other parties. In [104], a blockchain-enabled crowdsourcing approach was proposed to deal with privacy preservation in mobile environments, where users can access the Metaverse using mobile devices. In secure 5G and 6G communication networks [105], blockchain was exploited to minimize privacy breaches by completely integrating authentication mechanisms with blockchain-based identification systems.

#### 3.2.3 Summary

With the distinctive features of decentralization, immutability, and transparency, blockchain technology has promoted the development and advancement of the Metaverse, where it plays an important role in any Metaverse platform with great contributions in many technical aspects, including data acquisition, data storage, data interoperability, and privacy preservation. Besides ensuring the privacy of sensitive information and security in trading activities (e.g., buying and selling cryptocurrency, NFTs, and other virtual assets), blockchain has shown great achievements in revolutionizing users' immersive experience, boosting economic growth, and attracting new users to the Metaverse via numerous blockchain-aided applications and services supplied in the virtual worlds. However, several challenging issues remain in concurrently attaining security, scalability, and decentralization when the Metaverse must serve a huge number of users and process a rapidly increasing number of transactions. Consequently, many research topics for optimizing blockchain for the Metaverse should be continuously explored in the future, such as consensus algorithms, blockchain interoperability, smart contracts, and network management.

### Edge Computing and Edge AI

#### 3.2.1 Introduction

The Metaverse is envisaged to map and simulate all our daily life activities in cyberspace at a huge scale while enriching such mapping with an immersive and interactive user experience. Cyber-physical and digital twin applications will also be integrated with the Metaverse application to offer realistic cyber representations of the physical world. In the ICT infrastructure, there will be the Metaverse engine, which performs the computations required to run virtual universe simulations, carrying out computationally heavy tasks such as collision detection in the virtual universe and computation of 3D physics, as well as other aspects of the virtual universe that demand high computational power [106]. The Metaverse is striving to connect billions of users and create a shared world where virtuality and reality merge [8].
Therefore, users interact in the physical-virtual world with the characteristics of diversification of information, identities, and modes under the requirements of ultra-low latency, massive resource demands, interoperability between applications, and security and privacy issues [107]. The promised Metaverse operation will require extremely low latency with highly elastic and omnipresent compute and storage resources. The latency and processing challenge for the Metaverse is in line with what is expected with 6G edge computing realization: For the Metaverse extended-reality computations to be offloaded, the entire process must be shortened so that input from the user device, a network trip, processing by the service, a return network trip and drawing the output on the user device fits in the 20ms time taken by a single network trip today [108]. Cloud-based processing for Metaverse operation can be unfavourable as it suffers from several drawbacks caused by the long latency required for cloud access, such as low-quality visualization in XR. 6G enables real-time, ubiquitous, and ultra-reliable communications for massive Metaverse devices with support for device mobility, which can reach 1020 Gbps [109]. To this end, Fog Computing [110] and Mobile Edge Computing [111] have been proven effective to tackle the issues faced by cloud-based systems, by moving the computational heavy load near the end-user and distribute it among edge devices; such approach can significantly reduce the latency and optimize the system performance. Furthermore, there is the cost-benefit: such an approach it would drive down the cost of XR devices and allow mass adoption. Verizon has estimated that any more than 20ms of motion-to-photon (total stack) latency causes many users to become nauseated; for comparison, well-built wireline broadband networks today typically have 20ms of network latency alone, and typical LTE latencies are 3x higher. Therefore, edge computing is an important technology. For instance, Zhang et al. [112] introduced the MEC into the Metaverse to improve the quality of users' experience. Xu et al. [7] discussed the potentials of AI, Edge computing and blockchain for ubiquitous, seamless access to the Metaverse. Similarly, Lim et al. [113] present the infrastructural architecture required for the Metaverse with a special focus on the convergence of edge intelligence and the infrastructure layer of the Metaverse. 6G-enabled edge intelligence opens up a new era of Internet of Everything and makes it possible to interconnect people-devices-cloud anytime, anywhere. In this context, industry, and academia have developed a new learning paradigm, Edge Artificial Intelligence (Edge AI) [114], which allows AI models to be deployed on devices and perform real-time data processing and model inference. 6G mobile communication technology provides edge AI with lower latency, more stable network connection, and more secure network architecture. Edge AI with 6G is expected to be applied to solve problems such as high bandwidth and high connection density in the Metaverse. However, the Metaverse still faces many challenges, such as users' privacy, network latency, and resource allocation issues. Moreover, the Metaverse places higher demands on the current edge AI architecture. As mentioned above, 6G edge intelligence has the advantages of low latency, computing offload, and high performance [115]. 
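The 20 ms motion-to-photon budget mentioned above can be made concrete with a simple back-of-the-envelope sketch comparing offloading to a distant cloud against offloading to a 6G edge node; the per-stage times used here are illustrative assumptions, not measurements.

```python
# A back-of-the-envelope sketch of the motion-to-photon budget for offloaded XR rendering.
def fits_budget(uplink_ms, processing_ms, downlink_ms, render_ms, budget_ms=20.0):
    total = uplink_ms + processing_ms + downlink_ms + render_ms
    return total, total <= budget_ms

if __name__ == "__main__":
    scenarios = {
        "distant cloud": dict(uplink_ms=15, processing_ms=8, downlink_ms=15, render_ms=4),
        "6G edge node":  dict(uplink_ms=2,  processing_ms=8, downlink_ms=2,  render_ms=4),
    }
    for name, stages in scenarios.items():
        total, ok = fits_budget(**stages)
        print(f"{name}: {total:.0f} ms -> {'within' if ok else 'exceeds'} the 20 ms budget")
```

The arithmetic makes the point of the preceding paragraphs: with wide-area round trips alone consuming most of the budget, only edge placement of the computation leaves headroom for processing and rendering.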
Overall, the application of 6G-oriented edge intelligence has the benefits of balanced data storage, efficient data transmission and high reliability. #### 4.2.2 How 6G EC and Edge AI can help the Metaverse As noted above, a high-speed and low-latency network connection and ubiquitous access to services is an important foundations for improving the user experience in Metaverse. Otherwise, issues such as visual jitter or delay and other undesirable phenomena might lead to the subbar performance of Metaverse. In that regard, to reduce network latency, an incentive mechanism framework for VR services was proposed in [116], which uses perceived quality as a criterion for measuring immersive experience and effectively evaluates the immersive experience in the Metaverse. [117] presents a novel MEC-based mobile VR delivery framework that is able to cache parts of the field of views (FOVs) in advance and compute certain post-processing procedures on demand at the mobile VR device. Jiang et al. [118] found that coded distributed computing (CDC) can improve the latency problem in the Metaverse and proposed a CDC and dual blockchain distributed collaborative computing framework. However, the computing, communication, and storage shortage will seriously affect the user's immersive experience. For the resource allocation problem, a new blockchain-based framework called Metachain was proposed in [87] which uses Stackelberg game theory analysis to propose an incentive mechanism, i.e., users obtain corresponding rewards by providing resources to blockchain shards. Based on the intelligent 6G edge network, a machine learning framework was proposed in [119] for decentralized learning and coordination of edge nodes to improve resource allocation strategies. For Edge AI and its applications in 6G, there are various challenges which are investigated by the research community. Edge AI paradigm and its applications still have the following issues that need to be optimized [8]: - High Latency: Since edge AI generally involves thousands of remote devices and needs to transmit and process massive amounts of data [120, 121], the high latency issue in the current network environment has always been one of the bottlenecks hindering the wide application of edge AI [122, 123]. - Fragile Stability: In edge AI, the training of large-scale models often requires powerful computing power and stable network connections, especially the training of large language models [124]. However, the current network environment is only suitable for the training of small-scale models [125]. This is due to the fragility of the network connection leads to the failure of large-scale model training. - Low Security: The current network architecture no longer meets the security needs of thousands of remote devices connecting to cloud servers today [120]. Furthermore, the openness of the network further challenges the security of the current network architecture. These issues are expected to be exacerbated with the utilization of Edge AI in 6G for the Metaverse applications. For instance, in [8], Chang et al. propose a self-balancing federated learning-based Metaverse framework to address the statistical heterogeneity faced by edge-cloud architectures. Besides, in [126], Lu et al. proposed a blockchain-based digital twin wireless network (DTWN) edge computing federated learning framework to solve the problem of user privacy data security. 
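To illustrate how federated learning keeps raw data on the devices, the following NumPy sketch runs a few rounds of federated averaging (FedAvg) over three simulated edge devices. The linear model, learning rate, and synthetic data are assumptions chosen purely for illustration; this is not the self-balancing framework of [8] or the DTWN framework of [126].

```python
# A minimal federated-averaging (FedAvg) sketch: local training, then weighted averaging.
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        gradient = features.T @ (features @ w - labels) / len(labels)  # least-squares gradient
        w -= lr * gradient
    return w

def federated_average(updates, sample_counts):
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(updates, sample_counts))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    global_w = np.zeros(2)
    # Three edge devices, each holding private data that never leaves the device.
    devices = []
    for n in (30, 50, 20):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.05 * rng.normal(size=n)
        devices.append((X, y))

    for _ in range(10):
        updates = [local_update(global_w, X, y) for X, y in devices]
        global_w = federated_average(updates, [len(X) for X, _ in devices])
    print("estimated weights after 10 rounds:", np.round(global_w, 3))
```

Only model parameters cross the network in each round, which is precisely why this learning paradigm is attractive for the privacy-sensitive Metaverse data discussed above.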
#### 4.2.3 Summary The integration of edge computing and realization of edge AI in 6G will provide various capabilities as well as challenges for the Metaverse. The key benefit is related to latency minimization needed for superb Metaverse user experience and pervasive services. Similarly, the inherent Edge AI support in 6G will also serve the Metaverse for smart edge services leading to better Metaverse services and device simplicity and flexibility. However, the potential benefits of 6G edge technologies should be supported with relevant research for improving on the aspects such as smart resource allocation, security, and privacy-preserving AI techniques. ### _6g Open RAN_ #### 1 Introduction Radio Access Network (RAN) is a very important component of a mobile communication system that can link individual devices like mobile phones or terminals to other parts of the network using cellular radio. RAN coordinates the resource management in the cellular network across the radio sites. RAN can send the signal from a mobile device that is connected wirelessly to the core/backbone network to several endpoints in the wireless network, thereby, enabling the signal to travel along with the traffic generated from other networks. A RAN will typically comprise of base stations, that can transmit and receive signals to and from mobile devices in the network. The signals are then digitized in the RAN-based station and are connected to the network. RAN contains radio units (RU), distributed units (DU), a centralised unit (CU), and the RAN Intelligent Controller (RIC) for the creation of an effective mobile communication system. RAN is very important for meeting the low latency and high-speed internet connectivity requirements of real-time applications [127]. RAN requires manual intervention if any network issues arise in software or connecting devices. Pointing out the cause and the origin of these issues in the network by the mitigation experts is difficult as RAN is black-box in nature. The process involved in the mitigation of these network issues requires significant cost and time, subsequently affecting the overall quality of the network. This necessitates the creation of open, intelligent, virtualised, and fully automated interoperable RAN for the next generation 6G networks [128]. Open RAN (ORAN) is one such technology that integrates AI to optimize radio resources and also automates the management and operations of infrastructure. 6G ORAN integrated with AI can be used to implement Self-Organizing Networks (SON) and Radio Resource Management (RRM) solutions that improve network coverage, capacity, handover, and interference. They could also be used to increase the spectral efficiency of massive MIMO systems by optimising their performance. AI/ML can also enhance the user experience through VoLTE/video quality optimization, terminal anomalies detection, and other Quality of Service/Quality of Experience (QoS/QoE)-based use cases [129]. The use of ORAN gives mobile network operators (MNOs) the flexibility to provide 6G connectivity in a cost-effective, secure, and energy-efficient way. The openness of ORAN also enables the MNOs a unique ability where all the vendors can share the RAN functionalities [130]. As a result, it avoids vendor lock-in by replacing vendor-proprietary interfaces with a fully disaggregated RAN based on open standards. 
#### 2 How 6G Open RAN can help the Metaverse The users navigate in the Metaverse frequently with the help of technologies such as AI, XR, digital twins, IoT, etc. As a consequence, the Metaverse demands continuous connectivity with sensors, hardware devices, and many other peripherals for providing high-quality and immersive services to the user. Any disruption in the network connectivity of these devices will cause the users extreme discomfort and make them feel that the surroundings are out of their control [131]. The standards for the Metaverse are substantially more demanding than those for the vast majority of internet applications in the present day. The current capacity of MNOs to handle the network requirements of devices connected to the Metaverse is rather questionable. This presents a challenge in the adaptation of the Metaverse. To solve these issues ORAN in 6G is a potential solution.ORAN in 6G with its AI, automation, and fully disaggregated open standards will enable the Metaverse to be cost-effective, secure, and energy-efficient way. Let us consider the application of the Metaverse in the healthcare domain. The Metaverse allows healthcare professionals to have better interactions with patients who are in different demographic locations, such as viewing a three-dimensional model of the human body while discussing diagnoses and treatments. This would allow doctors to simulate the effect of a proposed treatment on a patient's body before its application, creating a more personal and informative experience than is currently possible with two-dimensional images displayed on a screen. VR, AR, and MR technologies are currently being used for medical training and surgical procedures, These enabling technologies of the Metaverse demand reliable connectivity. If any failure of software or hardware occurs in the network at the time of medical intervention it will lead to serious catastrophic situations. ORAN in 6G enables devices to relay on multiple MNO, so, this will ensure the medical devices connected to the Metaverse with much reliable connectivity. The remote medical surgeries supported by the Metaverse require real-time insights. The network supporting these devices must be faster in recovering from the related issue and failures. The ORAN in 6G will provide the Metaverse with zero-touch network and service management capabilities which will automatically resolve the raised issues related to the network faster than the traditional RAN. The vital monitoring devices connected to the Metaverse require a latency-free and cost-efficient network. These devices connected to the Metaverse will be greatly benefited by ORAN service management and orchestration platform in 6G. ORAN service management and orchestration platform in 6G is an intelligent automation platform that applies automation reduces the complexity of networks, improves network performance, and enhances the customer experience in the Metaverse which minimizes ORAN operational costs, as depicted in Fig. 12. In the Metaverse, the possibilities of what can be created and purchased are nearly limitless. Users can purchase avatar skins, hairstyles, clothing, and accessories, as well as virtual land and property. Cryptocurrency and digital wallets will play a role in the Metaverse payments. Blockchain-based cryptocurrencies in the Metaverse or a crypto wallet are required to store and transport digital assets purchased in the Metaverse as well as between the virtual worlds. 
Digital wallets will be an alternative payment method that enables users to purchase digital goods securely. Thus, the number of transactions occurring in the Metaverse will be limitless. Any breach or critical update to the network will interrupt or halt these transactions and may affect the QoS/QoE of the customer in the Metaverse. ORAN in 6G will be less dependent on hardware, which will reduce the risk associated with automated upgrades or isolated security breaches. The enhanced modularity available with open interfaces makes it easier for operators to serve the Metaverse through continuous integration/continuous delivery of services. Every trade or purchase that occurs in the Metaverse is recorded as a transaction, which results in huge network traffic because the data has to be stored on multiple peers. ORAN in 6G helps the Metaverse with better traffic management and also determines where to send traffic across the network. ORAN in 6G together with AI enables the Metaverse to predict network conditions, such as congestion, so the controller can find an optimal path to send traffic over. This provides the users of the Metaverse with valuable insights about the network.

#### 4.4.3 Summary

ORAN in 6G, with features like openness, better security, enhanced resource sharing, improved traffic management, and zero-touch network and service management, provides the Metaverse with a network that is fast, reliable, cost-effective, automated, and intelligent. This will help the Metaverse applications and services to be real-time. ORAN in 6G will help the users in the Metaverse with high-quality immersive experiences. Issues related to network software updates or threats will not affect the transactions in the Metaverse, as ORAN in 6G is secured and depends less on hardware compared to the traditional RAN. ORAN in 6G allows AI to easily analyze the network and provide valuable insights for the Metaverse to persist. Though ORAN in 6G provides better network capabilities to the Metaverse, it still faces challenges related to widespread adoption, technical support difficulties, system integration problems, and security risks.

### 6G cloudification and cloud-native technologies

#### 4.4.1 Introduction

A key aspect of 6G networks will be the cloud-native design of the overall ecosystem. With the actual realization of the Metaverse, the cloud, infrastructure, and telecom companies will have to provide a fully immersive Metaverse experience, challenging servers with a compute task 100x harder than that of an AAA game server hosting Fortnite today, and telecom access networks with 24x more traffic within a decade [108]. To address these compute-storage requirements, state-of-the-art Metaverse architectures rely on a cloud-based approach for core Metaverse functions such as avatar physics emulation and graphics rendering computation. Specifically, XR places extraordinary demands on networks; with the cloud-native design of 6G networks, the on-board computation needed to eliminate dependency on external computing devices can still be delivered on simpler, lighter, and cheaper end-user devices if computationally intensive tasks are offloaded to a cloud computing instance. The Metaverse leads to a clear need for cloud computing, considering the amount of storage and processing required to support a virtual reality universe: compute, storage, and network loads [132]. As more performance and detail are demanded, remote cloud-based computers will become a necessary, cost-effective way to solve that problem.
Cloud computing technologies will be heavily exploited in two dimensions. First, by the Metaverse providers themselves, whether built with private data centres or with managed services; due to their advantages, these compute- and graphics-intensive systems will largely be built on public cloud providers. Another option is to provide on-demand access to compute and storage using pay-as-you-go models, which can be done by public cloud providers with points of presence distributed globally. However, there is also the latency dimension: navigating the Metaverse smoothly through VR technology depends mainly on the network latency. As VR technologies are delay-sensitive and require very short latency, communication with the Metaverse servers plays a pivotal role, which leads to telco clouds, where this concept is embedded in the telco network itself.

Figure 12: The role of 6G Open RAN for the development and advancement of the Metaverse.

For example, the validation of Non-Fungible Token (NFT) trading transactions requires tremendous computational power. This challenge is also valid for other Metaverse applications, such as data processing of digital twin applications or AI-enabled services like storytelling and recommendation services that empower the virtual universe simulation [106]. Current state-of-the-art Metaverse implementations perform the computation on the cloud, which may limit the simulation capacity and increase access latency. Moreover, there are several independent and fragmented Metaverses that rely on different hardware and software technologies. Since the service providers would like to exploit the advantages of controlling users' Metaverse data in their servers, we may end up with isolated Metaverses rather than a universal Metaverse. Additionally, due to capacity limitations, the number of users that can access each region may be limited by the cloud service provider's local computational and communication capacity. Such limitations defeat the purpose of a virtual world, which is supposed to accommodate avatars as much as the real-world location can physically accommodate people. Mobility support is also crucial since the Metaverse will be a pervasive experience. The cloud can also help there, as proposed by [106]. In this context, they propose a distributed architecture that can achieve a universal Metaverse and solve the computational bottleneck. The advantage of the layered architecture is twofold. Firstly, the users control their data, which enables organizations to access a universal Metaverse, rather than multiple separated Metaverses. Secondly, the computational bottleneck is resolved by distributing the computational cost of heavy tasks.

#### 4.2.2 How 6G cloudification can help the Metaverse

The real-time interactive nature and high demands on data storage, streaming rates, and processing power of Metaverse applications will accelerate the merging of the cloud into the network, leading to highly distributed, tightly-integrated compute- and data-intensive networks becoming universal compute platforms for next-generation digital experiences [133]. For instance, Google Stadia [134] and Nvidia GeForce Now [135] instead offload such rendering tasks to a remote compute cloud, allowing the highest level of quality on weaker devices such as smartphones. Such offloaded services are, however, less latency- and loss-tolerant, as they must provide satisfying responsiveness to inputs. To an even greater extent than AAA video games, VR and MR are highly computationally intensive.
#### 4.5.3 Summary
Cloud computing technologies, their adoption by telecom operators as telco clouds, and the cloud-native design of 6G have important implications for the Metaverse. First, they allow elastic Metaverse services which can be dynamically deployed and provisioned. Moreover, the Metaverse is expected to be a federated entity where different service providers, applications and users are present. Cloud computing enables such an environment, where different Metaverse apps can easily reside together and integrate. Moreover, efficiency gains via consolidation and infrastructure sharing are possible. 6G clouds can support ultra-scalable compute and storage for spatiotemporal changes in the Metaverse services. However, the trade-off between latency and cloud centralization is an important research topic [133].
### 4.6 6G IoE
#### 4.6.1 Introduction
The growth of IoT applications results in an increasing number of IoT devices, which is expected to grow to 24 billion by 2030 [136]. Furthermore, the total IoT market will also grow to USD 1.5 trillion in 2030. The dawn of the Internet of Everything (IoE) is envisaged to expand the IoT paradigm to weave a hyper-connected network of not only things but also data, people, and processes [137]. Therefore, IoE is expected to integrate "Everything" for connecting, identifying, monitoring, and making intelligent decisions towards realizing new applications and services. IoE will connect many ecosystems involving heterogeneous sensors, actuators, user equipment, data types, services, and applications [138]. Numerous heterogeneous sensors in IoE can obtain data related to various parameters, ranging from location, speed, acceleration, temperature, ambient light, humidity and air pressure to biological signals. This sensory information is paramount for the functionality of the Metaverse, as real-world information provides the inputs to form and update the virtual space and allows interactions between the real world and the virtual world. Furthermore, Human-Computer Interaction (HCI) can provide more flexible ways to access the Metaverse through human sensing (e.g. gesture recognition) [5]. Numerous cameras can capture video sequences from multiple angles to recognize human activities through advanced AI-enabled computer vision algorithms. In addition, the captured audio-visual information can be used to predict human emotions with the aid of smart wearables. These smart wearables can also capture data that are useful to obtain health metrics, such as heart rate, oxygen saturation level, body temperature, and electrocardiogram (ECG). 6G provides the ubiquitous, uninterruptible, ultra-high reliable/available and massive low-latency communication demanded by IoE [5, 137]. In addition, the edge-AI capabilities of 6G can process massive amounts of data collected from IoE devices to provide meaningful information for 6G applications and services. The integration of 6G and IoE will have the potential to enable many services, including the internet of medical things, smart healthcare, robotics, industry 5.0, smart grids, smart cities, and body area networks [137]. The superior connectivity offered through 6G, with features such as near real-time connectivity, extreme data rates, access to powerful computing resources at the network edge, and massive machine-type communication under strict delay constraints between heterogeneous sensory devices, will facilitate the smooth operation of the Metaverse services and applications [139, 140].
#### 4.6.2 How 6G IoE can help the Metaverse
6G IoE plays an important role towards enabling the Metaverse by supporting an extremely large number of users, sensors, and devices to connect and communicate seamlessly with extremely high data rates and ultra-low delay and jitter [137]. In addition, the data obtained through heterogeneous IoE devices can be processed using AI and ML through powerful Multi-access Edge Computing (MEC) resources in envisaged 6G networks. For instance, [141] discusses the expansion of IoE and how a multitude of sensors will enable the Extended Reality (XR) applications in the Metaverse. This work also explores the convergence of AI, MEC, robots, and Distributed Ledger Technologies, such as blockchain, towards expanding the horizons of IoT towards IoE and beyond, to provide a beyond-smartphone experience. The proposed multisensory architecture is capable of integrating ubiquitous and pervasive computing towards enhancing human perception through advanced XR experiences. This is performed by utilizing wearables and nearby network resources in the 6G era. Hence, the dawn and the evolution of IoE will facilitate cross-reality environments, such as the Metaverse, that can fuse real and virtual worlds with networked humans, avatars, and robots. In addition, 6G IoE enables "wireless sensing" to sense the behavior of surrounding humans and the environment [5]. The functionality of IoT is expanded from simply networking a large number of devices towards sensing through the wireless network. Various wireless signals, including Wireless Fidelity (WiFi), Zigbee, Bluetooth, and Radio-Frequency IDentification (RFID), are used as sensing mediums by analyzing the signal variation (e.g. signal blocking, signal reflection, and signal scattering) caused by surrounding humans and objects [142]. These variations change signal properties, such as phase, frequency and amplitude, which can be inferred through parameters including Received Signal Strength (RSS), Channel State Information (CSI), and Doppler shift. Together with signal preprocessing techniques, such as filtering and de-noising to minimize the effect of signal interference and noise, changes in the environment can be recognized by identifying distinguishable unique features using ML models (a minimal illustrative sketch of such CSI-based sensing is given below). The accuracy of such predictions can be enhanced through the widespread adoption of mmWave and MIMO technologies. In addition, an Integrated Sensing and Communication (ISAC) system, where the communication system and IoE hardware are jointly designed, can improve the accuracy of wireless sensing while enhancing spectrum efficiency and minimizing hardware implementation cost [143]. However, modelling such systems, providing real-time access to powerful computational resources for data processing through advanced AI and ML schemes, and providing real-time ultra-low latency communication with seamless coverage require beyond-5G network capabilities that are expected to be facilitated by emerging 6G networks.
#### 4.6.3 Summary
The evolution of IoT towards IoE with the dawn of 6G provides seamless connectivity, extreme data rates, ultra-low latency, ultra-high reliable/available communication, and real-time access to powerful Edge-AI-enabled computational resources to facilitate the Metaverse applications. 6G IoE also facilitates advanced wireless sensing with mmWave and MIMO technologies.
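The following minimal sketch illustrates the wireless-sensing pipeline described above: CSI-style amplitude windows are reduced to simple variance and spectral features and classified with a nearest-centroid rule. The synthetic signal model, feature choices, and class labels are assumptions made purely for illustration; a real system would use measured CSI and a trained ML model.

```python
# Minimal, illustrative CSI-based activity sensing: hand-crafted features
# plus a nearest-centroid classifier. Synthetic data stands in for real CSI.
import numpy as np

rng = np.random.default_rng(0)

def synth_window(activity: str, n: int = 256) -> np.ndarray:
    t = np.arange(n)
    amp = 1.0 + 0.01 * rng.standard_normal(n)        # static multipath + noise
    if activity == "walking":
        amp += 0.2 * np.sin(2 * np.pi * 0.02 * t)    # body-induced fading tone
    return amp

def features(csi_amp: np.ndarray) -> np.ndarray:
    spec = np.abs(np.fft.rfft(csi_amp - csi_amp.mean()))
    return np.array([csi_amp.var(), spec[1:].max()]) # variance + strongest motion tone

labels = ["empty", "walking"]
centroids = {lab: np.mean([features(synth_window(lab)) for _ in range(20)], axis=0)
             for lab in labels}                       # per-class feature centroids

def classify(csi_amp: np.ndarray) -> str:
    f = features(csi_amp)
    return min(centroids, key=lambda lab: np.linalg.norm(f - centroids[lab]))

print(classify(synth_window("walking")))  # expected: "walking"
print(classify(synth_window("empty")))    # expected: "empty"
```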
The development of ISAC, harnessing the extreme communication capabilities and Edge-AI processing of 6G networks, can further improve the capabilities of 6G IoE and thereby enable emerging Metaverse applications. 6G IoE features that enable the Metaverse applications are illustrated in Fig. 13.

Figure 13. 6G IoE for the Metaverse.

### 4.7 Other 6G Technologies
#### 4.7.1 Extended Reality
Extended Reality (XR) combines Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) to blur the border between the physical and virtual worlds, with wearables supporting human-machine interactions with real and computer-generated environments [137]. 6G is capable of facilitating the massive connectivity, extremely low latency and extremely high data rates demanded by XR applications. Together with Edge-AI capabilities, 6G can facilitate seamless 3C (computing, caching and communication) services for XR applications. Many sensors are used for collecting data on user location, orientation and movements. XR enables telepresence, facilitating various aspects of human life, such as work, education, shopping, healthcare, tourism, and entertainment [144]. For instance, [145] explores how XR impacts the six dimensions of workload defined by the NASA Task Load Index (NASA-TLX), namely mental demand, physical demand, temporal demand, performance, effort, and frustration, and the overall workload in the retail sector. The results of the study indicate that although VR alone did not have a significant impact on the various dimensions of workload, XR had a significant impact on performing shopping-related tasks. In addition, [146] presents how users can actively engage with 3D content to stimulate healthy behaviour using XR in the Metaverse. This work discusses how XR can be effectively used in the Metaverse to address long-term health issues and challenges. Accordingly, XR can be identified as an important enabler to provide services efficiently using the Metaverse. However, challenges such as limitations in physical and cognitive resources, lack of experience with VR environments, and difficulties in using existing XR devices for prolonged periods need to be addressed towards utilizing XR for the Metaverse applications in future 6G networks.
#### 4.7.2 Digital Twins
The Metaverse applications demand next-generation networks to facilitate high processing capabilities. These can be provided through the edge-AI capabilities of emerging 6G networks. Digital Twins (DT) can be an important enabler of the cloud-native network paradigm, which can efficiently support the Metaverse [147]. DTs act as a digital representation of humans and things in cyberspace. Cybertwins can provide a multitude of services for the Metaverse, including acting as a communication assistant, logging network behavior, and owning digital assets, in a flexible, scalable, secure and reliable manner. 6G IoE can play a key role towards facilitating DTs. In [148], the authors discuss how to utilize a cloud network operating system that can work distributively in a real-time multi-agent platform to allocate 3C resources, which are considered to be integral components of envisaged 6G networks [137]. In addition, the Metaverse applications demand 6G networks to support intelligent and fully autonomous operation. In response, [149] proposes a Digital Twin Edge Network (DITEN).
DITEN is able to combine Multi-access Edge Computing (MEC) together with DT to improve the network throughput, enhance network security, and reduce the cost of 3C services. DITEN continuously monitors the network status for DT modelling, updating and deployment, and performs tasks such as routing and resource management efficiently to enable applications such as the Metaverse. However, there are several open issues and challenges, including high-precision DT modelling, DT migration for mobility, and ensuring security and privacy.
#### 4.7.3 Space-Air-Ground Integrated Network (SAGIN)
Global sensing and seamless connectivity are paramount to providing uninterrupted access to the Metaverse applications through 6G networks. However, ground networks alone are not capable of providing ubiquitous connectivity to the Metaverse applications in a reliable and cost-efficient fashion [149]. This is especially evident in mountainous areas and in disaster situations. As a solution, Non-Terrestrial Networks (NTN) towards 3D networking are proposed for 6G networks [137]. NTN provides 3D network coverage and backhauling through integrating Unmanned Aerial Vehicles (UAVs), satellites, balloons and High Altitude Platform (HAP) stations [150]. 3D networking expands the NTN paradigm by incorporating space, underground, and underwater communication [151]. For instance, project 3GPP TR 38.811 intends to support non-terrestrial networks by considering the architecture and channel models across satellite, air access, and terrestrial cellular networks [137]. In addition, multi-dimensional networks named Space-Air-Ground Integrated Networks (SAGIN) envisage the deep integration of space nodes (e.g. satellites), air nodes (e.g. UAVs, drones, air balloons), and terrestrial network nodes (e.g. 5G and beyond network nodes) towards providing seamless connectivity [5]. However, the seamless inter-operation and resource management among multiple types of networks require unified access methods and network standards towards facilitating seamless connectivity for the Metaverse applications.
## 5 6G Integration Challenges
In this section, we present the challenges raised by limited backwards compatibility with existing devices, lack of standards, accountability, resilience & privacy preservation, energy inefficiency, and radio design & carrier bandwidths while integrating 6G with the Metaverse.
### _Limited Backwards Compatibility with Existing Devices_
#### 5.1.1 Introduction to issues
Effective communication in the Metaverse requires compatibility with previous-generation networks such as 4G and 5G. Although some Metaverse applications can operate on devices with existing network capabilities, with the deployment of 6G these devices may become worthless.
#### 5.1.2 Possible solutions
A potential solution to address this issue is backward compatibility of the 6G network with existing devices, which enables the addition of high-capacity communication in the Metaverse and also delivers faster data rates for applications requiring real-time processing and integration. The 6G networks should support the features of the previous generations of communications, like the 5G network, for some time, enabling progressive migration of the Metaverse devices and lowering the overall cost of 6G and Metaverse integration. In order to evaluate backward compatibility, mobile operators need to consider how the 5G and 6G core networks are connected and work on the 3GPP standard accordingly.
### _Lack of Standards_
#### 5.2.1 Introduction to issues
There is a concern among users about the Metaverse's potential legal consequences. If a problem arises, there is no agreed-upon policy framework or set of standards for the integration of 6G with the Metaverse. Any problem with the integration of these technologies will affect the trust in, and the capabilities of, the 6G networks and the Metaverse.
#### 5.2.2 Possible solutions
These challenges may be resolved by establishing a forum involving service providers, researchers, and legal counsel to develop standards and policy frameworks that address concerns about user ethics, safety, and privacy while integrating 6G with the Metaverse. The users should be provided with complete control and transparency over their data transmitted over 6G networks, which ensures their privacy in the Metaverse. As a consequence, this will raise the bar for the 6G communication networks and the Metaverse, which will increase trust among the users. For example, though ORAN is not yet fully functional, it has an alliance focusing on the integration issues of multiple service providers, which will enhance the bandwidth availability and security of the overall networks.
### _Accountability, Resilience and Privacy Preservation_
#### 5.3.1 Introduction to issues
The functionalities across the 6G-integrated Metaverse will be mostly automated based on the decisions made by AI. Any misclassification made by these decisions that cannot be traced, because of the black-box nature of AI, will have a direct effect on the accountability of the 6G-integrated Metaverse.
#### 5.3.2 Possible solutions
Explainable AI (XAI) is a promising solution for this issue, which allows us to understand the misclassification issues and improve trust in the decisions made in the 6G-integrated Metaverse. The usage of XAI will aid in pinpointing the problem's cause, assist the Metaverse's administrators in understanding the issue, and motivate them to prevent a recurrence; this enhances the transparency of auditing of issues related to the 6G-integrated Metaverse. Additionally, existing and newly proposed AI algorithms need to be analysed considering their accountability, resilience and privacy preservation capabilities within the context of future networks.
### _Energy Inefficiency_
#### 5.4.1 Introduction to issues
The integration of processing, communication, sensing, and control capabilities inside a 6G network enables a seamless transition between the virtual and physical worlds, consequently contributing to the realisation of the Metaverse. To support the requirements of the Metaverse, the cellular capacity should be increased on top of the existing network infrastructure. This will require 6G to deploy more micro- and even smaller cells in the network. This increases technological and network complexity and will further strain the energy efficiency and sustainability of the Metaverse.
#### 5.4.2 Possible solutions
The integration of AI with 6G will address the issues of energy efficiency and network complexity, opening the door to a sustainable Metaverse ecosystem. The use of Zero-touch network & Service Management (ZSM) in 6G provides an intelligent network for the Metaverse by enabling effective data access and cross-domain data exposure, by permitting operational data to be maintained apart from the management applications. This will also improve the reliability of communication in the Metaverse.
\begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multicolumn{1}{c}{} & \multicolumn{6}{c}{**6G for the Metaverse - technical perspective**} \\ \hline Ref. & Validation of digital assets & Cross platform integration and interoperability & Efficient support of AI & High speed data connection & Low Latency Communication & Computer Vision & High transaction and integration privacy \\ \hline 7 & & & & & & x & x \\ \hline 30-34 & x & & & & & x & x \\ \hline 35-38 & & x & & & & & x \\ \hline 39-41 & & & x & & & & \\ \hline 42-47 & & & x & x & x & & \\ \hline 48-49 & & & & x & x & x & \\ \hline 50-53 & & & x & & x & x & \\ \hline 54-56 & & & x & & & x & \\ \hline \multicolumn{1}{c}{} & \multicolumn{6}{c}{**The role of 6G technologies for the Metaverse**} \\ \hline Ref. & AI & Blockchain & Edge & OpenRAN & Cloud & IoE/IoT & XR & Digital Twin \\ \hline 57-84 & x & & & & & & x & \\ \hline 85-104 & & x & x & & & & \\ \hline 105-125 & x & & x & x & x & & \\ \hline 126-130 & & & & x & & & \\ \hline 131-134 & & & & & x & & \\ \hline 135-142 & & & x & & x & x & x & \\ \hline 2, 143-148 & x & x & x & & x & x & x \\ \hline \end{tabular} \end{table} TABLE IV: Summary of related works
### _Radio Design and Carrier Bandwidths_
#### 5.5.1 Introduction to issues
One of the main goals of 6G is to achieve Tb/s data rates, which requires large bandwidths (10-100 GHz of spectrum in the THz bands) and therefore the aggregation of a large number of carriers to create larger bandwidths. Designing radios that work at sub-THz bands presents a significant challenge to industry and research due to the complexity of the associated RF circuits. Finding the right balance in terms of transceiver efficiency, power generation, heat dissipation and cost is critical for the successful adoption of radios in the sub-THz bands.
#### 5.5.2 Possible solutions
6G should provide more bandwidth and lower latency to improve the overall connectivity of the Metaverse. On 6G networks, there should be a 10- to 100-fold reduction in latency and an increase in bandwidth capacity for the users of the Metaverse to have the best immersive experiences. Every piece of networking hardware must have its materials, component manufacture, and antenna architecture modified. To comply with the 6G standard, base station operations must change. 6G should depend on tightly focused, packaged radio signals rather than "omnidirectional" radio channels. Moreover, tightly focused radio signals use less energy, offer higher transceiver efficiency, dissipate less heat, and cost less.
## 6 Metaverse projects
This section provides an overview of research projects and developments that are already underway towards realizing the Metaverse by harnessing the extreme network capabilities of envisioned B5G and 6G mobile networks.
### _Meta_
Meta, formerly known as Facebook, is presently working on combining social media with VR and AR towards realizing the Metaverse for users to work, play and interact with other users online [152]. This is possible due to the extreme mobile broadband capabilities, near-zero latency, extreme reliability and availability, and network intelligence of emerging mobile networks. Users can join the Metaverse using VR headsets. The envisaged applications will range from connecting people, education, training, healthcare and the workplace to gaming. For instance, education technologies are expected to broaden their horizons from platforms to passively absorb information to learning by doing and experiencing through 3D immersion.
In addition, Meta is working on building the Metaverse responsibly, ensuring safe, secure, and transparent operation. Meta has also launched the Meta Immersive Learning Academy and Research Fund to collaborate in building a solid and interoperable Metaverse platform. In addition, their Spark AR platform enables the creation and sharing of AR experiences through their apps and devices. Furthermore, Meta is working on building economic opportunities in the Metaverse to maintain and thrive in a digital economy in the future.
### _VR Hive_
VR Hive [153] aims to transform e-learning through VR from the comfort of the home or workplace. This project aims to design and develop a fully immersive learning platform over 6G mobile networks to feature the Metaverse, which can be used to provide education, training, holographic telepresence, and real-time communication. These features will be provided through the extreme network capabilities of emerging 6G networks, such as near real-time ultra-reliable communication with ultra-low latency and edge intelligence. Relevant infrastructure and network-aware immersive and adaptive environments will be developed to facilitate education through the range of products offered through VR Hive.
### _6G Life_
6G Life [154] aims to facilitate the envisaged digital transformation, where 6G mobile networks will play a significant role in this revolution. The project not only aims to develop the digital infrastructure and high-performance computing platforms but also concentrates on the political and social issues that must be addressed to realize future 6G applications. Realizing 6G applications will require diverse communication capabilities, including human-machine interaction in virtual worlds such as the Metaverse. The project aims to provide innovative solutions in the areas of scalable communication, flexible software concepts, and adaptive hardware platforms. The four key aspects considered by the project are latency, resilience, security and sustainability. The research work, including both basic and applied research, is mainly performed considering Industry 4.0/5.0 and intelligent healthcare applications.
### _Decentraland_
Decentraland [155] is a decentralized virtual world where users can create objects, trade and interact with others in a virtual world. This also allows users to control policies on the operation of the virtual world. Decentraland operates as a Decentralized Autonomous Organization (DAO), where it owns smart contracts and assets covering virtual land and estate contracts, wearables and other devices, and the marketplace to trade virtual assets. These developments can be realized through the capabilities of emerging 6G mobile networks, where extreme mobile connectivity will facilitate seamless connectivity to the virtual world. Furthermore, blockchain operation and smart contract execution will be enabled through the edge computing capabilities of the 6G networks. Similar projects, such as Sandbox [156], Axie Infinity [157], and Illuvium [158], also envisage harnessing the capabilities of blockchain and emerging mobile networks towards realizing the Metaverse.
### _Luxembourg Metaverse_
The Luxembourg Metaverse [159] project aims to build a digital twin of an area of Luxembourg City. These digital twins can be explored by the public and the industry to provide multiple working opportunities.
The Luxembourg 5G-6G network digital twin aims to enable seamless and highly capable network connectivity to facilitate real-time services, banking on emerging communication networks such as beyond-5G and 6G. This project will also raise awareness of the advantages and applications of the Metaverse among the public and the industry. Furthermore, the project expects to optimise and secure the Metaverse deployments while integrating the latest network developments in a cost-effective and efficient manner. The 6G technological directions explored by the 6G Metaverse projects presented in this section are tabulated in TABLE V.
## 7 Conclusion
This paper presents the role of 6G towards realizing the Metaverse applications and services. The paper presents the role of 6G technologies in the immersive, smart, scalable and secure realization of the Metaverse. Furthermore, the paper presents how various 6G capabilities play a key role towards the realization of the Metaverse, including the role of 6G for cross-platform integration, efficient support for AI, high-speed data connectivity, efficient user interaction, low-latency communication, computer vision, high transaction integration, and security and privacy protection. Consequently, the integration challenges of 6G with the Metaverse are elaborated, while providing several research directions towards realizing the Metaverse owing to the capabilities of future 6G networks.
The concept of the Metaverse aims to create a full-scale extended-reality environment for delivering next-generation applications and services. The development of the Metaverse is supported by many technologies, including 5G, artificial intelligence, edge computing, and extended reality. Advances in 6G point to significant progress in the development of the Metaverse, enabling near-zero latency, a wealth of new services, and improvements to real-world infrastructure. This paper presents the benefits of 6G for Metaverse services and provides an overview of the required technical requirements. The paper clarifies the concept of the Metaverse and the envisioned technical capabilities of 6G mobile networks. It then explains the role of 6G in the development of the Metaverse, focusing on aspects such as the validation of digital assets, interoperability, the efficiency of user interaction in the Metaverse, and the related security and privacy aspects
2309.16817
Safe Non-Stochastic Control of Control-Affine Systems: An Online Convex Optimization Approach
We study how to safely control nonlinear control-affine systems that are corrupted with bounded non-stochastic noise, i.e., noise that is unknown a priori and that is not necessarily governed by a stochastic model. We focus on safety constraints that take the form of time-varying convex constraints such as collision-avoidance and control-effort constraints. We provide an algorithm with bounded dynamic regret, i.e., bounded suboptimality against an optimal clairvoyant controller that knows the realization of the noise a priori. We are motivated by the future of autonomy where robots will autonomously perform complex tasks despite real-world unpredictable disturbances such as wind gusts. To develop the algorithm, we capture our problem as a sequential game between a controller and an adversary, where the controller plays first, choosing the control input, whereas the adversary plays second, choosing the noise's realization. The controller aims to minimize its cumulative tracking error despite being unable to know the noise's realization a priori. We validate our algorithm in simulated scenarios of (i) an inverted pendulum aiming to stay upright, and (ii) a quadrotor aiming to fly to a goal location through an unknown cluttered environment.
Hongyu Zhou, Yichen Song, Vasileios Tzoumas
2023-09-28T19:51:39
http://arxiv.org/abs/2309.16817v1
# Safe Non-Stochastic Control of Control-Affine Systems: An Online Convex Optimization Approach
###### Abstract
We study how to safely control nonlinear control-affine systems that are corrupted with bounded _non-stochastic noise_, _i.e._, noise that is unknown a priori and that is not necessarily governed by a stochastic model. We focus on safety constraints that take the form of time-varying convex constraints such as collision-avoidance and control-effort constraints. We provide an algorithm with bounded _dynamic regret_, _i.e._, bounded suboptimality against an optimal clairvoyant controller that knows the realization of the noise a priori. We are motivated by the future of autonomy where robots will autonomously perform complex tasks despite real-world unpredictable disturbances such as wind gusts. To develop the algorithm, we capture our problem as a sequential game between a controller and an adversary, where the controller plays first, choosing the control input, whereas the adversary plays second, choosing the noise's realization. The controller aims to minimize its cumulative tracking error despite being unable to know the noise's realization a priori. We validate our algorithm in simulated scenarios of (i) an inverted pendulum aiming to stay upright, and (ii) a quadrotor aiming to fly to a goal location through an unknown cluttered environment.
Non-stochastic control, online learning, regret optimization, robot safety
## I Introduction
In the future, robots will be leveraging their on-board control capabilities to complete safety-critical tasks such as package delivery [1], target tracking [2], and disaster response [3]. To complete such complex tasks, the robots need to efficiently and reliably overcome a series of key challenges:
_Challenge I: Time-Varying Safety Constraints:_ The robots need to ensure the safety of their own and of their surroundings. For example, robots often need to ensure that they follow collision-free trajectories, or that their control effort is kept under prescribed levels. Such safety requirements take the form of time-varying state and control input constraints: _e.g._, as robots move in cluttered environments, the current obstacle-free environment changes (Figure 1). Accounting for such constraints in real-time can be challenging, requiring increased computational effort [4, 5]. Hence, several real-time state-of-the-art methods do not guarantee safety at all times [6, 7].
_Challenge II: Unpredictable Noise:_ The robots' dynamics are often corrupted by unknown non-stochastic noise, _i.e._, noise that is not necessarily i.i.d. Gaussian or, more broadly, that is not governed by a _stochastic_ (probability) model. For example, aerial and marine vehicles often face non-stochastic winds and waves, respectively [8]. But the current control algorithms primarily rely on the assumption of known stochastic noise, typically Gaussian, compromising the robots' ability to ensure safety in real-world settings where this assumption is violated [9].
_Challenge III: Nonlinear, Control-Affine Dynamics:_ The dynamics of real-world robots are often nonlinear, in particular, control-affine. For example, the dynamics of quadrotors and marine vessels take the form of \(x_{t+1}=f\left(x_{t}\right)+g\left(x_{t}\right)u_{t}+w_{t}\), where (i) \(x_{t}\) is the robot's state, (ii) \(f\left(x_{t}\right)\) and \(g\left(x_{t}\right)\) are system matrices characterizing the robot's dynamics, (iii) \(u_{t}\) is the control input, and (iv) \(w_{t}\) is the disturbance [10].
Handling nonlinear dynamics often requires complex control policies, requiring, in the simplest case, linearization at each time step and, thus, additional computational effort [4]. The above challenges motivate control methods that efficiently and reliably handle control-affine systems, and guarantee the satisfaction of time-varying safety constraints even in the presence of non-stochastic noise. State-of-the-art methods to this end typically rely on robust control [11, 12, 13, 14, 15, 16] or on online learning [17, 18, 19, 20, 21, 22, 23]. But the robust methods are often conservative and computationally heavy: not only do they simulate the system dynamics over a lookahead horizon; they also assume a worst-case noise realization, given a known upper bound on the magnitude of the noise. To reduce conservatism and increase efficiency, researchers have also focused on online learning methods via Online Convex Optimization (OCO) [24]. These methods rely on the _Online Gradient Descent_ (_OGD_) algorithm or its variants, offering bounded _regret_ guarantees, _i.e._, bounded suboptimality with respect to an optimal (possibly time-varying) clairvoyant controller that knows the future noise realization a priori [17, 18, 19, 20]. However, the current online methods address only linear dynamical systems and time-invariant safety constraints.
**Contributions.** We provide an algorithm for the problem
Fig. 1: **Safe non-stochastic control example: Autonomous flight in cluttered environments subject to unknown wind disturbances. In this paper, we focus on safe non-stochastic control of control-affine systems where the robots' capacity to select effective control actions fast is challenged by (i) time-varying safety constraints, (ii) unknown, unstructured, and, more broadly, unpredictable noise, and (iii) nonlinear control-affine dynamics. For example, in package delivery with quadrotors, the quadrotors are required to fly to goal positions. But during such tasks, (i) the quadrotors need to ensure collision avoidance at all times, which requires control actions that respect _time-varying_ state and control-input constraints, (ii) the quadrotors may be disturbed by unpredictable wind gusts, and (iii) they need to account for their nonlinear, in particular, control-affine dynamics. These challenges stress the quadrotors' ability to decide effective control inputs fast, and to ensure safety. We aim to provide a control algorithm that handles these challenges, guaranteeing bounded suboptimality against optimal safe controllers in hindsight.**
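As a rough illustration of the online-learning viewpoint above, the sketch below runs projected online gradient descent on a feedback gain for a scalar control-affine system with bounded non-stochastic noise and a control-effort constraint. The dynamics, loss, and step size are assumed values; this is a generic OCO/OGD toy example, not the algorithm proposed in this paper.

```python
# Minimal OGD sketch for a scalar control-affine system
#     x_{t+1} = f(x_t) + g(x_t) * u_t + w_t,
# with bounded non-stochastic noise w_t and |u_t| <= u_max.
# All model choices are assumptions made for illustration.
import numpy as np

f = lambda x: 0.9 * x + 0.2 * np.sin(x)   # nonlinear drift (assumed)
g = lambda x: 1.0 + 0.1 * np.cos(x)       # control-affine input gain (assumed)

u_max, k_max, eta = 1.0, 2.0, 0.05        # input bound, gain bound, OGD step size
x, k = 2.0, 0.0
for t in range(200):
    u = float(np.clip(-k * x, -u_max, u_max))       # saturated feedback input
    w = 0.1 * np.sign(np.sin(0.3 * t))              # bounded, non-stochastic noise
    x_next = f(x) + g(x) * u + w

    # Per-round loss: squared tracking error plus control effort.
    # Chain-rule gradient w.r.t. the gain k (saturation corner ignored).
    du_dk = -x if abs(k * x) < u_max else 0.0
    dx_dk = g(x) * du_dk
    grad = 2.0 * x_next * dx_dk + 0.02 * u * du_dk
    k = float(np.clip(k - eta * grad, 0.0, k_max))   # projected OGD update
    x = x_next

# The state should settle to a small value despite the disturbance.
print(f"final state {x:.3f}, learned gain {k:.2f}")
```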
We are researching how to safely control nonlinear control-affine systems that are corrupted with bounded non-stochastic noise, meaning that we do not know the noise in advance and it is not necessarily governed by a stochastic model. We focus on safety constraints that take the form of time-varying convex constraints, such as collision avoidance and control effort constraints. We propose an algorithm with bounded dynamic regret, meaning that its suboptimality is bounded against an optimal clairvoyant controller that knows the realization of the noise beforehand. Our goal is to address the future of robot autonomy, where robots will be able to autonomously perform complex tasks despite unpredictable disturbances in the real world, such as wind gusts. To develop the algorithm, we frame the problem as a sequential game between a controller and an adversary. The controller starts the game by choosing the control input, while the adversary chooses the noise's realization. The controller aims to minimize its cumulative tracking error while being unaware of the noise'
2306.17354
Toward Intelligent and Efficient 6G Networks: JCSC Enabled On-Purpose Machine Communications
Driven by the vision of "intelligent connection of everything" toward 6G, the collective intelligence of networked machines can be fully exploited to improve system efficiency by shifting the paradigm of wireless communication design from naive maximalist approaches to intelligent value-based approaches. In this article, we propose an on-purpose machine communication framework enabled by joint communication, sensing, and computation (JCSC) technology, which employs machine semantics as the interactive information flow. Naturally, there are potential technical barriers to be solved before the widespread adoption of on-purpose communications, including the conception of machine purpose, fast and concise networking strategy, and semantics-aware information exchange mechanism during the process of task-oriented cooperation. Hence, we discuss enabling technologies complemented by a range of open challenges. The simulation result shows that the proposed framework can significantly reduce networking overhead and improve communication efficiency.
Ping Zhang, Heng Yang, Zhiyong Feng, Yanpeng Cui, Jincheng Dai, Xiaoqi Qin, Jinglin Li, Qixun Zhang
2023-06-30T01:18:43
http://arxiv.org/abs/2306.17354v1
# Toward Intelligent and Efficient 6G Networks: JCSC Enabled On-Purpose Machine Communications
###### Abstract
Driven by the vision of "intelligent connection of everything" toward 6G, the collective intelligence of networked machines can be fully exploited to improve system efficiency by shifting the paradigm of wireless communication design from naive maximalist approaches to intelligent value-based approaches. In this article, we propose an on-purpose machine communication framework enabled by joint communication, sensing and computation (JCSC) technology, which employs machine semantics as the interactive information flow. Naturally, there are potential technical barriers to be solved before the widespread adoption of on-purpose communications, including the conception of machine purpose, a fast and concise networking strategy, and a semantics-aware information exchange mechanism during the process of task-oriented cooperation. Hence, we discuss enabling technologies complemented by a range of open challenges. Simulation results show that the proposed framework can significantly reduce networking overhead and improve communication efficiency.
## Introduction
With the deep integration of wireless communications and artificial intelligence, we are at the dawn of the era of ubiquitous intelligence, in which machines with onboard sensors and computing units are connected seamlessly to enable profound progress in vertical industries, including the intelligent vehicular network (IVN), telemedicine, and smart manufacturing [1]. It is expected that there will be over 125 billion intelligent machines (IMs) connected to the Internet by 2030 [2], pointing at a future where IMs are the major parts of the communication grid. This trend poses tremendous challenges for the current communication system, and calls for revolutionary innovations in network design philosophy to support intelligent machine-type communications (IMTC). In particular, the network key performance indicators (KPIs) for IMTC systems have contrasts and synergies with those of current communication systems. The current communication systems have been mainly designed for content delivery, where the focus is to optimize one-hop data transmission capabilities. In sharp contrast to that, sophisticated machine-based applications usually involve cooperation among groups of IMs to complete diversified tasks [3], which requires frequent and reliable sensing and control data exchange among individuals. The closed-loop communication capabilities, consisting of sensing data collection, data processing and fusion, and control data dissemination, are the focus and characteristic of IMTC optimization [4]. Under such emerging scenarios, the naive maximalist approach of blindly improving data transmission capabilities may scale up the networking complexity in terms of signaling cost and protocol overhead. We argue that the collective intelligence of networked IMs can be fully exploited to obtain situational awareness and identify the purpose of communication. In this sense, the paradigm of network design for IMTC may shift toward generating the most valuable information that can be efficiently transmitted and reconstructed at the right time, instantly, to complete a specific task. Note that a certain task is the ultimate goal of a single IM or multiple IMs, and this goal is composed of different machine purposes. Therefore, IMTC would have very stringent performance requirements in _both data transmission and data processing_ to support collaborative thinking and decision-making.
In this article, we employ the intelligent vehicular network (IVN) [5], which is identified as one of the most complicated scenarios of IMTC, as an illustrative application that would benefit from our proposed joint communication, sensing and computation (JCSC) enabled on-purpose machine communication paradigm. Here, we provide a general view of key challenges in the future IVN.
**Whom to Communicate With:** To fully exploit the collective intelligence of IMs for task-oriented cooperation, a stable and concise topology has to be constructed to realize efficient information exchange. For example, frequent and timely information exchange is required among certain groups of vehicles for cooperative vehicle control in autonomous driving. To combat the highly dynamic topology and time-varying communication environment, the ubiquitous computing and sensing capability at each vehicle should be fully exploited to enable fast networking based on the specific purpose of communication extracted from massive amounts of sensing data.
**How to Transmit:** Considering the emergence of collective decision-making demands, blind exchange of information by broadcasting may congest the network due to limited system resources, and may affect the privacy and security of data transmission. Moreover, the highly dynamic nature of vehicular networks poses great challenges to reliable data transmission. Considering the fact that vehicles can leverage their sensing capability for target positioning, the transmitted waveform should be carefully re-designed to integrate sensing and communication functionalities to achieve secured and efficient point-to-point information exchange.
**What to Transmit:** Although higher frequency bands are proposed to be utilized in the 6G network, the spectrum resource still struggles to meet the explosive growth of data transmission demand. For example, a self-driving vehicle with ten high-resolution cameras produces two gigapixels per second [6]. The irrelevant and redundant information in sensing data leads to low system efficiency [7]. Therefore, it is of critical importance that valuable information can be extracted from raw data to realize concise and efficient information exchange.
These challenges motivate us to conceive this article from an innovative perspective on intelligent and efficient machine-type communications (MTC). Note that the technical challenges for MTC network design stem from the nature of how machines sense information and their intrinsic purpose of networking. In what follows, this article first provides a systematic introduction to IMTC systems. Then a JCSC enabled on-purpose machine communication (JCSC-OMC) framework is proposed, as shown in Fig. 1. We also give an in-depth analysis of the enabling technologies, complemented by a range of open challenges. Finally, evaluation results of our proposed framework are demonstrated.
## Intelligent Machine-Type Communications
Considering the evolution of cellular systems, the wireless communication paradigm has evolved from human-centered voice service to ubiquitous information exchange for both human-type communications (HTC) and MTC. To support MTC, 5G systems are tailored to satisfy the need for machine-type connections [8]. With the integration of artificial intelligence and wireless communication toward the 6G system [1], the concept of IMTC is proposed, where the collective intelligence of multiple IMs will be leveraged to achieve precise and efficient autonomous decision making.
Different from traditional HTC design, IMTC calls for ultra-reliable and low-latency closed-loop communication and high-precision positioning to enable multi-node collaboration and autonomous decision-making capabilities for IMs. In this article, we propose a novel framework of on-purpose machine communications based on the following considerations. To collaboratively accomplish specific tasks, massive sensing and control data is required to be exchanged between IMs. Assisted by JCSC technology, IMs could realize on-purpose communications by performing purpose analysis based on temporal-spatial correlations of sensing data, which can significantly reduce networking overhead and guarantee data privacy through secured directional transmission. Equipped with sensors and computing units, IMs can obtain human-like situational awareness by utilizing machine learning methods. To efficiently realize on-purpose communications, IMs do not need to transmit all the raw data, but only the semantic information that they have reached a consensus on, which obviously improves the spectrum efficiency.

Figure 1: JCSC-based framework for on-purpose machine communications.

## Conception, Advantage and Framework
As discussed in our previous article [9], the new IMTC paradigm proposed in this article is based on both system theory and information theory, to optimize the whole network. More specifically, the system entropy is introduced in this article to evaluate the orderly evolution of the JCSC-OMC network, which enables the network topology to be continuously reshaped with the time-varying wireless environment. In view of this, to achieve multi-node collaboration and high-efficiency decision making for IMs, the JCSC-OMC network guided by system entropy is proposed in this article, aiming at realizing on-purpose machine communication employing machine semantics as the interactive information flow, which can satisfy the ultra-reliable and low-latency closed-loop communication requirements for massive amounts of sensing, control and traffic data, and effectively improve resource utilization and communication efficiency.
In the proposed framework, the servers in different layers can continuously maintain a shared semantic information library as the grounded knowledge accumulates incrementally. In this way, compared with the current communication paradigm of transmitting all raw data, the same function can be realized by exchanging concise semantic information between machines, thus facilitating the autonomous collaboration of IMs to accomplish tasks in complex environments.
## JCSC-Based On-Purpose Communication Framework
To support the _on-purpose machine communication_ employing _machine semantics_ as the interactive information flow, we propose the JCSC-OMC framework as shown in Fig. 1. Moreover, the functions of each layer and the logical relationships between different layers are illustrated in Fig. 2.
**Terminal Layer**: The terminal layer consists of various IMs equipped with different sensing units (camera, radar, LiDAR, etc.). The raw data can be further processed based on the global (regional) semantic information library issued by the cloud (edge) server; that is, customized features can be extracted from the raw data according to different demands. The raw data is converted into customized semantic information suitable for the specific IMs, which can realize the same functions by sending fewer bits (a minimal illustrative sketch of such a shared semantic codebook is given below). The location of potential communication targets can be discovered instantly by IMs based on the sensing data. Then directional beams are transmitted to achieve on-purpose communications once IMs generate communication purposes.
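A minimal sketch of the shared semantic information library idea follows: the terminal maps raw sensing features to the index of the nearest entry in a library mirrored by the edge/cloud servers, and transmits only that index. The library entries, labels, and feature vectors are invented for illustration and are not part of the JCSC-OMC specification.

```python
# Illustrative sketch only: a shared semantic library held by cloud/edge servers
# and mirrored on each IM. Instead of streaming raw sensor data, the IM sends
# the index of the closest semantic entry; the receiver looks up the same entry.
import numpy as np

SEMANTIC_LIBRARY = {                      # index -> (label, prototype feature vector)
    0: ("vehicle_ahead_braking", np.array([0.9, 0.1, 0.0])),
    1: ("pedestrian_crossing",   np.array([0.1, 0.9, 0.0])),
    2: ("road_clear",            np.array([0.0, 0.0, 1.0])),
}

def encode(raw_features: np.ndarray) -> int:
    """Terminal side: map raw sensing features to the nearest semantic index."""
    return min(SEMANTIC_LIBRARY,
               key=lambda i: np.linalg.norm(raw_features - SEMANTIC_LIBRARY[i][1]))

def decode(index: int) -> str:
    """Receiver side: recover the agreed semantic meaning from the shared library."""
    return SEMANTIC_LIBRARY[index][0]

raw = np.array([0.8, 0.2, 0.1])           # e.g. features fused from camera + radar
msg = encode(raw)                          # only a small integer goes over the air
print(msg, decode(msg))                    # -> 0 vehicle_ahead_braking
```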
In addition, distributed intelligent computation, such as federated learning, is considered for purpose analysis, data fusion, and feature extraction, to realize autonomous sensing and cognition, fast networking, and decision making for IMs by instantly encoding (decoding) semantic information at the transmitter (receiver) and performing on-purpose communications.
**Edge Layer**: The edge layer consists of base stations (BSs)1 and mobile edge computing servers (MECSs). The BS can utilize JCSC technology to accelerate the synchronization process with the IMs within its coverage range, reducing the communication overhead and realizing on-purpose communications. In general, there are two types of synchronization approaches utilized by the BS: satellite synchronization and IEEE 1588-v2 protocol synchronization. The BS can achieve a bandwidth of more than 100 MHz and a capacity at the Gb/s level. Since MECSs have powerful computing ability, MECSs can receive all kinds of sensing data from IMs and BSs for regional-level data fusion to achieve regional autonomy, enhancing the robustness of the edge network. Further, MECSs are installed with the regional-level semantic information library, which is customized for specific IMs in different regions. The MECS can also update the regional-level semantic information library in real time based on the state changes and functional evolution of IMs. Also, digital twin technology can be used to construct virtual twins of various IMs in a certain region based on the sensing data from IMs and BSs, thus completing the mapping of IMs from physical space to digital space, so as to realize the life-cycle management of IMs and provide logical guidance for IMs to perform on-purpose communications.
Footnote 1: The RSU can be considered as a kind of BS.
**Cloud Layer**: The cloud layer consists of clustered servers with powerful computing ability and storage capacity. The cloud servers can aggregate and process various types of data from MECSs in different regions; hence the cloud layer can perform global autonomous scheduling and decision making based on massive data, which realizes the self-optimization, self-configuration, and self-healing of the network. Different from the regional-level semantic information library that is private to a certain region, the cloud servers are installed with the global-level semantic information library, which is used as a common library for all IMs in different regions.
## Enabling Technologies
In this section, we present a brief overview of the enabling technologies along with a range of open challenges.

Figure 2: The function description of the JCSC-OMC framework.

### Joint Communication, Sensing and Computation
The JCSC technology includes both the joint communication and sensing (JCS) technique and the intelligent computation technique. The state-of-the-art JCS technique utilizes unified radio frequency (RF) transceivers and frequency band resources to achieve both wireless communication and sensing functions [11, 12]. The JCS transceiver can receive the reflection of transmitted communication beams and conduct correlation detection between the received reflection signals and the known transmit signals to obtain the motion state data contained in the reflection channel state information, such as the range and radial velocity of objects in the beam direction. The comparison between conventional signal processing and intelligent JCS signal processing is shown in Fig. 3.
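The correlation-detection step described above can be illustrated with the following minimal sketch: the received echo of a known transmit sequence is matched-filtered to estimate the round-trip delay, and hence the range of the reflecting object. The waveform, sample rate, and geometry are assumed values for illustration, not the transceiver design of the JCSC system.

```python
# Minimal sketch (illustrative parameters): correlation detection for JCS sensing.
# Cross-correlating the received echo with the known transmit waveform gives the
# round-trip delay and thus the object range; Doppler processing is omitted.
import numpy as np

rng = np.random.default_rng(3)
fs = 100e6                                   # sample rate (assumed), 100 MHz
c = 3e8                                      # speed of light, m/s

tx = rng.choice([-1.0, 1.0], size=1024)      # known wideband transmit sequence
true_range_m = 45.0
delay_samples = int(round(2 * true_range_m / c * fs))

rx = np.zeros(2048)
rx[delay_samples:delay_samples + tx.size] += 0.3 * tx   # attenuated echo
rx += 0.05 * rng.standard_normal(rx.size)                # receiver noise

corr = np.correlate(rx, tx, mode="valid")    # matched-filter style correlation
est_delay = int(np.argmax(corr))
est_range = est_delay * c / (2 * fs)
print(f"estimated range: {est_range:.1f} m") # should be close to 45 m
```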
It is highly possible that the correlation between communication channel state information and sensing data, such as the range and Doppler, can be revealed. The realization of this technique will greatly reduce the computation load and processing delay of obtaining communication and sensing data. From this perspective, intelligent JCS signal processing is the focus of future research on JCS systems and is also the focus of our future work. The sensing data obtained additionally with the JCS technique can be taken as prior information that contains semantics for intelligent computation. The use of semantic information can significantly reduce the amount of communication data transmission, which saves more wireless resources for sensing. Artificial intelligence methods, such as supervised learning and unsupervised learning, can be used to enhance channel estimation and prediction, which improves beamforming performance. Moreover, reinforcement learning methods can be used to assist beam alignment, which will greatly enhance communication reliability and power efficiency. With more reliable communication ability, the performance of collaborative computing and sensing can be further guaranteed. Besides, the intelligent computation ability not only boosts the acquisition of semantic information but, more importantly, provides enough computing resources for the purpose analysis and generation of IMs at each time slot. In a nutshell, the JCSC technology is the key driving force of the proposed on-purpose machine communication paradigm.
**Challenge:** The surge of massive wireless terminals makes the interference problem prominent in the JCSC-OMC network. It is a key challenge to design JCS antenna arrays and transceivers that can be adaptively used for directional communication in various spectrum bands. A potential solution is to utilize a hybrid analog-digital transceiver design to adjust the spatial interval of the virtual antenna elements of massive antenna arrays for adapting to different bands.
### Ubiquitous Computing
Ubiquitous computing intends to establish an environment full of computation and communication capabilities, emphasizing the concept of computation integrated with the environment, which provides computing resources for on-purpose communications. The key idea of ubiquitous computing is that IMs can utilize the multi-level computing resources of the cloud-edge-terminal framework optimally at any time by intelligent and automated scheduling of these resources, which forms a computing resource sharing model. In this way, multiple IMs could utilize the computing resources of the MECSs and cloud servers on demand to perform purpose analysis and feature extraction for semantic information in a distributed and collaborative way. Furthermore, IMs can rapidly fuse massive amounts of sensing data and predict the trajectory of communication targets in real time, through online training, to provide prior information for on-purpose communications, enabling fast communication and autonomous decision making.

Figure 3: Intelligent signal processing enabled by JCS.

**Challenge:** The key idea of ubiquitous computing is to enable IMs to utilize the multi-level computing resources optimally at any time. Thus, how to pool the computing resources of various heterogeneous infrastructures and optimize the scheduling of computing resources is the focus of further research.
The potential solution is to utilize supervised learning and unsupervised learning methods to make reasonable predictions for the usage of multi-level computing resources. Then reinforcement learning methods can be employed for efficient and rational scheduling of these computing resources.

## Digital Twin Edge Network

The digital twin is a real-time mirror of the physical entity in the digital world, which can truly reflect the operational state of the physical entity in digital space [13]. For this, with the powerful computation capacity of MECSs, the digital twin edge network can be constructed in the edge layer; that is, the corresponding virtual twins are established in each region for the IMs within the coverage of the BS in this region. A mirror of the region covered by this BS is built from physical space to digital space, to realize situation analysis and real-time decision making based on the digital space. Specifically, the regional control center in the edge layer can use the virtual twins to efficiently analyze, diagnose, simulate, and control the physical edge network based on the massive data provided by the virtual twins, thus realizing the lifecycle management of the IMs and the edge network. In addition, the MECS can maintain and continuously update the customized semantic information library required by an IM based on the digital data provided by the virtual twin corresponding to this type of IM. In this concept, a tightly coupled topology between digital space and physical space based on various sensing data can be constructed. Then, based on this topological relationship, the IM could conduct purpose analysis rapidly and transmit data via directional beams to the targets, thus communication links can be quickly established to achieve on-purpose communications.

**Challenge:** In the JCSC-OMC network, the data is analyzed in a wide range of dimensions, including performance, capacity, energy consumption, quality, cost, efficiency, and so on. Each dimension involves a variety of different situations, making the data collection and modeling more difficult. Accordingly, how to efficiently collect such large-scale, multi-dimensional data sets, and then analyze and model the collected data sets, is the key issue to be addressed in the process of constructing the digital twin edge network. The potential solution is to build a unified shared data warehouse, which could simplify the data collection and modeling process.

## On-Purpose Networking

The ability of on-purpose communications will be unleashed with the assistance of sensing data. The introduction of JCSC technology during the networking process can discover potential communication targets instantly, which will present accurate network topology information for rapid and optimal decision making based on machine purpose [14]. Thus, the transmission of beacon data and the sensing of the time-varying network topology can be realized simultaneously, which significantly reduces the communication overhead and the network entry time. In this way, the problems of hidden terminals and temporary blindness of communication can be effectively addressed. Further, wireless resources can be allocated in advance according to network status and machine purpose, accelerating the networking process of IMs. More importantly, the on-purpose selection of relay nodes and adjustment of routing schemes can be realized based on the accurate sensing of network topology, thus the JCSC-OMC network will be endowed with greater flexibility and intelligence.
**Challenge:** The a priori information endows the network with a more powerful sensing ability. However, a critical issue raised in on-purpose networking is to differentiate the identities of multiple IMs. The potential solution is to design a mapping entity between the sensing and communication domains, which can provide the Internet Protocol (IP) address of the matching neighboring node given its physical characteristics, and vice versa.

## Performance Evaluation

Based on our existing works [12, 14, 15], the orthogonal frequency division multiplexing (OFDM) waveform and the orthogonal frequency division multiple access (OFDMA) technique are utilized in the JCSC system, and we present three case studies to demonstrate the benefits of our proposed JCSC-OMC paradigm.

## Fast Topology Construction

IMs can obtain rich prior information about their surroundings from onboard sensors and the MECS. With the aid of ubiquitous computation capabilities, IMs can perform purpose analysis and quickly discover potential communication targets. Then IMs can directly exchange data by transmitting directional beams using the JCS technique, which significantly accelerates the process of topology construction. Fig. 4 shows the required time duration for topology construction as the number of IMs increases. The sensing beam width is set as \(\pi/6\). The communication radius is set as 200 m, and the network coverage area is 2000 m \(\times\) 80 m. As shown in Fig. 4, with the aid of prior information from sensing data, the efficiency of topology construction is greatly improved, resulting in a reduction of 49.2 percent in the number of required time slots when there are 100 nodes. Note that omnidirectional antennas can be regarded as directional antennas with only one beam. Thus, the discussion and analysis based on directional antennas in this article are still applicable to omnidirectional antennas.
## Semantics-Aware Information Exchange

Images or videos captured by embedded cameras can be further processed by onboard computing units to extract features of targets (e.g., other vehicles, pavement marks). Then structural metadata or text descriptions are transmitted as auxiliary information to deliver road information. In this article, the semantic extraction network is based on a DenseNet backbone, which is pre-trained with the ImageNet, Cityscapes, and CamVid datasets. Cityscapes and CamVid are taken from a vehicle perspective and, thus, suitable for IVN scenarios. The cropped resolution is 1024 \(\times\) 512 for Cityscapes and 960 \(\times\) 720 for CamVid.
We use mini-batch stochastic gradient descent with a momentum of 0.95. The warmup strategy is adopted for the first 1000 iterations. As shown in Fig. 5, compared with transmitting a raw image, the spectrum efficiency is improved by ten times through semantics-aware information exchange. Note that by attending to various regions and their relations, a single inference generates a few text segments from one image in parallel, requiring no more than one second.

## Semantics-Aided Driving Capability Enhancement

The adoption of machine semantics can significantly reduce the amount of communication data transmission. Thus, in the JCSC-OMC network, more wireless resources can be used for sensing. Fig. 6 illustrates the sensing performance when transmitting different communication data volumes (corresponding to the three data volumes shown in Fig. 5). Note that for each vehicle, a time-division JCS signal is adopted in the onboard JCSC system, and the radar mutual information is utilized to measure the sensing performance of this system [15]. The transmit power of the onboard JCSC system is 10 W and the carrier frequency is 28 GHz with a bandwidth of 800 MHz. The antenna gain of the transceiver is 18 dBi, and the transmission time of the signal is 0.03 s. It can be seen that when transmitting semantic information on purpose, compared with transmitting raw data, the sensing capability of the JCSC system is significantly enhanced, thus providing higher quality sensing data for the IMs and the RSU. Further, the IMs can quickly transmit their own sensing data in the form of machine semantics to the RSU for geographical data fusion at MECSs, to achieve regional autonomy. Besides, the RSU can also transmit its sensing data to IMs in the form of machine semantics to provide vehicle-infrastructure cooperation services, such as over-the-horizon awareness. In this way, real-time decision making could be improved.

Figure 4. Time duration for topology construction.

Figure 5. Raw data exchange versus semantic information exchange.

## Conclusion

In this article, we proposed a joint communication, sensing and computation enabled on-purpose machine communication framework, employing machine semantics as the interactive information flow. More specifically, we discussed related concepts, including system entropy, machine purpose, and machine semantics. We outlined the on-purpose communication framework and pinpointed its enabling technologies and challenges. Numerical results verified the feasibility and benefits of our proposed framework. We hope that our work will spur interest and open new directions for intelligent and efficient 6G networks.
Driven by the vision of "advanced connectivity" with everything toward 6G, making full use of the multidisciplinary intelligence of networked machines enables a paradigm shift in wireless communication design and improves system efficiency. This article proposes an on-purpose machine communication framework in which the interactive information flow is realized on the basis of machine semantics. The framework is enabled by joint communication, sensing and computation (JCSC) technology. Naturally, the wide adoption of on-purpose machine communication requires the development of the concept of machine purpose, fast and concise networking strategies, and semantics-aware information exchange mechanisms during task-oriented cooperation. Accordingly, this article discusses the open challenges that may underpin this technology. Simulation results show that the proposed framework significantly reduces network overhead and improves communication efficiency.
2305.19629
Measuring and Predicting the Quality of a Join for Data Discovery
We study the problem of discovering joinable datasets at scale. We approach the problem from a learning perspective relying on profiles. These are succinct representations that capture the underlying characteristics of the schemata and data values of datasets, which can be efficiently extracted in a distributed and parallel fashion. Profiles are then compared, to predict the quality of a join operation among a pair of attributes from different datasets. In contrast to the state-of-the-art, we define a novel notion of join quality that relies on a metric considering both the containment and cardinality proportion between join candidate attributes. We implement our approach in a system called NextiaJD, and present experiments to show the predictive performance and computational efficiency of our method. Our experiments show that NextiaJD obtains greater predictive performance than that of hash-based methods, while being able to scale up to larger volumes of data.
Sergi Nadal, Raquel Panadero, Javier Flores, Oscar Romero
2023-05-31T07:54:47
http://arxiv.org/abs/2305.19629v1
# Measuring and Predicting the Quality of a Join for Data Discovery

###### Abstract.

We study the problem of discovering joinable datasets at scale. We approach the problem from a learning perspective relying on profiles. These are succinct representations that capture the underlying characteristics of the schema and data values of datasets, which can be efficiently extracted in a distributed and parallel fashion. Profiles are then compared to predict the quality of a join operation among a pair of attributes from different datasets. In contrast to the state-of-the-art, we define a novel notion of join quality that relies on a metric considering both the containment and cardinality proportion between join candidate attributes. We implement our approach in a system called NextiaJD, and present experiments to show the predictive performance and computational efficiency of our method. Our experiments show that NextiaJD obtains greater predictive performance than that of hash-based methods, while being able to scale up to larger volumes of data.

**Artifact Availability:** The source code, data, and/or other artifacts are available at [https://www.essi.upc.edu/dim/NextiaJD/](https://www.essi.upc.edu/dim/NextiaJD/).

## 1. Introduction

Data discovery is the broad process of navigating a large set of data sources in order to find relevant datasets and meaningful relationships among them (Sergi et al., 2016; Chen et al., 2017). Discovery and integration of datasets is nowadays a largely manual and arduous task that consumes up to 80% of a data scientist's time (Kang et al., 2017). This only gets aggravated by the proliferation of large repositories of heterogeneous data, such as _data lakes_ (Kang et al., 2017) or open data-related initiatives (Kang et al., 2017). Due to the unprecedented large-scale volumes of heterogeneous data sources, manual data discovery becomes an unfeasible task that calls for automation (Nadal et al., 2018).

In this paper, we focus on the problem of discovering joinable attributes among datasets in a data lake. As an illustrative example of the challenges we face, take the reference dataset (\(D_{ref}\)) depicted in Table 1. Assume we have a collection of other datasets available in the same data lake such as those depicted in Table 3. In such a setting, we aim at finding joinable combinations of pairs of attributes from the reference dataset to all the rest. A first observation is that purely schema-based methods, such as LogMap (Kang et al., 2017), would fail to propose the combination \(D_{ref}.Country=D_{1}.X\) due to the lack of embedded semantics in the schema of \(D_{1}\). Thus, we must also take into account the data values. Note, however, that checking only data values might result in proposing the combination \(D_{ref}.Schengen=D_{2}.Discount\), which is definitely not relevant for analysis. Furthermore, given positive pairs (i.e., likely to be meaningfully joinable), such as \(D_{ref}.Country=D_{1}.X\) and \(D_{ref}.Country=D_{2}.Country\), there should be a clear criterion to rank them (i.e., suggest which one is _better_). Ranking is relevant in data lake scenarios, where independent data files must be crossed. Most of the time, in such scenarios, the cardinality of such files is not excessively large, but their order (i.e., number of columns / attributes) tends to be. Consequently, current approaches tend to propose too many joinable pairs of attributes, which is overwhelming for the end-user validating them.
The problem of finding joinable attributes among datasets is nowadays a topic of major interest for the data management community (Chen et al., 2017; Chen et al., 2017). We distinguish three approaches: _comparison by value, comparison by hash and comparison by profile_. Table 2 overviews recent contributions. Comparison by value relies on auxiliary data structures such as inverted indices or dictionaries to minimize the lookup cost. Alternatively, comparison by hash expects that the signatures of values under locality-sensitive hashing schemes will collide in the same bucket, also employing index structures for efficient threshold-based lookups. Comparison by profile methods leverage profiles extracted from datasets and their attributes, which are used to predict whether a pair of attributes will join.

\begin{table} \begin{tabular}{|c|c|c|} \hline **Country** & **Happiness score** & **Schengen** \\ \hline Mexico & 6.595 & N \\ \hline Spain & 6.354 & Y \\ \hline United States & 6.892 & N \\ \hline France & 6.592 & Y \\ \hline \end{tabular} \end{table} Table 1. \(D_{ref}\) - Happiness score per country in 2019

\begin{table} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{|c|}{**Search accuracy**: Exact \(\rightarrow\) Approximate} \\ \hline Comp. by value & Comp. by hash & Comp. by profile \\ \hline \multicolumn{3}{|c|}{**Algorithmic complexity**: Expensive \(\rightarrow\) Efficient} \\ \hline \end{tabular} \end{table} Table 2. Overview of approaches by technique, arranged according to accuracy and algorithmic complexity

### Data discovery at scale

Unfortunately, as we experimentally show in Section 6, the state-of-the-art in data discovery does not meet the expectations for large-scale scenarios. Unlike traditional relational databases, these are characterized by _a)_ a wide heterogeneity among datasets (e.g., large differences in the number of attributes and / or their cardinalities); _b)_ a massive number of datasets; and _c)_ the presence of a variety of topics, or domains. Overall, these distinguishing features, which we discuss as follows, render current solutions ineffective due to their inability to scale up and the low quality of the rankings they provide.

**Inability to scale-up.** Solutions that yield exact results (i.e., comparison by value) quickly suffer from scalability problems. Indeed, assuming that the sets of values for a pair of attributes \(A\), \(B\) are maintained in memory as dictionaries, the complexity of computing their containment (i.e., the inclusion coefficient) is \(O(min(|A|,|B|))\). Alternatively, those solutions that discover joinable pairs with a bounded error (i.e., comparison by hash) require the construction and maintenance of index structures for efficient lookup. This is a task that becomes highly demanding in terms of computing resources on large-scale datasets. In fact, as we have empirically observed and discuss in Section 6, the available implementations on comparison by hash fail to handle datasets of a few GBs overall. Another drawback of such approaches is that the estimation of similarity measures like containment is highly imprecise when the cardinality (i.e., the number of distinct values) of an attribute is comparatively larger than the other's [26], which is a common characteristic in real-world large-scale applications.
As a result, the precision of current approaches is highly affected by the large number of false positives, which is worsened in large-scale settings.

**Low quality in rankings.** Comparison by hash and profile solutions aim at predicting set-based measures such as the inclusion coefficient (i.e., containment), denoted \(C\), or the Jaccard index, denoted \(J\), using locality-sensitive hashing techniques such as MinHash [6] or random projection [8] to determine joinability among pairs of attributes [5]. Hence, a pair of attributes will be ranked higher if the estimated overlapping of their instance sets is also high. Note, however, that such _syntactic_ definition does not discern pairs of attributes from different domains (e.g., there exist several musical bands that use city or state names), leading to a large number of false positives. While it might be feasible to manually discern such false positives on a handful of datasets, this task is unattainable at scale.

In order to showcase the detrimental impact of using such measures to determine joinability, we designed an experiment collecting 138 datasets from open repositories such as Kaggle and OpenML1. Precisely, we devised a heterogeneous collection of datasets covering different topics, which yielded a total of 110,378 candidate pairs of textual attributes, where 4,404 of those have a containment higher than or equal to 0.1. We, then, manually labeled such pairs as either _semantic_ or _syntactic_, distinguishing whether a pair of attributes share common values and, respectively, do or do not refer to the same concept in a shared domain. Such ground truth is publicly available to the community on the paper's companion website.

Footnote 1: Repository available at [https://mydisk.cs.upc.edu/s/GetwMfT2vsGqbX](https://mydisk.cs.upc.edu/s/GetwMfT2vsGqbX)

As shown in Figure 1, even for high values of \(C\) and \(J\), the number of _syntactic_ pairs, and thus false positives, represents a substantial part. Then, in Table 4, we show the performance metrics (i.e., precision, recall and F-score) when considering different threshold values over \(C\) and \(J\) to discern syntactic (i.e., below the threshold) and semantic pairs (i.e., above the threshold) on the ground truth. We can observe that, overall, \(C\) has a lower precision than \(J\), indicating that it has a higher false positive rate. In contrast, \(J\) has a lower recall than \(C\), indicating that it has a higher false negative rate. Yet, overall, we can observe that in terms of F-score, which denotes the accuracy of the classifier, both metrics clearly fail at the task of identifying semantic pairs of joinable attributes.
\begin{table} \begin{tabular}{|c|c|c||c|c|c|} \hline \multicolumn{3}{|c||}{\(C>0.5\)} & \multicolumn{3}{c|}{\(J>0.5\)} \\ \hline \(P=0.59\) & \(R=0.72\) & \(F=0.65\) & \(P=0.74\) & \(R=0.39\) & \(F=0.51\) \\ \hline \multicolumn{3}{|c||}{\(C>0.6\)} & \multicolumn{3}{c|}{\(J>0.6\)} \\ \hline \(P=0.67\) & \(R=0.56\) & \(F=0.61\) & \(P=0.79\) & \(R=0.29\) & \(F=0.43\) \\ \hline \multicolumn{3}{|c||}{\(C>0.7\)} & \multicolumn{3}{c|}{\(J>0.7\)} \\ \hline \(P=0.64\) & \(R=0.44\) & \(F=0.52\) & \(P=0.78\) & \(R=0.20\) & \(F=0.32\) \\ \hline \multicolumn{3}{|c||}{\(C>0.8\)} & \multicolumn{3}{c|}{\(J>0.8\)} \\ \hline \(P=0.63\) & \(R=0.38\) & \(F=0.47\) & \(P=0.75\) & \(R=0.17\) & \(F=0.28\) \\ \hline \multicolumn{3}{|c||}{\(C>0.9\)} & \multicolumn{3}{c|}{\(J>0.9\)} \\ \hline \(P=0.61\) & \(R=0.30\) & \(F=0.40\) & \(P=0.80\) & \(R=0.16\) & \(F=0.26\) \\ \hline \multicolumn{3}{|c||}{\(C=1.0\)} & \multicolumn{3}{c|}{\(J=1.0\)} \\ \hline \(P=0.60\) & \(R=0.25\) & \(F=0.35\) & \(P=0.75\) & \(R=0.11\) & \(F=0.20\) \\ \hline \end{tabular} \end{table} Table 4: Performance metrics of using different thresholds over \(C\) and \(J\) to identify semantic pairs. \(P\), \(R\) and \(F\) denote, respectively, precision, recall and F-score.

Figure 1: Proportion of syntactic and semantic pairs for different ranges of containment (left) and Jaccard (right) values. Labels in each bar denote the count of pairs in the range.

\begin{table} \begin{tabular}{|c|c|c|} \hline \(X\) & \(Y\) & \(Z\) \\ \hline Spain & 47M & 2020 \\ \hline United States & 330M & 2020 \\ \hline Mexico & 123M & 2020 \\ \hline Germany & 83M & 2020 \\ \hline \end{tabular} \end{table} Table 3: \(D_{1}\), one of the candidate datasets available in the data lake.

### Computing and accurately predicting the quality of a join

The discussion above highlights the limitations of current data discovery approaches over large-scale scenarios. Indeed, the first challenge lies in the definition of a similarity measure such that it prioritizes pairs of attributes with large overlapping and shared domains, as an indicator of a semantic relationship. Next, the second challenge is that of efficiently computing such a measure at scale. As previously discussed, value and hash-based data discovery approaches do not scale well. Alternatively, comparison by profile methods are a better fit, since they rely on the detection of similarities or discrepancies between profiles. Working with summaries instead of data values is much more efficient from a complexity point of view.
Yet, despite the clear performance benefits of profile-based approaches, there is nowadays a large gap in the trade-off regarding the quality of their results mainly due to the adoption of rather basic profiles (e.g. (Kang et al., 2017)) that do not accurately describe the underlying data or representative profiles (e.g. (Kang et al., 2017)) that are used to discover a binary class (e.g. joinable or non-joinable). In order to overcome these issues, in this paper, we extend our previous vision paper (Kang et al., 2017) and propose a novel approach to data discovery which aims to cover the gap generated by the low predictive performance of profile-based methods, as well as the limited precision and scalability of hash-based systems on large data lakes. We, first, propose a novel metric to measure the quality of a join. Opposite to the state-of-the-art, mostly focused on containment or Jaccard distance, we also consider the cardinality proportion between attributes as an indicator of a higher join quality. This allows us to get rid of a substantial amount of false positives, reducing the number of pairs to analyze. This is specially relevant in large-scale settings, where as shown in Figure 1, the number of candidate pairs is too large to manually disregard false positives. Second, we propose a novel learning-based method based on profiles to discover joinable attributes for large-scale data lakes. Our assumptions apply to scenarios where data is denormalized and file formats embed tabular data (i.e., not nested). We rely on state-of-the-art relational data profiling techniques (Beng et al., 2015) to compute informative profiles for datasets. This task, which can be done offline and parallelized over distributed computing frameworks (e.g., Apache Spark), allows us to extract and model the underlying characteristics of attributes. Next, profiles are compared in order to predict their expected join quality. We show that our method is generalizable and that proposes a meaningful ranking of pairs of attributes based on the predicted join quality. We further show that our model is generalizable for data lake-alike settings. **Contributions.** We summarize our contributions as follows: * We introduce a quantitative metric for join quality, which considers containment and cardinality proportion between attributes. * We learn a highly accurate and general model to predict and efficiently rank candidate pairs of joinable attributes. * We extensively evaluate our approach to show it is scalable and outperforms the current state-of-the-art, yielding higher predictive performance results. **Outline.** The rest of the paper is structured as follows. We discuss related work and introduce the formal background, respectively in Sections 2 and 3. Next, in Section 4 we present the definition of the proposed quality metric, while Section 5 shows our approach to predicting it. Section 6 presents exhaustive experiments to showcase the effectiveness and scalability of our approach. We finally conclude our paper and present future work in Section 7. ## 2. Related Work In this section, we survey related work for each category identified in Table 2. **Comparison by value.** SilkMoth (SilkMoth, 2017) proposes a method to generate signatures from a subset of attribute tokens. To select an optimal subset, it uses an heuristic. Such signatures are used in an inverted index to prune the search space. 
Then, a verification step is required on the remaining candidates to discard those that do not hold for a certain similarity measure. This approach supports edit distance and Jaccard coefficient as similarity measures. It assumes all signatures fit in memory. JOSIE (SilkMoth, 2017) proposes to optimize the number of comparisons by scanning only the required values. Tokens are extracted from each attribute to create a dictionary and an inverted index. A ranked list is built from the \(k\) most relevant candidate tables with highest containment, where attributes ranked at the top will have a larger number of common values. PPJoin (Kang et al., 2017) performs a different optimization by using prefix filtering to avoid computing similarity values for all possible values. This reduces the number of comparisons and hence improve efficiency. However, an inverted index requires a large space in memory. This approach proposes a similarity measure which combines tokens and characters. **Comparison by hash.** MinHash (Han et al., 2017) uses the minwise hash function from the LSH collection, with a collision probability equal to the Jaccard similarity. This requires, for every value, to compute the MinHash signature \(K\) times, where \(K\)'s magnitude is in the hundreds. This approach has a major limitation on performance, as well as a bias towards small sets introduced by the Jaccard similarity. To overcome this, under the observation that MinHash can be optimized providing a Jaccard threshold, LSH Ensemble (Shen et al., 2016) proposes to use containment similarity and convert it to a Jaccard threshold. It focuses on finding attributes with a high containment similarity, that is, to cover as many values as possible. For efficient indexing, LSH Ensemble partitions the sets according to the set size. GB-KMV (SilkMoh et al., 2017) aims to reduce the number of false positives generated by LSH Ensemble. Further, it considers that additional information (e.g., attribute cardinalities and value frequencies) to offer better performance in estimating the containment similarity. Another approach that aims to tackle the drawbacks of MinHash is Lazo (Kang et al., 2017). Here, the Jaccard similarity is redefined to consider set cardinalities, which allows to estimate the containment similarity. Instead of computing \(K\) times a hash function, Lazo implements the One Permutation Hashing (OPH) technique, hashing data values only once. A distinguishable feature of Lazo is that rehashing the entire dataset collection is not required when a new one is introduced. Aurum (Kang et al., 2017), represents relations between datasets and their attributes in a graph data structure (i.e., the _Enterprise Knowledge Graph_). In this graph, attribute nodes are related if their hashing signatures, generated from their instances, are similar in an LSH index. To determine similarity it uses two metrics: Jaccard (i.e., MinHash similarity) and cosine (i.e., TF-IDF). Finally, the approach presented by D3L (Dalalal et al., 2017), also employs LSH indexes generated from four kind of features (i.e., attribute names, values, formats and domain distributions) as well as word embeddings generated from the values. Hence, attribute joinability is based on the composition of the similarity of each of such five features. **Comparison by profile.** LSD (Kang et al., 2015) proposes a multi-strategy learning approach to automatically find related attributes among XML files. 
It applies multiple learner modules, where each module exploits different kind of information, either from schema or data values. Such predictions are combined to weigh each learner. LSD also exploits domain integrity constraints and user feedback. FlexMatcher (Marcher, 2015) extends LSD with more data types. A relevant aspect is that it considers pattern classifiers to filter data values. A limitation is that every time a discovery process is to be performed it requires to train new models providing a training sample of attributes that might join with the specific attribute. A different approach is SIMUBC (Kang et al., 2015), which aims to detect pairs of attributes sharing common values. SIMUBC extracts 28 kinds of metadata from attributes such as tokens, phonetic values or representatives. Such metadata are used to train Random Forest and Multilayer Perceptron models to predict whether two attributes are join candidates. To improve performance, weights are assigned to each model to compute the final prediction. A limitation of this work is that it requires to train the models each time a data discovery process is started. Then, PEXESO (Pexes et al., 2017) presents an approach to create high dimensional vectors (i.e., embeddings) from each record of a column. Then, attributes can be efficiently compared to each other via such embeddings. A major limitation of PEXESO is that it requires indexing in memory the embeddings of the complete data lake. To alleviate this issue, the paper presents partitioning techniques. The approach presented by DLN (Dalalal et al., 2017) is that of building a ML model to find join relationships from Microsoft's data lake Cosmos. The paper argues that two metadata-based features suffice to build a high quality model. On the one hand, the first feature is an embedding-enhanced column name similarity, for which they use word embeddings trained on software domain data and calculate the cosine similarity. On the other hand, the second feature is the column-name uniqueness, where the maximum ITF (inverse term frequency) of all individual tokens is used. Unfortunately, the paper does not provide reproducibility information or ground truth to compare with. Finally, WarpGate (Pexes et al., 2017), is a prototype system that targets data discovery over cloud data warehouses by applying an embedding approach. These are built from columns with the objective that joinable columns will be closer in the higher dimensional embedding space. One of the limitations of WarpGate is, however, the runtime complexity of building such embedding spaces for large datasets. ## 3. Preliminaries Here, we introduce the formal background of our approach. ### Measuring the quality of a join In this subsection, we fix the data model and formalize metrics for join quality. **Data repositories and datasets.** A data repository \(\mathcal{D}\) is a finite nonempty set of dataset names \(\{D_{1},\ldots,D_{m}\}\), where each \(D_{i}\) has a fixed arity \(n_{i}\). Let \(A\) be a set of attribute names, then each \(D_{i}\in\mathcal{D}\) is associated to a tuple of attributes denoted by \(att(D_{i})\). Henceforth, we will assume that \(\forall i,j:i\neq j\to att(D_{i})\cap att(D_{j})=\emptyset\) (i.e., relations do not share attribute names), which if required can be done prefixing attribute names with their relation name. Then, we use \(att(\mathcal{D})\) to refer to the set \(\{att(D_{1})\cup\ldots\cup att(D_{m})\}\). 
Then, let \(V\) be a set of values, a tuple \(t\) in \(D_{i}\) is a function \(t:att(D_{i})\to V\). For any dataset \(D_{i}\), \(tuples(D_{i})\) denotes the set of all tuples of \(D_{i}\). **Joinable pairs.** Given two distinct datasets \(D_{a},D_{b}\) and a pair of attributes \(\langle a,b\rangle\), such that \(a\in att(D_{a})\) and \(b\in att(D_{b})\) and value-sets \(A\) and \(B\), we say the pair \(\langle a,b\rangle\) is _syntactically joinable_ if \(A\cap B\neq\emptyset\). Following the definition from (Kang et al., 2015), we also say that such pair of attributes is _semantically joinable_ if they are _syntactically joinable_ and there exists a function \(h:A\to B\) denoting semantic equivalence between attributes (i.e., both refer to the same concept). In practice, attributes with a semantic relationship also have a syntactic one. When this is not satisfied, as happens for the pair _Country_ (in Table 1) and _Nation_ (in Table 3c), we refer to this relationship as _semantic non-syntactic_. **Quantifiable measures for joinability.** A quantifiable way to define that a pair of attributes are joinable is by using set-based coefficients (i.e., coefficients over well-defined collections of distinct values). As earlier discussed, two of the most commonly used coefficients are the inclusion coefficient (\(C(A,B)\)) and Jaccard coefficient (\(J(A,B)\)), which are formalized as: \[C(A,B)=\frac{|A\cap B|}{|A|}\qquad\qquad J(A,B)=\frac{|A\cap B|}{|A\cup B|}\] Note Jaccard similarity is symmetric, thus it can be biased towards smaller sets. Oppositely, containment measures the relative size of the intersection of two sets over the size of one. Hence, such measure is asymmetric. **Join quality metric.** A join quality metric is a function \(Q:(\mathcal{A},\mathcal{B})\rightarrow\mathbb{R}\) from the set of all sets of values \(\mathcal{A}\) and \(\mathcal{B}\) to the set of real numbers, such that, for any set of values \(A,B,C,D\) it holds that \(Q(A,B)>\mathcal{Q}(C,D)\) if the pair \(\langle A,B\rangle\) is semantically joinable and the pair \((C,D)\) is syntactically joinable. Note this generalization allows to include containment and Jaccard as join quality functions, yet it does not consider the possibility to rank joinable pairs of the same kind. This is due to the fact that there is no consensus in the literature on which metric is best. Hence, one of the contributions of this paper is on the proposal of a novel metric to determine the quality of a join. ### Predicting the quality of a join Since the computation of a join quality measure \(\mathcal{Q}\) might be unattainable at scale, we also consider the join discovery problem as a predictive task. **Profiles.** A unary profile \(P_{u}\) for an attribute \(A\), referred as \(P_{u}(A)\) is a set of meta-features \(\{m_{1},\ldots,m_{n}\}\). Each \(m_{i}\) is a summary or statistic about the structure or content of \(A\) (e.g., number of distinct values). We also consider binary profiles, which are meta-features that denote characteristics of a relationship between pairs of attributes. Hence, we define a binary profile \(P_{b}\) for a pair of attributes \(A,B\), denoted \(P_{b}(A,B)\), as a set of meta-features (e.g., Levenshtein distance between attribute names). 
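For reference, the two set-based coefficients defined above can be computed directly over the distinct value sets of a pair of attributes. The following minimal sketch (ours, for illustration; NextiaJD precisely avoids this kind of value-level computation at scale) evaluates them on the running example, where \(D_{ref}.Country\) and \(D_{1}.X\) share three values:

```python
def containment(a: set, b: set) -> float:
    """Inclusion coefficient C(A, B): fraction of A's distinct values also found in B."""
    return len(a & b) / len(a) if a else 0.0

def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient J(A, B): intersection over union of the distinct values."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

country = {"Mexico", "Spain", "United States", "France"}   # D_ref.Country
x = {"Spain", "United States", "Mexico", "Germany"}        # D_1.X
print(containment(country, x))   # 0.75 -- asymmetric, measured from Country's side
print(jaccard(country, x))       # 0.6  -- symmetric
```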
**Join quality prediction function.** A join quality prediction function is a function \(\mathcal{P}:(\overline{P_{u}},\overline{P_{u}}^{\prime},\overline{P_{b}}) \rightarrow\mathbb{R}\) from a triple defined by the set of all unary profiles, from both the reference and candidate attributes, and the set of all binary profiles, to the set of real numbers, such that, for any set of values \(A,B,C,D\) if \(\mathcal{Q}(A,B)>\mathcal{Q}(C,D)\) then \(\mathcal{P}(P_{u}(A),P_{u}(B),P_{b}(A,B))>\mathcal{P}(P_{u}(C),P_{u}(D),P_{b}(C,D))\). **Problem statement.** We now formalize the predictive join discovery problem. The goal is to discover a ranking (i.e., a partially-ordered set) of equi-join predicates based on their predicted join quality. **Definition 3.1** (Discovery-by-attribute).: Let \(A_{q}\) be a query attribute, \(D_{ref}\) a reference dataset where \(A_{q}\in att(D_{ref})\), and \(\mathcal{D}\) a data repository where \(D_{ref}\notin\mathcal{D}\); obtain a partially-ordered set of joinable pairs \(R\) of the form \(R=\{(A_{q},A_{1}),\ldots,(A_{q},A_{n})\}\), where \(A_{1},\ldots,A_{n}\in att(\mathcal{D})\) such that \(\forall(A_{q},A_{i}),\langle A_{q},A_{j}\rangle\in R:(A_{q},A_{i})>\langle A_ {q},A_{j}\rangle\implies\mathcal{P}(P_{u}(A_{q}),P_{u}(A_{i}),P_{b}(A_{q},A_{i }))\geq\mathcal{P}(P_{u}(A_{q}),P_{u}(A_{j}),\)\(P_{b}(A_{q},A_{j}))\). The remainder of the paper is devoted to _a)_ present a novel instantiation of join quality metric (see Section 4), and _b)_ present an approach to instantiate the join quality prediction function (see Section 5). ## 4. A novel metric to measure the quality of a join Here, we define a novel metric to determine the quality of a join. ### On the cardinality proportion's role Unlike the state-of-the-art, which mainly uses containment and Jaccard similarities to decide the degree of joinability among pairs of attributes, we aim to define a metric to measure the expected join quality. As shown in Table 4, containment yields better results to determine the joinability of a pair attributes with respect to Jaccard. Yet, we make the observation that datasets on a data lake do not relate to each other as in a relational database. In such scenarios, it is common to find datasets with few data values in common. In order to exemplify this idea, let us consider the datasets depicted in Table 5. In this example, the reference dataset \(D_{ref}\) might be joined with any of the two candidate datasets \(D_{1}\) (at the EU level) and \(D_{2}\) (worldwide). Current approaches would propose both as positive pairs, since they yield the same containment. However, we aim at distinguishing the join quality between them and use their _cardinality proportion_ for that purpose, which is defined by the following expression: \[K(A,B)=\frac{min(|A|,|B|)}{max(|A|,|B|)}\] Let us, then consider the following cardinalities corresponding to the city attributes (which are the only relevant ones to join): \(|City|=\) 8,124, \(|Unit|=\) 20,000 and \(|Name|=\) 54,500, respectively belonging to \(D_{ref}\), \(D_{1}\) and \(D_{2}\). We use the cardinality proportion as a measure to infer whether their data granularities are similar. In this sense, the joinable attribute in \(D_{2}\) is much larger than that in \(|D_{ref}|\) and yields a worse proportion compared to \(D_{1}\), and thus should be ranked lower. 
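A small sketch of the cardinality proportion on the cardinalities just mentioned (again ours, for illustration only):

```python
def cardinality_proportion(card_a: int, card_b: int) -> float:
    """K(A, B) = min(|A|, |B|) / max(|A|, |B|), computed over distinct-value counts."""
    return min(card_a, card_b) / max(card_a, card_b)

city, unit, name = 8_124, 20_000, 54_500
print(round(cardinality_proportion(city, unit), 2))   # 0.41 -> similar granularity (D_1)
print(round(cardinality_proportion(city, name), 2))   # 0.15 -> D_2 penalised despite its containment
```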
Importantly, we assume these datasets store independently generated events, and such a big difference in their cardinality indicates they embed different semantics or sit at different granularity levels. In general, such situations are a source of false positives for current solutions, especially when considering small tables.

### Definition of an empirical metric

We, then, follow an empirical approach to define the join quality metric. That is, from a set of quantifications drafted from a sample, we aim to derive a measure that can generalize to a population (Kang et al., 2017). In our setting, from the manually-labeled ground truth used to conduct the experiment depicted in Figure 1, we observe how the containment and cardinality proportion values relate for both syntactically and semantically-joinable pairs. Indeed, as shown in Figure 2, the rationale that the cardinality proportion is a valid metric to discern false positives provided by the containment holds. As observed, most of the syntactically-joinable pairs have a value of \(C<0.5\), yet for those that are above such threshold most of them lie in the range \(K<0.5\). In other words, we can identify semantically-joinable pairs when both \(C\) and \(K\) are closer to 1. From such observations in the ground truth, a naive approach to discern syntactically and semantically-joinable pairs would be that expressed by the following expression, which would yield 1 if a pair is semantically-joinable and 0 otherwise: \[Q(A,B)=\begin{cases}1,&\text{if }C(A,B)\geq\frac{1}{2}\text{ and }K(A,B)\geq\frac{1}{2}\\ 0,&\text{otherwise}\end{cases}\]

Yet this metric is still limited, as is the case for the other ones in the state-of-the-art, in its ability to rank pairs that are of the same kind. We, hence, generalize and propose a multi-class metric to determine the quality of a join based on multiple quality levels \(L\) (i.e., degrees of joinability), as defined by the following expression: \[Q(A,B,L)=\max\Big\{\,i\in[0,\ldots,L]\;\Big|\;C(A,B)\geq 1-\frac{i}{L}\,\wedge\,K(A,B)\geq\frac{1}{2^{i}}\,\Big\}\]

Figure 2. Distribution of syntactically and semantically-joinable pairs in the ground truth over \(C\) and \(K\)

The intuition of \(Q(A,B,L)\) is that of defining equally-sized buckets for \(C(A,B)\) and constraining them using \(K(A,B)\). Figure 3 depicts the areas defined by such quality metric for the case of \(L=2\) (which is equivalent to \(Q(A,B)\) earlier introduced) and \(L=4\). The latter uses richer labels, for this case denoted as _Low, Medium, Good_, and _High_ for the different levels of quality, respectively 0.25, 0.5, 0.75 and 1 (note that we ignore the value 0 in the chart). Hence, a pair labeled _High_ will always be ranked higher than one labeled _Good_. The interpretation of such metric is that of measuring the quality of the join's output from \(A\)'s perspective under equi-join semantics (i.e., under the semantics of left-semijoin conjunctive queries). That is, how the number of elements in \(A\) will be reduced after doing a join operation using \(B\). Take again the example from Table 5 and consider the following containment values \(C(City,Unit)=0.8\) and \(C(City,Name)=0.95\), and cardinality proportion values \(K(City,Unit)=0.40\) and \(K(City,Name)=0.15\). Note that, although the containment is very high in both cases, the constraint on cardinality proportions allows us to rank the first in a higher position, denoting a more interesting join result.
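The following sketch gives one possible reading of the multi-level metric (our interpretation, not the authors' reference implementation): levels are walked from the strictest to the loosest thresholds, and the first satisfied level is mapped to the quality values 1, 0.75, 0.5, 0.25 mentioned above for \(L=4\):

```python
def discrete_join_quality(c: float, k: float, levels: int = 4) -> float:
    """One reading of Q(A, B, L): the strictest level i (small i = tight thresholds)
    with C >= 1 - i/L and K >= 1/2**i determines the quality (L - i + 1) / L,
    and 0 is returned when no level is reached."""
    for i in range(1, levels + 1):
        if c >= 1 - i / levels and k >= 1 / 2 ** i:
            return (levels - i + 1) / levels
    return 0.0

# The example from the text: both pairs have high containment, but the cardinality
# proportion demotes the worldwide candidate below the European one.
print(discrete_join_quality(0.80, 0.40))   # City-Unit -> 0.75
print(discrete_join_quality(0.95, 0.15))   # City-Name -> 0.5, ranked lower
```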
To showcase the benefit of considering the cardinality proportion to complement the containment, consider the following extreme case, which is not uncommon in large-scale automated data discovery scenarios. Consider the two datasets depicted in Table 6, the former (\(D_{s}\)) listing the opening hours of stores and the latter (\(D_{m}\)) movies and their directors. Let us assume \(|Store|=3\) and \(|Movie|\) is above a million movies. Solutions exclusively considering containment would qualify the pair \(\langle D_{s}.Store,D_{m}.Movie\rangle\) as a high quality join, given the \(2/3\) containment (which would be even higher if we consider approximate joins). Yet, this is clearly a false positive. Considering the cardinality proportion, our quality metric would penalize its ranking and assign a low value to this candidate pair. Note that the Jaccard index is able to deal with this case, yet as shown in Table 4 it generally has a high false negative rate, deeming it suboptimal.

### A continuous metric for join quality

Despite the ability of \(Q(A,B,L)\) to assign quality levels beyond binary ones, the output of such metric is discrete, and thus the rankings it generates are bounded by \(L\). In order to overcome this issue, we aim to provide a generalization of such discrete metric into a continuous one \(Q(A,B)\) in the continuous range \([0,1]\). The approach we follow is that of plotting the empirical distribution function (_edf_) of \(Q(A,B,L)\) for some value of \(L\), and then fitting a continuous probability distribution to it. Empirical distributions are functions that describe a sample of observations for a given variable, while probability distributions are functions that yield the probability of different possible values for a variable. We distinguish between probability density functions (_pdf_), which yield the probability that a random variable takes on a certain value, and cumulative distribution functions (_cdf_), which yield the probability that a random variable takes on a value less than or equal to a certain value. Thus, we are precisely interested in the latter. The challenge is to determine what distribution function better fits our metric.

**Fitting a Gaussian distribution.** The best-known kind of probability distribution is the Gaussian distribution, also known as the normal distribution, \(\mathcal{N}(\mu,\sigma^{2})\). The _pdf_ of the normal distribution is defined by the following expression: \[pdf(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{\frac{-(x-\mu)^{2}}{2\sigma^{2}}}\] The _cdf_ of the normal distribution is defined as: \[cdf(x;\mu,\sigma^{2})=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\frac{x-\mu}{\sigma}}e^{\frac{-t^{2}}{2}}dt\]

\begin{table} \begin{tabular}{|c|c|c|} \hline **Store** & **Open** & **Close** \\ \hline Chicago & 8am & 18pm \\ \hline Casablanca & 9:30am & 20pm \\ \hline Paris & 9am & 18pm \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline **Movie** & **Director** \\ \hline An American in Paris & G. Gershwin \\ \hline Casablanca & M. Curtiz \\ \hline Chicago & R. Marshall \\ \hline... &... \\ \hline \end{tabular} \end{table} Table 6. \(D_{s}\), store schedules (left), and \(D_{m}\), movies and their directors (right)

Figure 3. Areas identified by the quality metric for \(L=2\) (left) and \(L=4\) (right)

Table 5. A reference dataset (\(D_{ref}\)) and two candidate datasets to be joined.
\(D_{1}\) is curated with extensive data at the European level, while \(D_{2}\) is curated at the worldwide level with fewer details.

Yet, since we are working with a two-dimensional function for \(C\) and \(K\), we must consider the case of the multivariate normal distribution. Assuming \(C\) and \(K\) are independent, the _cdf_ truncated in the range \([a,b]\) for a two-dimensional Gaussian over \(C\) and \(K\) (i.e., \(cdf_{CK}(c,k)\)) is equal to the product of the individual _cdf_s of \(C\) and \(K\) (i.e., \(cdf_{C}(c)\,cdf_{K}(k)\)), which is defined as: \[cdf_{CK}(c,k)=\frac{\Phi\left(\frac{c-\mu_{C}}{\sigma_{C}}\right)-\Phi\left(\frac{a-\mu_{C}}{\sigma_{C}}\right)}{\Phi\left(\frac{b-\mu_{C}}{\sigma_{C}}\right)-\Phi\left(\frac{a-\mu_{C}}{\sigma_{C}}\right)}\cdot\frac{\Phi\left(\frac{k-\mu_{K}}{\sigma_{K}}\right)-\Phi\left(\frac{a-\mu_{K}}{\sigma_{K}}\right)}{\Phi\left(\frac{b-\mu_{K}}{\sigma_{K}}\right)-\Phi\left(\frac{a-\mu_{K}}{\sigma_{K}}\right)}\] where \(\Phi(x)\) is the univariate _cdf_ of the standard normal distribution, defined by the following expression: \[\Phi\left(x\right)=\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right)=\frac{1}{2}\left(1+\frac{2}{\sqrt{\pi}}\int_{0}^{\frac{x}{\sqrt{2}}}e^{-t^{2}}\,dt\right)\]

Hence, the challenge reduces to finding the mean values \(\mu_{C}\) and \(\mu_{K}\), which determine the offset of the distribution, and the covariance matrix \(\Sigma=\left(\begin{smallmatrix}\sigma_{C}&0\\ 0&\sigma_{K}\end{smallmatrix}\right)\), which determines the shape of the distribution, such that it best fits the discrete quality function \(Q(A,B,L)\). To do so, we consider the Wasserstein metric, which is a distance function defined over probability distributions. Hence, the goal is to find the optimal values that minimize the Wasserstein distance. Such task, which can be evaluated in a brute-force manner over a discrete range of values, yielded the following values: \(\mu_{C}=0\), \(\mu_{K}=0.44\), and \(\Sigma=\left(\begin{smallmatrix}0.19&0\\ 0&0.28\end{smallmatrix}\right)\). Figure 4 depicts the resulting fit of the _cdf_ of the normal distribution over \(C\) and over \(K\) using the previously introduced values, superimposed over the _cdf_ of \(Q(A,B,4)\).

**Continuous quality metric with strictness levels.** Since the previously presented values for the mean and covariance matrix have been derived from our ground truth, we finally consider the possibility of including a certain degree of variability in the resulting quality metric. To that end, we consider the _strictness_ score \(s\) and consider three possible values: _relaxed_ (i.e., \(s=0\)), _balanced_ (i.e., \(s=0.25\)) and _strict_ (i.e., \(s=0.5\)). Such score will vary the value of \(\mu_{C}\) (i.e., the mean of the containment value), which, as can be observed in Figure 3, is the dominating factor in our metric. Hence, the resulting continuous join quality metric \(Q(A,B,s)\) is defined as: \[Q(A,B,s)=cdf\left(\mu_{C}+s,\Sigma[0][0],0,1,C(A,B)\right)\cdot cdf\left(\mu_{K},\Sigma[1][1],0,1,K(A,B)\right)\]

Figure 5 shows the application of such metric for the different strictness values considered over our ground truth.

## 5. Predicting the quality of a join

In this section, we describe our approach to predict the join quality metric introduced in Section 4.
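As a reference for the quantity the model must learn to predict, the continuous metric can be evaluated as the product of two truncated normal _cdf_s. The sketch below is our reading (not the NextiaJD code); it relies on scipy's truncnorm and assumes the diagonal entries of \(\Sigma\) act as standard deviations.

```python
from scipy.stats import truncnorm

def trunc_cdf(x: float, mu: float, sigma: float, lo: float = 0.0, hi: float = 1.0) -> float:
    """CDF of a normal(mu, sigma) truncated to [lo, hi], evaluated at x."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return float(truncnorm.cdf(x, a, b, loc=mu, scale=sigma))

def continuous_join_quality(c: float, k: float, strictness: float = 0.25) -> float:
    """Q(A, B, s): fitted values mu_c = 0, mu_k = 0.44, Sigma = diag(0.19, 0.28) from
    the text; the strictness s in {0, 0.25, 0.5} shifts the containment mean."""
    mu_c, mu_k = 0.0, 0.44
    sigma_c, sigma_k = 0.19, 0.28          # assumption: treated as standard deviations
    return trunc_cdf(c, mu_c + strictness, sigma_c) * trunc_cdf(k, mu_k, sigma_k)

for s in (0.0, 0.25, 0.5):                 # relaxed, balanced, strict
    print(s, round(continuous_join_quality(0.8, 0.4, s), 3))
```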
### Attribute profiling Profiles are composed of meta-features that represent the underlying characteristics of attributes. Such profiles are the key ingredient for high accuracy predictions, thus we require an exhaustive summary of attributes. Hence, we base our profiling on state-of-the-art relational data profiling techniques (Bang et al., 2017). We distinguish meta-features corresponding to unary and binary profiles. We further distinguish the former into meta-features modeling cardinalities, value distribution and syntax. A summary of all the implemented meta-features is depicted in Table 7. Although for space reasons it has not been included here, we validated by means of a principal component analysis the relevance of all meta-features towards meaningful profiling of attributes. **Cardinalities.** These provide a broad view of an attribute. Uniqueness, which is computed dividing the number of distinct values by the cardinality, allows us to quantify the extent of duplicated values. A uniqueness smaller than 1 indicates there exists duplicate values, hence we can identify which attributes have high redundancies. We can also detect incompleteness, which is determined by the number of missing values divided by the cardinality. This produces a value in the range \([0,1]\), where values closer to 1 denote the attribute has a high percentage of missing values. Finally, entropy, also referred as _diversity index_, measures the variety of data in an attribute. **Value distribution.** Here, we exploit information in a fine-grained manner by using a frequency distribution of the attribute values, either by count or percentage. Despite its simplicity, the frequency distribution of column values exposes insightful characteristics, such as how often a value occurs. We compute frequency metrics (e.g., in the form of octiles), and descriptive statistics (e.g., mean, standard deviation, etc.) to characterize the distribution of the data. We also take a sample of the ten most frequent values. **Syntax.** This category of unary metadata describes the patterns of data. These meta-features include information regarding the length of values in characters, such as the length of the longest and shortest value, and the average length. We also compute information regarding syntactic consistency, such as format and data type. This aids to give meaning to the attribute's content. We also infer the data type of an attribute, in a broad and fine-grained manner. Broad data types are generic descriptions such as numeric, alphabetic, alphanumeric, dateTime, non-alphanumeric, etc. However, we also extract its fine-grained type to extract what content is the attribute representing. To this end, we use regular expressions that allow us to model usernames, phrases, phones, etc. In order to improve the quality of meta-features in this category, we preprocess values to lowercase, remove accents and special symbols. Figure 4. Resulting _cdf_s that minimize the Wasserstein distance over the _cdf_ of \(C\) (left) and \(K\) (right) for \(Q(A,B,4)\) **Binary meta-features.** We also extract meta-features regarding pairs of attributes. We use Levenshtein distance to obtain the similarity between pairs of attribute names (Levenshtein, 2017). This is normalized by the length of the largest string. ### Comparing profiles Before comparing profiles and due to the fact attribute meta-features are represented in different magnitudes, we normalize them to guarantee a meaningful comparison. 
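For illustration, a minimal pandas sketch (ours; NextiaJD computes profiles with Apache Spark) of a few of the cardinality meta-features described above:

```python
import math
import pandas as pd

def unary_profile(col: pd.Series) -> dict:
    """A handful of unary cardinality meta-features, computed with pandas."""
    n = len(col)
    distinct = col.nunique(dropna=True)
    freqs = col.value_counts(normalize=True, dropna=True)
    return {
        "cardinality": distinct,                                  # number of distinct values
        "uniqueness": distinct / n if n else 0.0,                 # distinct values / rows
        "incompleteness": col.isna().sum() / n if n else 0.0,     # missing values / rows
        "entropy": -sum(p * math.log2(p) for p in freqs),         # diversity of the values
        "constancy": float(freqs.iloc[0]) if len(freqs) else 0.0, # share of the top value
    }

print(unary_profile(pd.Series(["Spain", "Spain", "France", None, "Mexico"])))
```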
As shown in Table 7, we consider a large number of meta-features and indicate which of them require normalization. \begin{table} \begin{tabular}{|c|c|l|c|} \hline **Category** & **Meta-feature** & **Description** & **Norm.?** \\ \hline \multirow{4}{*}{Cardinalities} & Cardinality & Number of distinct values within an attribute & Yes \\ \cline{2-4} & Uniqueness & Measures if the attribute contains unique values & No \\ \cline{2-4} & Incompleteness & Measures the number of missing values & No \\ \cline{2-4} & Entropy & Measures the variety of an attribute & Yes \\ \hline \multirow{4}{*}{Value distribution} & Average frequency & The average value of the frequency distribution count & Yes \\ \cline{2-4} & Min frequency & The minimum value of the frequency distribution count & Yes \\ \cline{2-4} & Max frequency & The maximum value of the frequency distribution count & Yes \\ \cline{2-4} & SD frequency & The standard deviation of the frequency distribution count & Yes \\ \cline{2-4} & Octiles & The octiles (quantiles) of the frequency distribution in percentages & No \\ \cline{2-4} & Min perc frequency & The minimum value of the frequency distribution in percentages & No \\ \cline{2-4} & Max perc frequency & The maximum value of the frequency distribution in percentages & No \\ \cline{2-4} & SD perc frequency & The standard deviation of the frequency distribution in percentages & No \\ \cline{2-4} & Constancy & Frequency of the most frequent value divided by number of rows & No \\ \cline{2-4} & Frequent words & The 10 most frequent words & No \\ \cline{2-4} & Soundex & The 10 most frequent words in soundex representation & No \\ \hline \multirow{4}{*}{Syntactic} & Data type & The data type of the attribute (i.e., numeric, alphanumeric, alphabetic, nonAlphanumeric, or datetime) & No \\ \cline{2-4} & Specific type & The specific type of the attribute (i.e., phone, email, url, ip, username, or phrases) & No \\ \cline{2-4} & Percentage data type & The percentage for each data type detected in the data values & No \\ \cline{2-4} & Percentage specific type & The percentage for each specific type detected in the data values & No \\ \cline{2-4} & Longest string & The number of characters in the longest string & Yes \\ \cline{2-4} & Shortest string & The number of characters in the shortest value in the attribute & Yes \\ \cline{2-4} & Average string & Average length of the strings in terms of characters & Yes \\ \cline{2-4} & Number words & The number of words in the attribute & Yes \\ \cline{2-4} & Average words & The average number of words in the attribute & Yes \\ \cline{2-4} & Min words & The minimum number of words in the attribute & Yes \\ \cline{2-4} & Max words & The maximum number of words in the attribute & Yes \\ \cline{2-4} & SD words & The standard deviation of the number of words in the attribute & Yes \\ \hline \multirow{2}{*}{Pair metadata} & Best containment & The containment score assuming all distinct values are covered & No \\ \cline{2-4} & Flipped containment & Containment assuming all distinct values are covered divided by max cardinality & No \\ \cline{2-4} & Name distance & Measures the difference of two attribute names using Levenshtein distance & No \\ \hline \end{tabular} \end{table} Table 7. Meta-features composing a profile Figure 5. Continuous quality in the ground truth for \(Q(A,B,0)\), \(Q(A,B,0.25)\), and \(Q(A,B,0.5)\) Two common normalization techniques are Min-Max and Z-score.
The former consists of rescaling data into the range \([0,1]\). This technique, however, is sensitive to outliers, which will lie on the boundaries. In contrast, Z-score normalization overcomes this issue by rescaling values to have a mean of 0 and a standard deviation of 1. For this reason, we use Z-score to normalize meta-features. The following equation depicts the normalization process, which requires the mean and standard deviation of each meta-feature, computed over all the attributes to be compared. \[Z\text{-}score=\frac{(x-\mu)}{\sigma}\] After normalizing each meta-feature, we compute the distances among pairs of attributes. Here, we also compute binary meta-features. The result of this stage is a set of distance vectors \(D\) where, for each \(D_{i}\), values closer to 0 denote high similarities. **Training a regression model.** Once the distance vectors are computed, we can train the predictive model. Precisely, the goal is to train a model so that, for a pair of attributes \(A,B\), its prediction (i.e., \(\mathcal{P}(P_{u}(A),P_{u}(B),P_{b}(A,B))\)) is highly correlated with the true join quality (i.e., \(\mathcal{Q}(A,B,s)\)). For that, we fixed the intermediate value of \(s=0.25\), and evaluated several regression models performing a hyperparameter grid search to minimize the error. Precisely, we built models based on: linear regression, ensemble learning, support vector regression, and multilayer perceptrons (MLP). The resulting best model, in terms of highest coefficient of determination \(R^{2}\), was an MLP with ReLU activation, a single hidden layer with dimension 100, and a value of \(\alpha=0.0001\). This model provided an \(R^{2}=0.8831\). ## 6. Experimental Evaluation In this section, we present the evaluation of our approach. On the one hand, we evaluate and compare the ability of the model to generalize and discover quality joins with respect to state-of-the-art solutions. On the other hand, we evaluate and compare its scalability. In order to present transparent experiments and guarantee the reproducibility of results, we created an informative companion website2. There, it is possible to find all necessary resources (i.e., source code, datasets, and detailed instructions) needed to reproduce the presented results. Footnote 2: [https://www.essi.upc.edu/dim/nestriajid/](https://www.essi.upc.edu/dim/nestriajid/) We have implemented the profiling phase of NextiaJD as an extension of Apache Spark. The runtime methods are implemented as new operators over the structured data processing library Spark-SQL. We leverage the Catalyst optimizer to efficiently compute the profiles and compare them. The predictive model is available as a _Pickle_, which allows it to be easily reused in other projects. ### Generalizability of our approach Since we empirically derive a join quality metric from ground truth, the first question is whether it is applicable to other data lake settings. Thus, the objective of this first experiment is to show the generalizability of our approach. To that end, we perform experiments on datasets independently generated from those selected in our ground truth and assess our metric's performance. **Methodology.** We consider the GitTables dataset (Kang et al., 2018) as the data repository to evaluate our approach. GitTables is a large-scale corpus of 1M relational tables extracted from CSV files in GitHub repositories. Each table is provided as a Parquet file, which comes with associated metadata. On average, tables have 25 columns and 209 rows.
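As a side note on reproducibility, the regression-model training described in Section 5 can be sketched as follows. The use of scikit-learn is an assumption (consistent with the reported hyperparameters and the _Pickle_ artifact), and `distance_vectors` and `quality_labels` are placeholder inputs rather than the released data.

```python
import pickle
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Placeholder inputs: one distance vector per candidate attribute pair,
# and the continuous quality Q(A, B, 0.25) as regression target.
distance_vectors = np.random.rand(1000, 30)   # hypothetical shape
quality_labels = np.random.rand(1000)

scaler = StandardScaler()                      # Z-score normalization
X = scaler.fit_transform(distance_vectors)

# Hyperparameters as reported: one hidden layer of 100 units, ReLU, alpha=1e-4
model = MLPRegressor(hidden_layer_sizes=(100,), activation="relu",
                     alpha=0.0001, random_state=0, max_iter=500)
model.fit(X, quality_labels)
print("R^2 on training data:", model.score(X, quality_labels))

with open("nextiajd_model.pkl", "wb") as f:    # persisted as a Pickle
    pickle.dump((scaler, model), f)
```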
Of special interest for us are the _semantic annotation types_ each attribute has, which tell the probability a column is similar to a semantic type in a knowledge base (e.g., DBpedia). As described in (Kang et al., 2018), these are annotated using a pretrained FastText model, and the annotation corresponds to the most similar semantic type. We, then, constructed a ground truth (i.e., calculated our join quality metric with a strictness level \(s=0.25\)) considering those attributes with a semantic similarity equal to 1.0 that have the same semantic type (e.g., clusters of _author_, _city_, _name_, etc.). Since evaluating all possible combinations of attributes over the complete corpus of GitTables is unattainable (i.e., it is quadratic in the total number of attributes), we reduced the search space to the abstraction_tables and dwarf_tables datasets, which represent the ones with highest volume. **Results.** We evaluated our predictive model on the annotated GitTables ground truth, which yielded a mean squared error (MSE) value of 0.04, and a mean absolute error (MAE) value of 0.13. Since the predictive model was trained from an independently-generated ground truth, we consider these error scores to be highly acceptable. Then, we also evaluated the ability of the predictive model to discern syntactically and semantically-joinable pairs following the same approach as in Table 4. Hence, Table 8 depicts the predictive perfomance metrics of using different thresholds to determine semantically-joinable pairs over the GitTables dataset. From these results, we can conclude that the precision of our metric is monotonically increasing with higher threshold values, while maintaining a constant high recall. ### Comparison with the state-of-the-art (schema matching) Here, we experimentally compare our quality metric and its associated predictive model. **Methodology.** In this experiment we rely on the Valentine experimental suite for data discovery (Zhu et al., 2017). Valentine provides the implementation of seven schema matching algorithms (which are base for data discovery methods), together with a collection of ground \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{\(Q\)\textgreater{} 0.5} \\ \hline \(P=0.40\) & \(R=0.87\) & \(F=0.55\) & \(A=0.97\) \\ \hline \multicolumn{4}{|c|}{\(Q\)\textgreater{} 0.6} \\ \hline \(P=0.49\) & \(R=0.85\) & \(F=0.62\) & \(A=0.98\) \\ \hline \multicolumn{4}{|c|}{\(Q\)\textgreater{} 0.7} \\ \hline \(P=0.58\) & \(R=0.85\) & \(F=0.69\) & \(A=0.98\) \\ \hline \multicolumn{4}{|c|}{\(Q\)\textgreater{} 0.8} \\ \hline \(P=0.68\) & \(R=0.87\) & \(F=0.76\) & \(A=0.98\) \\ \hline \multicolumn{4}{|c|}{\(Q\)\textgreater{} 0.9} \\ \hline \(P=0.76\) & \(R=0.88\) & \(F=0.82\) & \(A=0.99\) \\ \hline \end{tabular} \end{table} Table 8. Performance metrics of using different thresholds over \(Q\) to predict semantically-joinable pairs. \(P\), \(R\), \(F\), and \(A\) denote, respectively, precision, recall, F-score, and accuracy truth annotated datasets. The datasets consider datasets manually annotated as well as automatically annotated. We, precisely, focus on their _joinable_ scenario in Valentine, which is equivalent to our definition of semantically-joinable pairs. We extended Valentine to incorporate new matchers for _a)_ our discrete quality metric with \(L=4\); _b)_ our continuous quality metric with a strictness level of \(s=0.25\); and _c)_ the learned predictive model of _b)_. 
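Both the GitTables evaluation in Table 8 and the Valentine matchers described next binarize a predicted quality score with a threshold. A minimal sketch of this evaluation step is given below; variable names and the synthetic inputs are illustrative.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

def evaluate_at_threshold(q_pred, y_true, threshold):
    """Binarize predicted join quality and score it against binary labels."""
    y_pred = (np.asarray(q_pred) > threshold).astype(int)
    return {
        "P": precision_score(y_true, y_pred, zero_division=0),
        "R": recall_score(y_true, y_pred, zero_division=0),
        "F": f1_score(y_true, y_pred, zero_division=0),
        "A": accuracy_score(y_true, y_pred),
    }

# e.g. sweep the thresholds used in Table 8
q_pred = np.random.rand(100)           # hypothetical predictions
y_true = (q_pred > 0.55).astype(int)   # hypothetical labels
for t in (0.5, 0.6, 0.7, 0.8, 0.9):
    print(t, evaluate_at_threshold(q_pred, y_true, t))
```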
Since Valentine's ground truth is labeled on a binary class (i.e., joinable or non-joinable), we considered variants of the previously discussed matchers with different threshold values. We extended Valentine's plots, which are represented as boxplots, with our performance metrics under both instance-based and hybrid scenarios. Since our approach does not consider data transformations to calculate the join quality, we did not consider the scenario where Valentine incorporates noise in the instances and schema, and focused only on the verbatim setting. **Results.** We, first report on the _recall at size of ground truth_ evaluation metric. This is a metric that, if the size of ground truth is \(N\), then only the first \(N\) proposed matches will be considered for the recall calculation. This is a metric that allows to assess the quality of the ranking a method provides. Figure 6, depicts the obtained results by means of boxplots. On the left-hand side, we encounter the instance-based methods, which are those that compute their rankings based on the data values of each attribute. Hence, here, we present the results for our discrete quality metric (i.e., _DQ_), and the continuous one with three thresholds (i.e., _CQ_). Here, we can observe that the quality of the ranking provided by all our matcher variants is equal or better than state-of-the-art instance-based methods. This is another indicator of how our proposed metric generalizes, since here it is evaluated on an independently labeled ground truth. Yet, more interestingly, we shift our attention to the right-hand side of Figure 6, depicting the recall at size of ground truth for hybrid approaches (i.e., those that make use of schema and auxiliary data structures previously generated from the instances). Here, we compare the matchers encapsulating our predictive models (i.e., _ML_) with 0.5, 0.6 and 0.7 as threshold to determine joinability. As it can be observed, our predictive approach yields better rankings than most competitors only being in pair with EmbDI (Chen et al., 2017), a method that finds relationships between columns comparing their previously trained embeddings. Next, we compare the effectiveness of the approaches by means of the _precision at 50%_ performance metric. This is a metric that computes the precision only over the top 50% of the provided ranking, which allows to determine the level of false positives a matcher yields. As shown in Figure 7, both our calculated instance-based and predicted metrics clearly outperform the state-of-the-art in terms of precision results. This is, all pairs returned by our approach are indeed true positives. Surprisingly, approaches such as EmbDI which have high recall scores, clearly fail on effectively identifying true positive matches. Concerning our approach, the obtained precision results are extremely good, with only few outliers provided by the predicted metric. After examining Valentine's benchmark, we believe that this is due to the fact that its ground truth datasets have a lack of heterogeneity and are not representative of a data lake environment, as opposite to the GitTables scenario evaluated in Section 6.1. ### Comparison with the state-of-the-art (data discovery) Here, we provide a comparison with state-of-the-art data discovery systems which are not part of the Valentine suite. Precisely, we compare Nextiajp with Aurum (Navarro et al., 2017), D3L (Chen et al., 2017), and WarpGate (Floran et al., 2017). These are systems whose course code is openly available. 
Unfortunately, no other solutions in the realm of approximate data discovery (i.e., those based on hashes or profiles) could be considered, due to the fact that _a)_ the code is openly available but it cannot be reproduced due to outdated dependencies or it is hardcoded for specific environments, or _b)_ they assume unrealistic data lake scenarios expecting all datasets to be loaded and pre-processed in a relational database. Figure 6. Recall at size of ground truth scores provided by Valentine **Methodology.** For evaluation purposes, we collected 139 independent datasets from those used for the ground truth. We further divided such datasets into 4 testbeds: extra-small \(\text{XS}\) (\(0-1\) MB), small \(S\) (\(1-100\) MB), medium \(M\) (\(100\) MB \(-1\) GB) and large \(L\) (\(>1\) GB). Respectively, these contain 28, 46, 46, and 19 datasets, and 159, 590, 600, and 331 string attributes. For each testbed, manual ground truth was generated with a quality level according to the quality of the join (from 1 to 4), where those above 3 are considered semantically-joinable. Such testbeds are also available on the paper's companion website. The ability to rank joinable pairs on all the systems was assessed over top-K queries, where we report _Precision@K_ and _Recall@GroundTruth_. The former scores the ratio of true positives over K, and the latter scores the ratio of true positives over the size of the ground truth for each query. To that end, for each candidate pair, we measure the quality metric with a _balanced_ strictness level. **Results.** We build on the openly available experimental results reported in [10], where Aurum, D3L and WarpGate are assessed over testbeds \(S\) and \(M\), since these are the largest ones in terms of datasets and ground truth defined in this paper. Then, in Figures 8 and 9 we report, respectively, top-K results for these two testbeds (note AU stands for Aurum, WG for WarpGate, and NXjD for NextiaJD). From the obtained results, we can see that NextiaJD consistently outperforms the alternative systems in terms of precision and recall as the value of \(k\) increases. Thus, NextiaJD provides more accurate rankings than the alternatives with respect to the available ground truth. It is important, however, to remark that, due to the lack of community consensus on how to quantify joinability for data discovery, the compared systems do not target the same objective metric. Thus, in this experiment we have followed the definition proposed in this paper, which is motivated by large-scale data lake scenarios where data are highly denormalized and file formats embed tabular data. Under this metric, NextiaJD outperforms the alternative systems in terms of precision and recall for top-K queries. Figure 8. Precision and recall scores on testbed \(S\) for state-of-the-art data discovery systems Figure 7. Precision at 50% scores provided by Valentine Figure 9. Precision and recall scores on testbed \(M\) for state-of-the-art data discovery systems ### Scalability As previously discussed, our most intensive task with regard to computational resources is the generation of attribute profiles from datasets. Thus, here we perform stress tests of this component by means of two experiments. **Methodology.** We generated a 10GB base CSV file with 5 columns and systematically extended it in batches of 10GB, up to 60GB. Next, we followed a similar strategy with regard to columns.
We created a 20GB base file that was systematically extended with a new duplicate column each time. The resulting files were stored in a Hadoop HDFS cluster, using the default block size and replication parameters. In order to simulate a realistic large-scale scenario, we also converted each input file to Apache Parquet3 format. Parquet is an specialized hybrid layout that fragments data into row group partitions (i.e., physically-independent columns), while it also embeds numerous statistics to optimize queries. To evaluate the scalability of our approach in terms of distribution, we compute the profiling runtime using \(n\) Spark workers (cf. HDFS standoates) in the range \(1\ldots 3\). Footnote 3: [https://parquet.apache.org/](https://parquet.apache.org/) **Results.** Figure 10 depicts the profiling runtime for an increasing file size. Regardless of the number of workers and data format used, the runtime linearly scales with the file size. As expected, profiling Parquet files are much more efficient than CSV ones (i.e., an approximate 4x to 5x speed-up), as we can benefit from statistics and compression when computing certain meta-features. As depicted in Figure 11, we can also observe that the profiling runtime trend scales linearly with the number of columns. Similarly to the previous case, using Parquet significantly speeds up the process, here with a 7x to 8x factor. Finally, in Table 9, we show the average profile size per each testbed from those earlier presented. The disk usage is proportional to both the number of rows and columns. Although the number of columns is the leading factor for the profiling size, the dataset cardinality impacts on the size of some meta-features (e.g., frequent words, soundex, etc.). In any case, the profile sizes are reasonable. Thus, they can be precomputed offline and stored together with the dataset as metadata. The only exception to this would be binary meta-features. As final conclusion, these experiments show that our approach does not introduce any blocking factor hindering parallelism and can fully benefit from it. ## 7. Conclusions and Future Work We have presented a novel learning-based approach for data discovery on large-scale repositories of heterogeneous, independently created datasets. Our work is motivated by (i) the poor predictive performance of current profile-based solutions, and (ii) the inability to scale-up of hash-based approaches, as well as their low precision, which is undesirable for large-scale scenarios. In order to overcome these limitations, we propose a scalable method yielding good precision, and grounded on a novel qualitative definition of join quality. We have experimentally shown that our approach outperforms the state-of-the-art data discovery approaches in terms of predictive and runtime performance. As future work, we look for adapting our approach to detect semantic non-syntactic join relationships (i.e., requiring some simple transformation on the values before joining). Based on such predictions, the system should be able to propose the required transformations to join. ###### Acknowledgements. The authors are grateful to Tianji Cong for kindly providing the raw experimental results for the experiment reported in Section 6.3. This work was partly supported by the DOGO4ML project, funded by the Spanish Ministerio de Ciencia e Innovacion under project PID2020-117191RB-100 / AEI/10.13039/501100011033. 
Sergi Nadal is partly supported by the Spanish Ministerio de Ciencia e Innovacion, as well as the European Union - NextGenerationEU, under project FJC2020-045809-1 / AEI/10.13039/501100011033.
We study the problem of discovering joinable datasets at scale. We tackle this problem with a learning-based approach over profiles, which are succinct representations that capture the characteristics of a dataset's schema and data values, and which can be extracted efficiently in a distributed and parallel fashion. These profiles are compared to predict the quality of a join operation between two attributes from different datasets. Unlike previous methods, we define a new notion of join quality that takes into account both the containment and the cardinality proportion between join candidate attributes. We implement this approach in a system called NextiaJD and run experiments showing its predictive performance and computational efficiency. The experiments show that NextiaJD achieves better predictive performance than hash-based methods and scales to larger data volumes.
2307.16363
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration
Deep learning has achieved remarkable success in the field of bearing fault diagnosis. However, this success comes with larger models and more complex computations, which cannot be transferred into industrial fields requiring models to be of high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared to other lightweight state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog. This scheme involves the customized quantization and designing programmable logic gates for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to enhance the computational speed. To the best of our knowledge, this is the first instance of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental results reveal that our deployment scheme achieves over 200 times faster diagnosis speed compared to CPU, while achieving a lower-than-0.4\% performance drop in terms of F1, Recall, and Precision score on our independently-collected bearing dataset. Our code is available at \url{https://github.com/asdvfghg/BearingPGA-Net}.
Jing-Xiao Liao, Sheng-Lai Wei, Chen-Long Xie, Tieyong Zeng, Jinwei Sun, Shiping Zhang, Xiaoge Zhang, Feng-Lei Fan
2023-07-31T01:43:38
http://arxiv.org/abs/2307.16363v1
BearingPGA-Net: A Lightweight and Deployable Bearing Fault Diagnosis Network via Decoupled Knowledge Distillation and FPGA Acceleration ###### Abstract Deep learning has achieved remarkable success in the field of bearing fault diagnosis. However, this success comes with larger models and more complex computations, which cannot be transferred into industrial fields requiring models to be of high speed, strong portability, and low power consumption. In this paper, we propose a lightweight and deployable model for bearing fault diagnosis, referred to as BearingPGA-Net, to address these challenges. Firstly, aided by a well-trained large model, we train BearingPGA-Net via decoupled knowledge distillation. Despite its small size, our model demonstrates excellent fault diagnosis performance compared to other lightweight state-of-the-art methods. Secondly, we design an FPGA acceleration scheme for BearingPGA-Net using Verilog. This scheme involves the customized quantization and designing programmable logic gates for each layer of BearingPGA-Net on the FPGA, with an emphasis on parallel computing and module reuse to enhance the computational speed. To the best of our knowledge, this is the first instance of deploying a CNN-based bearing fault diagnosis model on an FPGA. Experimental results reveal that our deployment scheme achieves over 200 times faster diagnosis speed compared to CPU, while achieving a lower-than-0.4% performance drop in terms of F1, Recall, and Precision score on our independently-collected bearing dataset. Our code is available at [https://github.com/asdvfghg/BearingPGA-Net](https://github.com/asdvfghg/BearingPGA-Net). Bearing fault diagnosis, deep learning, model compression, field programmable gate array (FPGA) ## I Introduction Rotating machines, including turbines, pumps, compressors, fans, etc., are widely used in industrial fields [1, 2, 3]. Rolling bearings, whose primary roles are to support the whole machine and reduce friction, are crucial components in rotating machines. Statistics from authoritative institutes reveals that bearing failure accounts for approximately 45%-70% of all mechanical faults [4, 5]. Therefore, detecting bearing failure is an essential task in industry and civil fields. A timely and accurate detection can greatly enhance the reliability and efficiency of bearings, thereby reducing the enormous economic loss. Since bearing failures often exhibit unique abnormal vibration [6, 7, 8], the most common approach to diagnosing bearing faults is to install an acceleration sensor on mechanical surfaces to measure vibration signals, and then a diagnosis algorithm is applied to detect faulty signals. Over the past decade, deep learning methods, represented by convolutional neural networks (CNNs) and attention-based models, have been dominating the problem of bearing fault diagnosis [9, 10, 11, 12]. Despite the significant success achieved by deep learning methods, the growing model size renders them impractical in industrial settings, since deploying deep learning models often demands high-performance computers. But a factory usually has multiple rotating machines. For instance, in nuclear power plants, there are numerous rotating machines such as seawater booster pumps and vacuum pumps that require monitoring of their vibration signals [13]. Thus, installing computers for each machine will not only undesirably occupy space but also impede production due to excessive circuit connections. 
In fact, the field programmable gate array (FPGA), a class of low-power consumption, high-speed, programmable logic devices, is widely used in industrial fields. FPGA has high-speed signal processing capabilities such as time-frequency analysis, which can enhance the accuracy and real-time performance of fault detection [14, 15, 16]. By leveraging its parallel computing capabilities, FPGA dramatically increases the speed and energy efficiency, achieving over a thousand times improvement than a digital signal processor (DSP) in fault detection execution [17]. However, the FPGA suffers from limited storage space and computational resources, which brings huge trouble in bearing fault diagnosis, as it needs to process long signal sequences in a timely manner. In brief, to deploy fault diagnosis in practical scenarios, we need to address two challenges: designing lightweight but well-performed deep learning models and deploying these models on embedded hardware such as FPGAs without compromising the performance too much. Along this line, researchers have put forward lightweight deep models for bearing fault diagnosis, aiming to strike a balance between diagnostic accuracy and model complexity [18, 19, 20, 21]. Nevertheless, these lightweight models have not been really deployed on the embedded hardware, and whether or not the good performance can be delivered as expected remains uncertain. On the other hand, two types of hardware have been utilized to deploy CNNs, System-on-Chip (SoC) FPGA and Virtex series FPGA. The former, exemplified by ZYNQ series chips which integrate an ARM processor with an FPGA chip, have been employed to deploy a limited number of bearing fault diagnosis methods [22, 23]. These approaches rely on the PYNQ framework, which translates the Python code into FPGA-executed Verilog through the ZYNQ board. While the PYNQ framework reduces development time, the FPGA resources available for CNN acceleration are limited, reducing the processing time compared to FPGA boards. As for the latter, some prior researchers have designed FPGA-based accelerators for CNNs and successfully deployed CNNs on Virtex series FPGA [24, 25, 26]. However, these FPGAs, capable of deploying multi-layer CNNs, cost ten times higher than commonly used mid-range FPGAs (Kintex series). Consequently, widespread deployment is hindered due to excessive costs. To resolve the tension between model compactness and deployment, here we propose BearingPGA-Net: a lightweight and deployable bearing fault diagnosis network via decoupled knowledge distillation and FPGA acceleration. First, we apply the decoupled knowledge distillation (DKD) to build a lightweight yet well-performed network, named as BearingPGA-Net. Knowledge distillation (KD) is a form of model compression that transfers knowledge from a large network (teacher) to a smaller one (student) [27, 28], which is deemed as more effective than directly prototyping a small network. Knowledge distillation can be categorized into response-based KD (logit KD) and feature-based KD. Albeit previous studies have demonstrated the superiority of feature-based KD over logit KD for various tasks [29, 30], we select the logit KD. Our student network comprises only a single convolutional layer for deployment, and there are no hidden features for feature-based KD to distill. 
Recently, a novel approach called decoupled knowledge distillation has been proposed, which reformulates the traditional logit distillation by decomposing the knowledge distillation loss function into two separate functions. The training of these functions can be flexibly balanced based on the task [31]. The decoupling is a strong booster for the performance of logit KD, and we favorably translate it into our task. Second, we utilize Verilog to implement neural network accelerators. By devising parallel computing and module reuse, we deploy BearingPGA-Net and enhance its computing speed in a Kintex-7 FPGA. Specifically, we construct basic arithmetic modules with basic multiplication and addition units for the convolutional and fully-connected layers. To cater to the requirements of BearingPGA-Net, we fuse a ReLU and Max-pooling layer to further reduce computation. These computational modules are also translatable to other CNN-based tasks. Moreover, we design a tailored layer-by-layer fixed-point quantization scheme for neural networks, ensuring minimal loss of parameter accuracy in FPGA computations while also cutting the number of parameters by half. Notably, our FPGA framework fully leverages the computational resources of the Kintex-7 FPGA, which is a widely used mid-range FPGA in industrial fields. Compared to previous implementations via SoC FPGA and Virtex FPGA, BearingPGA-Net+Kittex FPGA achieves preferable power consumption, model compactness, and high performance, which is highly scalable in real-world fault diagnosis scenarios. In summary, our contributions are threefold: \(\bullet\) We build BearingPGA-Net, a lightweight neural network tailored for bearing fault diagnosis. This network is characterized by a single convolutional layer and is trained via decoupled knowledge distillation. \(\bullet\) We employ fixed-point quantization to compress the parameters of BearingPGA-Net by 50% and propose a CNN accelerators scheme, where we utilize parallel computing and module reuse techniques to fully leverage the computational resources of the Kintex-7 FPGA. \(\bullet\) Compared to lightweight competitors, our proposed method demonstrates exceptional performance in noisy environments, achieving an average F1 score of over 98% on CWRU datasets. Moreover, it offers a smaller model size, with only 2.83K parameters. Notably, our FPGA deployment solution is also translatable to other FPGA boards. ## II Related Works **1) Lightweight CNNs for bearing fault diagnosis.** Some lightweight networks were proposed for deployment in resource-constrained environments. Yao _et al._ introduced the stacked inverted residual convolution neural network (SIRCNN), comprising one convolutional layer and three inverse residual layers [18]. Similarly, Fang _et al._ developed the lightweight efficient feature extraction network (LFEE-NET), which is a feature extraction module conjugated with a lightweight classification module. However, despite the simple structure of the classification network, its feature extraction network is complex [19]. Other lightweight models incorporated a multi-scale model [20] or a self-attention mechanism [21], demonstrating the superior performance in handling few-shot or cross-domain issues. But these networks have not yet been deployed on embedded devices. Therefore, it remains unclear how much performance will be sacrificed when deployed. **2) Bearing fault diagnosis models for FPGAs.** There were only a small number of models successfully deployed on FPGAs. 
An FPGA-based multicore system was proposed for real-time bearing fault diagnosis using acoustic emission (AE) signals [32]. It designed a high-performance multicore architecture including 64 processing units running on a Xilinx Virtex-7 FPGA to support online processing, and using time-frequency analysis and support vector machine (SVM) for diagnosis [32]. Toumi implemented an envelope spectrum and multi-layer perceptron (MLP) structure on a ZYNQ-7000 FPGA, achieving around 90% accuracy in CWRU datasets [22]. Ji _et al._ used the knowledge distillation technique to train a single-layer convolutional neural student network and deployed it into a ZYNQ FPGA through parameter quantization, resulting in an approximate 8% improvement compared to training the network directly [23]. Despite these achievements, the task of deploying fault diagnosis models in FPGAs still have a large room for improvement, as their performance has not yet reached the level of the large model. Our idea is to combine CNNs and signal processing techniques so that it can achieve real-time and high-performance fault diagnosis in industrial fields. ## III Method As depicted in Fig. 1, prototyping BearingPGA-Net follows a two-step pipeline: i) training a lightweight BearingPGA-Net via decoupled knowledge distillation; ii) deploying the BearingPGA-Net into an FPGA. Notably, we devise a layer-by-layer fixed-point quantization method to convert the parameters (weights and bias) of PyTorch's CNN, which are originally in 32-bit floating format, into a 16-bit fixed-point format. Additionally, for online diagnosis, the measured signal is amplified by a signal conditioner and then converted into a digital signal by an Analog-Digital (AD) converter. Subsequently, the FPGA's FIFO (first in first out) performs clock synchronization and buffering, followed by executing convolutional neural network operations. Finally, the diagnosing result is displayed using four LEDs. ### _Training BearingPGA-Net_ The BearingPGA-Net is trained via decoupled knowledge distillation (DKD), which is an approach to training a small model (student) with a well-trained large model (teacher). It forces the student to emulate the output of the teacher. During the training process, the teacher model is trained first. Subsequently, its parameters are freezed to generate the outputs as new labels to train the student model. **1) Constructing teacher and student networks.** For the teacher network, we adopt a widely-used 1D-CNN architecture known as the WDCNN [33], which consists of six CNN blocks and one fully-connected layer. In our implementation, we adjust the number of weights in the fully-connected layer to fit the input data size. Additionally, we design a one-layer student network specifically for FPGA deployment (BearingPGA-Net). This student network comprises a single 1D-CNN layer, a ReLU activation layer and a Max-Pooling layer, followed by a fully-connected layer mapping the latent features to the logits. The structure information of BearingPGA-Net is shown in Tab. I. **2) Decoupled knowledge distillation.** Despite numerous forms of knowledge distillation, our method adopts the response-based KD, which utilizes the teacher model's logits for knowledge transfer. This is because the limited hardware resource in a low-cost FPGA cannot hold two or more convolutional layers. Then, there are no hidden features for feature-based KD to distill. 
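A PyTorch sketch of the student architecture (BearingPGA-Net) described above is given below. The input length of 1024 is inferred from the layer sizes reported in Tab. I (the 2,048-point segment reduced to a 1,024-point frequency-domain representation); this is an illustrative sketch rather than the released implementation.

```python
import torch
import torch.nn as nn

class BearingPGANet(nn.Module):
    """Single-convolution student network; layer sizes follow Tab. I."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv1d(1, 4, kernel_size=64, stride=8, padding=28)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool1d(kernel_size=2, stride=2)
        self.fc = nn.Linear(4 * 64, num_classes)   # 256 features -> 10 logits

    def forward(self, x):                          # x: (batch, 1, 1024)
        z = self.pool(self.relu(self.conv(x)))     # (batch, 4, 64)
        return self.fc(z.flatten(1))               # (batch, num_classes)

# e.g. a batch of 8 frequency-domain samples
logits = BearingPGANet()(torch.randn(8, 1, 1024))
print(logits.shape)                                # torch.Size([8, 10])
```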
In classical knowledge distillation, soft labels, which are the logits produced by the teacher, are deemed as distilled knowledge [27]. Soft labels are obtained by the softmax function converting the logits \(z_{i}\) of a neural network into the probability \(p_{i}\) of the \(i\)-th class: \[p_{i}=\frac{\exp{(z_{i}/T)}}{\sum_{j=1}^{C}\exp(z_{j}/T)}, \tag{1}\] where \(C\) represents the number of classes, and \(T\in\mathbb{R}^{+}\) serves as the temperature factor. When the temperature \(T\) is higher, the probability distribution over classes becomes smoother. A lower value of \(T\) (where \(T<1\)) sharpens the output, increasing the disparity in probability values of different classes. For classification, the cross-entropy (CE) loss is adopted to measure the difference between the probability of the predicted label \(p\) and the ground truth \(y\): \[\mathcal{L}_{CE}(y,p(z,T))=-{\sum_{i=1}^{C}}y_{i}\log{p_{i}}, \tag{2}\] and KL-Divergence measures the similarity between the probability labels of the teacher \(p^{\mathcal{T}}\) and the student \(p^{\mathcal{S}}\), \[\mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{\mathcal{S}}(z,T))=-{\sum_{i=1}^{C}}p _{i}^{\mathcal{T}}\log{\frac{p_{i}^{\mathcal{S}}}{p_{i}^{\mathcal{T}}}}. \tag{3}\] KD combines two loss functions: \[\mathcal{L}_{KD}=(1-\alpha)\mathcal{L}_{CE}(y,p^{\mathcal{S}})+\alpha T^{2} \mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{\mathcal{S}}(z,T)), \tag{4}\] where \(\alpha\) is the scale factor to reconcile the weights of two loss functions, and \(T^{2}\) keeps two loss functions at the same level of magnitude. Combining \(\mathcal{L}_{KL}\) and \(\mathcal{L}_{KD}\) helps the student get the guidance of the teacher and the feedback from the ground truth. This ensures that the student model learns effectively and reduces the risk of being misled. Fig. 1: The overall framework for prototyping and deploying BearingPGA-Net. \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline Layer & Kernel size & Channel & Output size & Padding & Stride \\ \hline Conv1d & 64x4 & 4 & 128x4 & 28 & 8 \\ \hline ReLU & - & - & 128x4 & - & - \\ \hline MaxPool & 2×1 & 4 & 64x4 & 0 & 2 \\ \hline Linear & (256,10) & - & 10 & - & - \\ \hline \end{tabular} \end{table} TABLE I: The structure parameters of BearingPGA-Net. However, \(\mathcal{L}_{KL}\) implies that all logits from the teacher are transferred to the student on an equal booting. Intuitively, student models should have the ability to filter the knowledge they receive. In other words, knowledge relevant to the current target category should be reinforced, while knowledge far from the target category should be attenuated. The coupling in classical KD harms the effectiveness and flexibility across various tasks. To address this, decoupled knowledge distillation, which factorizes \(\mathcal{L}_{KL}\) into a weighted sum of two terms (the target class and non-target class), was proposed [31]. Let \(p_{t}\) and \(p_{/t}\) be probabilities of the target and non-target classes, respectively. Then, we have \[p_{t}=\frac{\exp{(z_{t}/T)}}{\sum_{j=1}^{C}\exp{(z_{j}/T)}},\ p_{/t}=\frac{ \sum_{i=1,i\neq t}^{C}\exp{(z_{i}/T)}}{\sum_{j=1}^{C}\exp{(z_{j}/T)}}, \tag{5}\] and \[\hat{p}_{t}=\frac{p_{t}}{p_{/t}}=\frac{\exp{(z_{t}/T)}}{\sum_{i=1,i\neq t}^{C} \exp{(z_{i}/T)}}. 
\tag{6}\] Then, \(\mathcal{L}_{KL}\) can be factorized as \[\begin{split}&\mathcal{L}_{KL}(p^{\mathcal{T}}(z,T),p^{S}(z,T)) \\ =&-p_{t}^{\mathcal{T}}\log\frac{p_{t}^{\mathcal{S}}}{ p_{t}^{\mathcal{T}}}-\sum_{i=1,i\neq t}^{C}p_{i}^{\mathcal{T}}\log\frac{p_{t}^{ \mathcal{S}}}{p_{t}^{\mathcal{T}}}\\ =&\underbrace{-p_{t}^{\mathcal{T}}\log\frac{p_{t}^{ \mathcal{S}}}{p_{t}^{\mathcal{T}}}-p_{/t}^{\mathcal{T}}\log\frac{p_{/t}^{ \mathcal{S}}}{p_{/t}^{\mathcal{T}}}}_{\mathcal{L}_{KL}^{NCKD}}-p_{/t}^{ \mathcal{T}}\underbrace{\sum_{i=1,i\neq t}^{C}\hat{p}_{i}^{\mathcal{T}}\log \frac{\hat{p}_{i}^{\mathcal{S}}}{\hat{p}_{i}^{\mathcal{T}}}}_{\mathcal{L}_{KL }^{NCKD}},\end{split} \tag{7}\] where \(\mathcal{L}_{KL}^{TCKD}\) denotes the KL loss between teacher and student's probabilities of the target class, named target class knowledge distillation (TCKD), while \(\mathcal{L}_{KL}^{NCKD}\) denotes the KL loss between teacher and student's probabilities of the non-target classes, named non-target class knowledge distillation (NCKD). The derivation can also be found in [31]. Thus, the entire DKD loss function is \[\mathcal{L}_{DKD}=(1-\alpha)\mathcal{L}_{CE}(y,p^{\mathcal{S}})+\alpha T^{2}( \beta\mathcal{L}_{KL}^{TCKD}+\gamma\mathcal{L}_{KL}^{NCKD}), \tag{8}\] where \(p_{t}\) and \(p_{/t}\) are merged into two hyperparameters \(\beta\) and \(\gamma\) to coordinate the contributions of two terms. In bearing fault diagnosis, encouraging TCKD while suppressing NCKD (\(\beta>\gamma\)) often leads to performance improvement. By optimizing this decoupled loss, the knowledge acquired by the teacher model is more easily imparted into the student model, thereby improving the student network's performance. ### _Deploying BearingPGA-Net into FPGA_ The PYNQ framework can directly convert the Python code into FPGA bitstream files, but it is specifically designed for the architecture of a System-on-Chip (SoC) FPGA. This type of FPGA integrates an ARM processor and an FPGA chip into a single package. However, the FPGA in such SoC has only 50k look-up table (LUT), while other FPGAs have over 200k LUT. As a result, the computation speed of the SOC FPGA is much slower [22, 23] than other FPGAs. In contrast, without bells and whistles, we design a simple yet pragmatic CNN acceleration scheme that directly translates the network computation modules in logic circuitry, which is directly deploying the module into FPGA chips, without the need for ARM chip translation, thereby fully leveraging the FPGA's high-speed computing capabilities. Specifically, we convert the operations of BearingPGA-Net into Verilog and optimize the place and route (P&R) of the logical connection netlist such as logic gates, RAM, and flip-flops to implement hardware acceleration. The entire process includes register transfer level (RTL) design, simulation and verification, synthesis, implementation (P&R), editing timing constraints, generation of bitstream files, and the FPGA configuration. Notably, our scheme is also scalable to other FPGA chips such as Virtex-7series. **1) Fixed-point quantization.** The original network in Python employs the 32-bit floating-point numbers for high-accuracy calculations. However, these 32-bit numbers consume a significant amount of memory, which cannot allow the deployment of single-layer networks. To address this, we employ fixed-point quantization, which converts the numbers to 16-bit fixed-point format. In this format, the bit-width keeps intact for integers, decimals, and symbols in a floating-point number. 
The reduction in precision is up to a half of the original, despite a minor decrease in performance. The fixed-point number is expressed as \[Q_{X,Y}=\pm\sum_{i=0}^{X-1}b_{i}\cdot 2^{i}+\sum_{j=1}^{Y}b_{X+j}\cdot 2^{-j}. \tag{9}\] where \(Q_{X,Y}\) is a fixed-point number that contains \(X\)-bit integers and \(Y\)-bit decimals, \(b_{i}\) and \(b_{X+j}\) are the value on the binary bit. The storage format of fixed-point numbers in an FPGA is \((\pm,b_{i},\cdot,b_{X+j})\). Furthermore, when considering a fixed bit-width, it is crucial to determine the trade-off between the bit-widths of Fig. 2: An example of bit-width of integers and decimals in each layer of BearingPGA-Net on FPGA computation, where \((S,X,Y)\) denotes the number of symbol bit, integer bits, and decimal bits, respectively. integers and decimals. It is important to have a sufficiently large range to minimize the probability of overflow while ensuring a small enough resolution to reduce quantization error [34]. In this study, a dynamic fixed-point quantization is assigned to each layer of the BearingPGA-Net to preserve the overall accuracy [35, 36]. The specific bit-width for integers and decimals is determined based on the maximum and minimum values of parameters in each layer, so different models may have various bit-width. The allocation of bit-width of integers and decimals in each layer is illustrated in Fig. 2. **2) The overall workflow.** As shown in Fig. 3. Parameters are stored in the ROM after quantization, and we implement the BearingPGA-Net accelerator (FFT, convolutional layer, max-pooling conjugated with ReLU, fully-connected layer) using Verilog. The FFT operation is performed by the IP core, and we carefully design logic circuit modules for all operations of the BearingPGA-Net to carry out. Moreover, some specific modules used in FPGAs are described below: Receiving filter (RF) selector: It splits the input signal into 128 segments of 64 points with a stride of 8, which is equivalent to sliding a convolutional kernel of \(1\times 64\) with a stride of 8 over the input signal. ROM control (Ctrl): It retrieves weight parameters in ROM for the convolutional and fully-connected layers. For the convolutional layer, it reads 256 convolutional weight parameters (4 \(\times\) 64 kernels) once a time and 10 weight parameters per cycle to update the input of the multiplication-accumulation module for the fully-connected layer. In addition, the FPGA directly stores bias parameters (4 for the convolutional layer and 10 for the fully-connected layer) in registers. Shift module: It shifts the position of the decimal point, which converts the bit-width of integers and decimals from (2,13) to (7,8) to adjust the fixed-point numbers in the down-stream fully-connected layer. Classification module: It selects the maximum value among 10 outputs of the fully-connected layer as the failure category of the bearing. Softmax is waived in FPGA deployment. **3) BearingPGA-Net accelerator.** CNN accelerator designs can be generally classified into two groups: computing acceleration and memory system optimization [24]. Since our BearingPGA-Net consists of only one convolutional layer and one fully-connected layer, memory management does not require additional optimization. We focus on optimizing the FPGA acceleration of each layer in BearingPGA-Net. Multiplication-accumulation unit. As FPGA registers cannot directly perform matrix calculations, we design the multiplication-accumulation unit beforehand. 
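A behavioural software model of this unit, combining the \(Q_{X,Y}\) fixed-point representation of Eq. (9) with the multiply-accumulate operation, is sketched below in Python. The 13 fractional bits follow the (2,13) format mentioned for the convolution stage; saturation on quantization and the width of the intermediate accumulator are assumptions of the sketch, not a description of the Verilog implementation detailed next.

```python
def to_fixed(x, frac_bits, total_bits=16):
    """Quantize a float to a signed 16-bit Q(X,Y) fixed-point integer (Eq. 9)."""
    scale = 1 << frac_bits
    q = int(round(x * scale))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))            # saturate instead of overflowing

def to_float(q, frac_bits):
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

def mac(p, q, frac_bits=13):
    """Behavioural model of the multiplication-accumulation unit: y = sum(p_i * q_i)."""
    acc = 0                               # hardware keeps this in a result register
    for pi, qi in zip(p, q):
        prod = to_fixed(pi, frac_bits) * to_fixed(qi, frac_bits)
        acc += prod >> frac_bits          # rescale the product back to Q(X,Y)
    return to_float(acc, frac_bits)

# e.g. one output sample of the convolution: a 64-tap dot product plus bias
signal  = [0.01 * i for i in range(64)]
weights = [0.5] * 64
print(mac(signal, weights) + 0.1)
```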
The logic circuit diagram of the multiplication-accumulation unit is shown in Fig. 4, where the accumulation operation is executed in \(N\) cycles, and the result register is used to store the temporary results in each cycle. The expression of the multiplication-accumulation unit is \[y=\sum_{i}^{N}(p_{i}\times q_{i}), \tag{10}\] where \(p_{i}\) and \(q_{i}\) are 16-bit fixed-point numbers. Convolutional layer. Fig. 5 illustrates the implementation of the convolution layer. The signal after passing through the RF selector is divided into segments of \(128\times 64\). Subsequently, the multiplication-accumulation unit uses weights \(w\in\mathbb{R}^{1\times 64}\) stored in the ROM to multiply the signal \(s_{i}\in\mathbb{R}^{1\times 64}\). Each multiplication-accumulation unit runs for 64 cycles and adds the bias \(b\). The equation for this operation is \(z_{i}=\sum_{j=1}^{64}(s_{i,j}\times w_{j})+b\). Moreover, hardware acceleration is achieved using 128 parallel multiplication-accumulation units. This enables each convolutional kernel to complete the full convolution in just 64 cycles. The parallel output is merged into \(\mathbf{z}=\{z_{1},z_{2},\cdots,z_{128}\}\). With 4 convolutional kernels, the layer requires 4 external cycles, totaling 256 cycles. The speedup is 128-fold relative to conventional implementations. ReLU and max-pooling layer. ReLU activation and max-pooling are back-to-back in the BearingPGA-Net. Also, both ReLU and max-pooling functions attempt to find the maximum value between two numbers. Therefore, we fuse them into a single module to save the inference time. Suppose that the input signal is \(\mathbf{x}\in\mathbb{R}^{1\times n}\), the ReLU activation function is \[p_{i}=\max(0,x_{i}),\ i=1,2,\cdots,n, \tag{11}\] followed by the max-pooling layer, where the window size is \(k\) and the stride is \(s\). The output of this layer is \[q_{i}=\max(\{p_{j}\}_{j=i\times s}^{i\times s+k-1}),\ i=1,2,\cdots,\lfloor \frac{n-k}{s}\rfloor+1, \tag{12}\] Fig. 4: The circuit diagram of a multiplication-accumulation unit, which includes a fixed-point multiplier, a fixed-point adder, a reset register, and a result register. Fig. 3: The overall diagram for deploying the BearingPGA-Net into FPGA. where \(\lfloor.\rfloor\) is the floor function to ensure the last pooling window is fully contained within the signal length. We devise a strategy to reduce the amount of computation as Algorithm 1 shows. Two 16-bit numbers, denoted as \(x_{1}\) and \(x_{2}\), are firstly compared in terms of their symbol bit. If both numbers are smaller than 0, the output is set to 0, thus executing the ReLU function directly. On the other hand, if the numbers have opposite signs, the negative number is killed. These operations can save one maximum comparison. Numerical comparisons are only performed when both numbers are positive, in which case the complete ReLU and max-pooling operations are executed. By adopting our method, we are able to save approximately \(2/3\) of LUT resources. ``` 0:\(x_{1},x_{2}\) (\(x[15]\) symbol bit, \(x[14:0]\) number bits) 1:if\(x_{1}[15]>0\) AND \(x_{2}[15]<0\)then 2:return\(x_{1}\) 3:else if\(x_{1}[15]<0\) AND \(x_{2}[15]>0\)then 4:return\(x_{2}\) 5:else if\(x_{1}[15]<0\) AND \(x_{2}[15]<0\)then 6:return\(0\) 7:else if\(x_{1}[15]>0\) AND \(x_{2}[15]>0\)then 8:if\(x_{1}[14:0]>=x_{2}[14:0]\)then 9:return\(x_{1}\) 10:else if\(x_{1}[14:0]<x_{2}[14:0]\)then 11:return\(x_{2}\) 12:endif 13:endif ``` **Algorithm 1** ReLU and Max-pooling Fully-connected layer. 
The fully-connected layer establishes connections between the 256 outputs of the ReLU and max-pooling fusion layers and the final 10 outputs. Unlike on computers, the fully-connected layer in FPGAs consumes fewer resources, as there is no need to slide windows. For the fully-connected layer, we reuse multiplication-accumulation units. Initially, the size of weights in the fully-connected layer is \(256\times 10\). To simplify the computations, we divide this matrix into 256 segments, each consisting of 10 weights. Consequently, 10 multiplication-accumulation units perform parallel computation over 256 cycles to generate the output. The implementation for this design is illustrated in Fig. 6. In summary, our FPGA accelerator offers two main advantages: i) Parallelism: multiplication-accumulation units are simultaneously calculated, which greatly boosts computational efficiency. ii) Module reuse: due to the constraint of limited FPGA computing resources that cannot parallelize 1024 units concurrently, multiplication-accumulation units are reused to strike a balance between resources and speed. ## IV Experiments ### _Datasets Descriptions_ **1) CWRU dataset.** This widely-used dataset is curated by Case Western Reserve University Bearing Data Center, which consists of two deep groove ball bearings mounted on the fan-end (FE) and drive-end (DE) of the electric motor. Electros discharge machining is used to inject single-point defects with diameters of 7, 14, and 21 mils into the outer race, inner race, and ball of both bearings, respectively. As a result, there are ten categories in this dataset, including nine types of faulty bearings and one healthy bearing. Specifically, the motor shaft is subjected to four levels of load (0HP, 1HP, 2HP, 3HP, where HP denotes horsepower), which slightly affects the motor speed (1797 r/min, 1772 r/min, 1750 r/min, 1730 r/min). Vibration data are collected at 12 kHz and 48 kHz, respectively. In this study, we analyze the vibration signals collected at 12 kHz on the DE side of the motor. **2) HIT dataset.** The bearing fault test is conducted in the MIIT Key Laboratory of Aerospace Bearing Technology and \begin{table} \begin{tabular}{|l|l|l|l|} \hline Label & Faulty Mode & Label & Faulty Mode \\ \hline 1 & Health & 6 & OR (Moderate) \\ \hline 2 & Ball cracking (Minor) & 7 & OR (Severe) \\ \hline 3 & Ball cracking (Moderate) & 8 & IR (Minor) \\ \hline 4 & Ball cracking (Severe) & 9 & IR (Moderate) \\ \hline 5 & OR cracking (Minor) & 10 & IR (Severe) \\ \hline \end{tabular} \end{table} TABLE II: Ten healthy statuses in our HIT dataset. OR and IR denote that the faults appear in the outer race and inner race, respectively. Fig. 5: The implementation diagram of the convolutional layer. Fig. 6: The implementation diagram of the fully-connected layer. Equipment at Harbin Institute of Technology (HIT). Fig. 7 shows the bearing test rig and the faulty bearings used in the experiment. We utilize HC7003 angular contact ball bearings, which are designed for high-speed rotating machines. The accelerometer is directly attached to each bearing to collect vibration signals of the bearings. Similar to the CWRU dataset, we injected defects at the outer race (OR), inner race (IR), and ball at three severity levels (minor, moderate, severe). Tab. II presents ten healthy statuses of bearings in our dataset. For the test, a constant motor speed of 1800 r/min is set, and vibration signals are acquired using the NI USB-6002 device at a sampling rate of 12 kHz. 
Each bearing vibration is recorded for 47 seconds, resulting in 561,152 data points per category. Our dataset is more challenging than the CWRU dataset because the bearing faults in our dataset are cracks of the same size but varying depths whose vibration signals between different faults exhibit more similarity. ### _Experimental Configurations_ **1) Data preprocessing.** First, the raw long signal is divided into segments of 2,048 points and standardized. Next, the Fast Fourier Transform (FFT) is applied to convert the signal from the time domain to the frequency domain. Previous research has experimentally demonstrated that employing signal processing techniques such as FFT and wavelet transform, can enhance the performance of shallow neural networks [37]. Because the characteristics of non-stationary vibration signals are more pronounced in the frequency or time-frequency domain than the time domain, this compensates for the constrained feature extraction capability of shallow networks. Specifically, the starting point of the segment of 2,048 points is randomly chosen, and the stride is set to 28 to resample the raw signal. All classes of signals are sampled 1,000 times, resulting in a total of 10,000 samples. Then, the dataset is randomly divided into training, validation, and testing sets with a ratio of 2:1:1. Next, the Gaussian noise is added to the signals to simulate noise in the industrial fields and also evaluate different models' performance in noisy experiments. The signal-to-noise ratio (SNR) is calculated as \(\text{SNR}=10\log 10(P_{s}/P_{n})\), where \(P_{s}\) and \(P_{n}\) are the average power of the signal and noise, respectively. Lastly, the signals are standardized using z-score standardization. Notably, each signal \(\mathbf{x}\) is transformed as \(\mathbf{x}^{\prime}=(\mathbf{x}-\mu)/\sigma\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the signal \(\mathbf{x}\), respectively. For the CWRU dataset, the SNR ranges from -6dB to 2dB in 2dB intervals. For our dataset, the SNR ranges from 0dB to 8dB in 2dB intervals. As shown in Fig. 8, the signal amplitude in our dataset is lower than in the CWRU dataset. Therefore, even relatively low-intensity noise (0dB SNR) can overshadow the signal. **2) Baselines.** We compare our method against four state-of-the-art lightweight baselines: wide deep convolutional neural networks (WDCNN) [33], lightweight efficient feature extraction networks (LEFE-Net) [38], lightweight transformers with convolutional embeddings and linear self-attention (CLFormer) [19], and knowledge distillation-based student convolutional neural networks (KDSCNN) [23]. All these models are published in flagship journals of this field in recent years. As summarized in Tab. III, our model enjoys the smallest model size, relatively low computational complexity, and the second shortest inference time compared to its competitors. Although our model has slightly higher computational complexity and inference time than KDSCNN due to the introduced FFT, it has a substantially smaller number of parameters (2.83K vs 5.89K), while the model size is the first priority for deployment on FPGAs. **3) Implementation settings.** All experiments are conducted in Windows 10 with Intel(R) Core(TM) 11th Gen i5-1135G7 at 2.4GHz CPU and one NVIDIA RTX 3080 8GB GPU. Our code is written in Python 3.10 with PyTorch 2.1.0. 
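A minimal NumPy sketch of the preprocessing in 1) is given below. The use of the magnitude spectrum and the exact ordering of noise injection, standardization, and FFT are illustrative assumptions; the segment length (2,048 points) and the SNR definition follow the description above.

```python
import numpy as np

def add_noise(x, snr_db):
    """Additive Gaussian noise at a target SNR (dB), SNR = 10*log10(Ps/Pn)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + np.random.randn(len(x)) * np.sqrt(p_noise)

def preprocess_segment(raw, start, length=2048, snr_db=None):
    """Cut one segment, optionally add noise, z-score it, and take the FFT magnitude."""
    x = raw[start:start + length].astype(float)
    if snr_db is not None:
        x = add_noise(x, snr_db)
    x = (x - x.mean()) / x.std()                      # z-score standardization
    return np.abs(np.fft.rfft(x))[:length // 2]       # frequency-domain input

# e.g. one noisy sample drawn from a long vibration recording
raw = np.random.randn(561_152)                        # placeholder recording
start = np.random.randint(0, len(raw) - 2048)
sample = preprocess_segment(raw, start, snr_db=-6)
print(sample.shape)                                   # (1024,)
```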
For all compared models, we use stochastic gradient descent (SGD) [39] as the optimizer with momentum=0.9 and Cosine Annealing LR [40] as the learning rate scheduler. The training epochs are set to 75, and we use grid search to find the best hyperparameters. The grid is \([32,64,128,256]\) for the batch size and \([10^{-4},3\times 10^{-4},10^{-3},3\times 10^{-3},0.01,0.03,0.1,0.3]\) for the learning rate. In KD-based methods, we search \(T\) from \([1,2,3,4,5,6,7,8,9]\) and \(\alpha\) from \([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]\), while in our model we additionally search \(\beta\) and \(\gamma\). We configure \([0,0.2,0.5,1,2,4]\) for \(\beta\) and \([1,2,4,8,10]\) for \(\gamma\), respectively.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline Method & \#Parameters & \#FLOPs & Inference time \\ \hline LEFE-Net & 73.95K & 14.7M & 22.4540s \\ \hline WDCNN & 66.79K & 1.61M & 1.6327s \\ \hline CLFormer & 4.95K & 0.63M & 0.7697s \\ \hline KDSCNN & 5.89K & 70.66K & 0.2219s \\ \hline BearingPGA-Net & 2.83K & 78.34K & 0.6431s \\ \hline \end{tabular} \end{table} TABLE III: The properties of compared models. #FLOPs denotes floating point operations. Time is the elapsed time to infer 1000 samples on an NVIDIA GeForce RTX 3080 GPU.

Fig. 7: The test rig of faulty bearings to collect data. Fig. 8: Comparison of two datasets under noise.

**4) Evaluation metrics.** We use the F1 score to validate the performance of the proposed method. The metric is defined as \[\mathrm{F1\;score}=\frac{1}{C}\sum_{i=1}^{C}\frac{2\,\mathrm{TP}_{i}}{2\,\mathrm{TP}_{i}+\mathrm{FP}_{i}+\mathrm{FN}_{i}},\] where \(\mathrm{TP}_{i}\), \(\mathrm{FP}_{i}\), and \(\mathrm{FN}_{i}\) are the numbers of true positives, false positives, and false negatives for the \(i\)-th class, and \(C\) denotes the number of classes. All results are the average of ten runs.

**5) FPGA description.** We utilize the ALINX 7325B FPGA as the hardware deployment board, as shown in Fig. 9. This board incorporates the Kintex-7 XC7K325T FPGA chip, which is widely used in the industrial field due to its optimal balance between price and performance-per-watt. The clock frequency is set to 100 MHz, and we utilize four LEDs as indicators of the fault diagnosis result, where the ten categories are encoded as 4-bit binary numbers. The specifications of the FPGA chip are summarized in Tab. IV.

### _Classification Results_

Here, we compare the performance of the student model (BearingPGA-Net) with its competitors. **1) Results on the CWRU and HIT datasets.** On the CWRU dataset, as shown in Tab. V, BearingPGA-Net achieves the top F1 scores in the 0HP and 1HP conditions and the second-best scores in the 2HP and 3HP conditions. Moreover, our method maintains over a 95% F1 score at the -6dB noise level, whereas KDSCNN performs the worst on the CWRU-0HP dataset at -6dB noise (79.95%). Although WDCNN performs comparably, its number of parameters is approximately 22 times larger than ours. These results demonstrate BearingPGA-Net's strong diagnosis performance despite its small parameter size. On the HIT dataset, as Tab. VI shows, our method achieves the highest average F1 score in noisy scenarios. Although LEFE-Net slightly outperforms ours on clean signals, the difference is only 1%, and LEFE-Net has a substantially larger model size. Furthermore, the performance of the other models (CLFormer: 76.37%, KDSCNN: 74.98%) falls behind ours (83.30%) by a large margin. Overall, our model is the best performer on this dataset.
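As a concrete reading of the macro-averaged F1 score defined in 4) above, here is a small self-contained sketch; the function name and the toy labels are illustrative and are not part of the paper's code.

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    """Macro-averaged F1: mean over classes of 2*TP / (2*TP + FP + FN)."""
    scores = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom > 0 else 0.0)
    return float(np.mean(scores))

# Toy check with 10 classes, as in the CWRU/HIT setups.
y_true = np.random.randint(0, 10, size=2500)
y_pred = y_true.copy()
y_pred[:100] = (y_pred[:100] + 1) % 10   # corrupt a few predictions
print(round(macro_f1(y_true, y_pred, 10), 4))
```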
**2) Model compression results.** We compare the compression performance of DKD. Firstly, Tab. VII shows the properties of the teacher and student models. DKD effectively compresses the teacher model, reducing the model parameters by 17 times, the FLOPs by 10.6 times, and the inference time by 2.45 times. Secondly, Tab. VIII compares the performance of the models. Clearly, DKD enhances the performance of the student model, though it remains slightly below the teacher (with an average F1 score approximately 1% lower). Comparing the student model with and without DKD, distillation provides considerable improvements, especially in noisy environments. On CWRU-3HP, DKD improves the average F1 score by 3.7%. Notably, DKD yields 8.23% and 8.46% gains at -6dB on CWRU-2HP and at 0dB on the HIT dataset, respectively. These results demonstrate that DKD can successfully transfer knowledge to shallow models despite substantial compression.

We further examine how sensitive DKD is to its hyperparameters \(T\), \(\alpha\), \(\beta\), and \(\gamma\). The results are presented in Tab. X. Firstly, the temperature parameter (\(T\)) does not appear to be highly influential. While a larger value of \(T\) leads to slightly better performance, the improvement is only about 1%. Secondly, the analysis shows that all three scale parameters have a significant impact. In particular, setting \(\alpha\) to 0.2 yields superior results. Once \(\alpha\) surpasses 0.3, there is a noticeable decline in performance, with a drop of approximately 64% in the F1 score. Unlike \(\alpha\), \(\beta\) exhibits less sensitivity, but increasing its value still leads to improved results. Conversely, a lower value (below 2) is recommended for the parameter \(\gamma\). The aforementioned phenomenon exemplifies the characteristics of DKD. The parameter \(\alpha\) regulates both the \(\mathcal{L}_{CE}\) and \(\mathcal{L}_{KL}\) losses: a lower value of \(\alpha\) implies that the student network predominantly learns from the ground truth, while any knowledge transmitted by the teacher model serves merely as supplementary information. Next, \(\beta\) and \(\gamma\) are employed to control the TCKD and NCKD losses, respectively. Consistent with the findings of the ablation experiments, a larger value for \(\beta\) and a smaller value for \(\gamma\) yield optimal results, as they compel the model to effectively harness the knowledge pertaining to the target classes.

**3) Quantization results.** We next compare the 32-bit PyTorch model with its 16-bit quantized counterpart deployed on the FPGA. Tab. XI reveals that the quantized model exhibits a performance drop of less than 0.4% in terms of the F1, Recall, and Precision scores relative to the original model. Additionally, the parameters of the quantized model are reduced by more than half. These findings highlight the effectiveness of the designed quantization method and convolution architecture for FPGA deployment. Furthermore, the confusion matrices of the FPGA and PyTorch models are compared. As illustrated in Fig. 10, the FPGA model exhibits \(>18\) errors in classifying ball faults (B0-B2), \(<4\) errors for inner race faults (IR0-IR2), and 1 additional error on healthy bearings compared with the PyTorch results. When deployed on the FPGA, BearingPGA-Net generates only 24 extra errors in total compared with the original version across 2,500 samples.

**4) Resource analysis.** The resource usage of the entire network is summarized in Tab. XII. The LUT and block RAM (BRAM) resources exhibit utilization rates of 74.40% and 58.20%, respectively, indicating that BearingPGA-Net makes good use of the FPGA's computational and memory resources.
Next, we analyze the resource consumption specifically within the LUT, which is the key computational resource in an FPGA. As illustrated in Fig. 11, the convolution and FFT modules account for the majority of the LUT resources, constituting approximately 50% and 23%, respectively. This observation underscores the limited computational resources available on a Kintex-7 FPGA, _i.e._, a single convolutional layer alone consumes about half of the LUT resources. Moreover, we record the model inference time and power consumption on both the CPU and the FPGA, where the Python project is deployed on a portable Intel NUC Mini PC. As presented in Tab. XIII, the FPGA demonstrates an inference time that is approximately \(1/201\) of the CPU's and a power consumption that is nearly \(1/42\) of the CPU's. This significant reduction is attributed to the flexibility of FPGAs in processing CNN operations to enhance efficiency. Additionally, the FPGA employs parallel computing techniques, with the advantages of low power consumption and accelerated computing rates. Such low power consumption also makes online bearing fault monitoring and diagnosis possible.

\begin{table} \begin{tabular}{|l|c c c c c|} \hline \(T\) & 1.5 & 2 & 2.5 & 3 & 3.5 \\ \hline F1(\%) & 95.28 & 94.85 & 95.20 & **96.19** & 96.18 \\ \hline \(\alpha\) & 0.1 & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline F1(\%) & 92.49 & **96.19** & 32.13 & 32.25 & 32.19 \\ \hline \(\beta\) & 0.2 & 0.5 & 1 & 2 & 4 \\ \hline F1(\%) & 89.49 & 91.08 & 92.84 & 95.07 & **96.20** \\ \hline \(\gamma\) & 0.2 & 0.5 & 1 & 2 & 4 \\ \hline F1(\%) & **96.41** & 95.58 & 96.19 & 96.30 & 32.09 \\ \hline \end{tabular} \end{table} TABLE X: The hyperparameter sensitivity in DKD on the CWRU-0HP dataset with -6dB noise.

\begin{table} \begin{tabular}{|l|c c|} \hline Hardware & Inference time & Power Consumption \\ \hline Intel i5-1135G7@2.40GHz & 1160us & 28W \\ \hline Kintex-7 XC7K325T@100MHz & 5.77us & 0.67W \\ \hline \end{tabular} \end{table} TABLE XIII: The inference time and power consumption of BearingPGA-Net on CPU and FPGA for 1000 samples.

\begin{table} \begin{tabular}{|l|c c c|} \hline Resources & Used resources & Available resources & Resource occupancy \\ \hline LUT & 151637 & 203800 & 74.40\% \\ LUTRAM & 936 & 64000 & 1.46\% \\ FF & 180099 & 407600 & 44.19\% \\ BRAM & 259 & 445 & 58.20\% \\ DSP & 185 & 840 & 22.02\% \\ IO & 6 & 500 & 1.20\% \\ BUFG & 3 & 32 & 9.38\% \\ PLL & 1 & 10 & 10.00\% \\ \hline \end{tabular} \end{table} TABLE XII: The resource report of the Kintex-7 XC7K325T FPGA.

Fig. 11: The LUT consumption with respect to each module.

\begin{table} \begin{tabular}{|l|c c c c|} \hline & \#Parameters & F1 (\%) & Recall (\%) & Precision (\%) \\ \hline PyTorch & 2.83K & 97.39 & 97.40 & 97.57 \\ \hline FPGA & 1.42K & 97.12 & 97.34 & 97.12 \\ \hline \end{tabular} \end{table} TABLE XI: Comparison of the 32-bit PyTorch and 16-bit FPGA models on the HIT dataset.

Fig. 10: The confusion matrices of results produced by models before and after quantization on the HIT dataset, where B, IR, and OR denote faults at the ball, the inner race, and the outer race, respectively, and H denotes the healthy bearing.

## V Conclusions

In this paper, we have proposed a lightweight and deployable CNN, referred to as BearingPGA-Net. By integrating FFT and DKD, BearingPGA-Net outperforms other state-of-the-art lightweight models when signals are noisy. Additionally, we have fully utilized the computational resources of the Kintex-7 FPGA and designed a CNN accelerating scheme. The parallelism and module reuse significantly improve the running speed of our method, while the customized parameter quantization greatly alleviates the performance drop. Our deployment scheme has achieved over 200 times faster diagnosis speed compared to the CPU and maintained an over 97% classification F1 score on the HIT bearing dataset. Finally, our model introduces a new approach to realizing online bearing fault diagnosis.
Deep learning has achieved remarkable success in the field of bearing fault diagnosis, but this success has been driven by ever larger models and more complex computations, which makes it hard to apply in industrial settings that demand high speed, strong portability, and low power consumption. This paper proposes a lightweight and deployable model, BearingPGA-Net, to address these challenges. First, BearingPGA-Net is trained via knowledge distillation with the aid of a well-trained large model; despite its small size, it shows superior bearing fault diagnosis performance compared with other lightweight state-of-the-art methods. Furthermore, we propose a scheme for accelerating BearingPGA-Net on an FPGA, which includes quantization and the design of programmable logic for each layer of BearingPGA-Net using Verilog. In particular, we emphasize parallel computation and module reuse to improve the computation speed.
2305.07417
Recent results from DANSS
DANSS is a solid state scintillator neutrino spectrometer placed at a small distance from the commercial nuclear reactor of Kalininskaya NPP. The distance from the detector to the center of the reactor core can be changed online in the range 10.9-12.9 m. This fact together with a very high neutrino counting rate (more than 5000 events per day) and low background makes DANSS an ideal detector to search for neutrino oscillations in $1~eV^2 \Delta m^2$ range. We report the results based on the statistics of 6 million events, obtained between April 2016 and March 2022. The results include limits in the short range oscillation parameter space, fuel evolution studies and the bump in the neutrino spectrum. The talk will also cover our plans of the detector upgrade.
Igor Alekseev
2022-12-21T11:30:32
http://arxiv.org/abs/2305.07417v1
# Recent results from DANSS ###### Abstract: DANSS is a solid state scintillator neutrino spectrometer placed at a small distance from the commercial nuclear reactor of Kalininskaya NPP. The distance from the detector to the center of the reactor core can be changed online in the range 10.9-12.9 m. This fact together with a very high neutrino counting rate (more than 5000 events per day) and low background makes DANSS an ideal detector to search for neutrino oscillations in the 1 eV\({}^{2}\) \(\Delta m^{2}\) range. We report the results based on the statistics of 6 million events, obtained between April 2016 and March 2022. The results include limits in the short range oscillation parameter space, fuel evolution studies and the bump in the neutrino spectrum. The talk will also cover our plans for the detector upgrade.

Details of the DANSS detector and the physics results of the first year of its operation can be found elsewhere [1, 2], as well as the results obtained by the previous year [3, 4]. This short contribution concentrates on our progress during the last year. Our progress in the accumulation of inverse beta-decay (IBD) statistics is illustrated in fig. 1. We now have data for 3 full fuel campaigns and 4 reactor-off periods.

Important progress has been made in the understanding of our calibration. In addition to the reactions already used for calibration purposes, the delayed-event spectrum is also analyzed as a calibration source (fig. 2, left). The calibration set now includes \({}^{12}\)B decays from two reactions induced by atmospheric muons, \(n+^{12}\)C\(\rightarrow^{12}\)B\(+p\) and \(\mu^{-}\) capture by \({}^{12}\)C; \({}^{22}\)Na and \({}^{60}\)Co radioactive sources; neutrons from \({}^{248}\)Cm fission and IBD; and decays of stopped muons. The \({}^{12}\)B decay data are used to set the scale, because the behavior of the produced electron is the most similar to that of the IBD positron we need to measure, and these data are accumulated uniformly during the run. No additional smearing is added in the Monte-Carlo simulations any more for a good reproduction of the experimental data. The scales of all the sources in the calibration set agree within \(\pm 0.2\%\), with the exception of \({}^{22}\)Na, which has an offset of 1.8%. The problem could be contamination of the sample with \({}^{26}\)Al, which has a slightly different decay energy. We keep a 2% systematic error on the energy scale until we find a solution to this problem.

Continuous reactor monitoring during 3 full fuel campaigns allows us to study the evolution of the counting rate and of the neutrino spectrum with the change in the fuel composition. The dependence of the rate on the \({}^{239}\)Pu fission fraction is shown in fig. 2 (middle). Our data demonstrate a slope slightly steeper than the one coming from MC simulations with the HM model [5, 6], while the Daya Bay data show a less steep slope [7].

We also have new results in the light sterile neutrino search. After the new data were included in the analysis, the 90% confidence level limit on the sterile neutrino parameters in the region of \(\Delta m^{2}\sim 0.9\) eV\({}^{2}\) became as stringent as \(\sin^{2}2\theta<0.004\) (Gaussian CLs method), but two points with close values of \(\Delta\chi^{2}\sim-10\) manifested themselves. A Feldman-Cousins approach was used to obtain their significance. The result is shown in fig. 2 (right). The dark blue area corresponds to the \(3\sigma\) limit. A much more conservative \(3\sigma\) limit from the Gaussian CLs method is shown by the red line for comparison.

Figure 1: IBD statistics accumulation during 6 years of DANSS operation.
The best-fit point has a significance of \(2.35\sigma\), which is much less than what is needed to claim an indication of the existence of a 4th neutrino. The collaboration appreciates the permanent assistance of the KNPP administration and the Radiation Safety Department staff. This work is supported by the Ministry of Science and Higher Education of the Russian Federation under Contract No. 075-15-2020-778.
DANSS is a solid-state scintillator neutrino spectrometer located a short distance from a commercial nuclear reactor at Kalininskaya NPP. The distance from the detector to the center of the reactor core can be changed online in the range 10.9-12.9 m. This fact, together with a very high neutrino counting rate (more than 5000 events per day) and a low background, makes DANSS an ideal detector for searching for neutrino oscillations in the 1 eV² Δm² range. We report results based on the statistics of 6 million events obtained between April 2016 and March 2022. The results include limits in the short-range oscillation parameter space, fuel evolution studies, and the bump in the neutrino spectrum. The talk also covers our plans for the detector upgrade.
2309.09089
On the geometry and dynamical formulation of the Sinkhorn algorithm for optimal transport
The Sinkhorn algorithm is a numerical method for the solution of optimal transport problems. Here, I give a brief survey of this algorithm, with a strong emphasis on its geometric origin: it is natural to view it as a discretization, by standard methods, of a non-linear integral equation. In the appendix, I also provide a short summary of an early result of Beurling on product measures, directly related to the Sinkhorn algorithm.
Klas Modin
2023-09-16T20:12:35
http://arxiv.org/abs/2309.09089v2
# On the geometry and dynamical formulation of the sinkhorn algorithm for optimal transport ###### Abstract. The Sinkhorn algorithm is a numerical method for the solution of optimal transport problems. Here, I give a brief survey of this algorithm, with a strong emphasis on its geometric origin: it is natural to view it as a discretization, by standard methods, of a non-linear integral equation. In the appendix, I also provide a short summary of an early result of Beurling on product measures, directly related to the Sinkhorn algorithm. **Keywords:** optimal transport, Sinkhorn algorithm, Schrodinger bridge, product measures, Beurling, Madelung transform, groups of diffeomorphisms, geometric hydrodynamics **MSC 2020:** 49Q22, 35Q49, 37K65, 65R20 0 Footnote 0: On the occasion of the 60th birthdays of Hans Munthe–Kaas and Brynjulf Owren. ###### Contents * 1 Introduction * 2 Dynamical formulation of optimal transport * 3 Entropic regularization and the Madelung transform * A Beurling's "forgotten" result ## 1. Introduction The Sinkhorn algorithm has quickly sailed up as a popular numerical method for optimal transport (OT) problems (see the book by Peyre and Cuturi [18] and references therein). Its standard derivation starts from the Kantorovich formulation of the discrete OT problem and then adds entropic regularization which enables Sinkhorn's theorem [21] for the optimal coupling matrix (see the paper by Cuturi [5] for basic notions, and the work of Schmitzer [19] and the PhD thesis of Feydy [6] for more details, including optimized and stable implementation of the full algorithm). In this paper I wish to give a brief survey of the Sinkhorn algorithm from a geometric viewpoint, making explicit connections to well-known techniques in geometric hydrodynamics, such as the Madelung transform, and the dynamical formulation of OT as given by Benamou and Brenier [2]. This viewpoint on entropic regularization is known to experts since long before the Sinkhorn algorithm became popular for OT: it was first formulated by Schrodinger [20], then re-established by Zambrini [24], and is today well-known as the law governing the _Schrodinger bridge problem_ in probability theory (cf. Leonard [12]). Here, I wish to complement this presentation by entirely avoiding probability theory and instead use the language of geometric analysis, particularly geometric hydrodynamics (_cf._ Arnold and Khesin [1]). The presentation of Leger [10], based on Wasserstein geometry, is close to mine and contains more details. A benefit of the hydrodynamic formulation is that it readily enables standard numerical tools for space and time discretization (and the accompanying numerical analysis). The principal theme of my presentation is that the Sinkhorn algorithm on a smooth, connected, orientable Riemannian manifold \(M\) has a natural interpretation as a space and time discretization of the following non-linear evolution integral equation: \[\begin{cases}\partial_{s}f=-f-\log\left(\int_{M}K_{\varepsilon}(\cdot,y)\, \mathrm{e}^{g(y)}\,\mathrm{d}y\right)+\log\rho_{0},&f\colon M\times[0,\infty) \to\mathbb{R}\\ \partial_{s}g=-g-\log\left(\int_{M}K_{\varepsilon}(\cdot,y)\,\mathrm{e}^{f(y) }\,\mathrm{d}y\right)+\log\rho_{1},&g\colon M\times[0,\infty)\to\mathbb{R}. \end{cases} \tag{1}\] Here, \(\rho_{0}\) and \(\rho_{1}\) are two strictly positive and smooth densities with the same total mass relative to the Riemannian volume form, and \(K_{\varepsilon}(x,y)\) is the heat kernel on \(M\). 
In the special case \(M=\mathbb{R}^{n}\) then \[K_{\varepsilon}(x,y)=\frac{1}{(4\pi\varepsilon)^{n/2}}\exp\Big{(}-\frac{\|x- y\|^{2}}{4\varepsilon}\Big{)}. \tag{2}\] It is sometimes convenient to use the heat flow semi-group notation \[\mathrm{e}^{\varepsilon\Delta}f\coloneqq\int_{M}K_{\varepsilon}(\cdot,y)f(y )\mathrm{d}y.\] **Remark 1**.: The equations (1) are almost linear for small \(\varepsilon\). Indeed, they can be written \[\begin{cases}\partial_{s}f=-f-g+\log\rho_{0}-T_{\varepsilon}(g),\\ \partial_{s}g=-f-g+\log\rho_{1}-T_{\varepsilon}(f),\end{cases}\qquad T_{ \varepsilon}(f)\coloneqq\log\big{(}\mathrm{e}^{-f}\mathrm{e}^{\varepsilon \Delta}\mathrm{e}^{f}\big{)},\] where \((f,\varepsilon)\mapsto T_{\varepsilon}(f)\) is a \(C^{\infty}\) function, for example in Sobolev topologies \(H^{s}\). Local existence and uniqueness of solutions to the integral equations (1) are obtained by standard Picard iterations. The extension to global results probably follow by standard techniques, but the question of _convergence_ as \(s\to\infty\) (and possibly \(\epsilon\to 0\) simultaneously) is subtle. To understand this dynamics, one could attempt backward error analysis on the continuous version of the system (6) below. I expect this would yield a connection to the non-linear parabolic equation studied by Berman [3], whose dynamics, he proved, approximates the dynamics of \(\boldsymbol{f}^{(k)}\mapsto\boldsymbol{f}^{(k+1)}\) below. Before discussing the geometric origin of the equations (1), let us see how the standard (discrete) Sinkhorn algorithm arise from the equations (1). Let \(\rho_{0}\) and \(\rho_{1}\) be two atomic measures on \(M\) \[\rho_{0}=\sum_{i=1}^{N}p_{i}\delta_{x_{i}},\qquad\rho_{1}=\sum_{i=1}^{N}q_{i} \delta_{y_{i}}, \tag{3}\] where \(x_{i},y_{i}\in\mathbb{R}^{n}\) and the weights \(p_{i}>0\) and \(q_{i}>0\) fulfill \(\sum_{i}p_{i}=\sum_{i}q_{i}\). If we change variables \(a(x,s)=\exp(f(x,s))\) and \(b(x,s)=\exp(g(x,s))\) then the integral equations (1) become \[\begin{cases}\partial_{s}a=-a\log\left(\frac{ae^{\varepsilon\Delta}b}{\rho_{0 }}\right),\\ \partial_{s}b=-b\log\left(\frac{be^{\varepsilon\Delta}a}{\rho_{1}}\right). \end{cases} \tag{4}\] For this to make sense for (3) we need \(a\ll\rho_{0}\) and \(b\ll\rho_{1}\), i.e., \(a=\sum_{i}a_{i}\delta_{x_{i}}\) and \(b=\sum_{i}b_{i}\delta_{y_{i}}\) for some \(a_{i}\geq 0\) and \(b_{i}\geq 0\). Since the heat flow convolves a delta distribution to a smooth function, it follows if \(\varepsilon>0\) that \[\left(\frac{a\mathrm{e}^{\varepsilon\Delta}b}{\rho_{0}}\right)(x_{i})=\frac{a_{i }}{q_{i}}\sum_{j=1}^{N}K_{\varepsilon}(x_{i},y_{j})b_{j}=\frac{a_{i}}{q_{i}} \boldsymbol{K}_{\varepsilon}\boldsymbol{b},\] where \(\boldsymbol{b}=(b_{1},\ldots,b_{N})\) and \(\boldsymbol{K}_{\varepsilon}\) is the \(N\times N\) matrix with entries \((\boldsymbol{K}_{\varepsilon})_{ij}=K_{\varepsilon}(x_{i},y_{i})\). Likewise, \[\left(\frac{b\mathrm{e}^{\varepsilon\Delta}a}{\rho_{1}}\right)(y_{i})=\frac{b _{i}}{p_{i}}\boldsymbol{K}_{\varepsilon}^{\top}\boldsymbol{a},\] where \(\boldsymbol{a}=(a_{1},\ldots,a_{N})\) and the heat kernel property \(K_{\varepsilon}(y_{i},x_{j})=K_{\varepsilon}(x_{j},y_{i})\) is used. 
Expressed in \(\boldsymbol{a}\) and \(\boldsymbol{b}\) the equations (4) now become an ordinary differential equation (ODE) on \(M^{2N}\) \[\begin{cases}\partial_{s}\boldsymbol{a}=-\boldsymbol{a}*\log\left(\frac{ \boldsymbol{a}}{\boldsymbol{q}}*\boldsymbol{K}_{\varepsilon}\boldsymbol{b} \right),\\ \partial_{s}\boldsymbol{b}=-\boldsymbol{b}*\log\left(\frac{ \boldsymbol{b}}{\boldsymbol{p}}*\boldsymbol{K}_{\varepsilon}^{\top} \boldsymbol{a}\right),\end{cases}\] where \(*\) denotes element-wise multiplication and \(\log\) and divisions are also applied element-wise. Notice that this ODE loses its meaning (or rather its connection to (4)) if \(\varepsilon=0\). Moving back to the coordinates \(\boldsymbol{f}=\log\boldsymbol{a}\) and \(\boldsymbol{g}=\log\boldsymbol{b}\) yields the system \[\begin{cases}\partial_{s}\boldsymbol{f}=-\boldsymbol{f}-\log\left(\boldsymbol {K}_{\varepsilon}\exp\boldsymbol{g}\right)+\log\boldsymbol{q},\\ \partial_{s}\boldsymbol{g}=-\boldsymbol{g}-\log\left(\boldsymbol{K}_{ \varepsilon}^{\top}\exp\boldsymbol{f}\right)+\log\boldsymbol{p},\end{cases} \tag{5}\] To proceed we need a time discretization. For this, apply to (5) the Trotter splitting (_cf._[13]) combined with the forward Euler method to obtain \[\begin{cases}\boldsymbol{f}^{(k+1)}=(1-h)\boldsymbol{f}^{(k)}-h\log\left( \boldsymbol{K}_{\varepsilon}\exp\boldsymbol{g}^{(k)}\right)+h\log\boldsymbol{ q},\\ \boldsymbol{g}^{(k+1)}=(1-h)\boldsymbol{g}^{(k)}-h\log\left(\boldsymbol{K}_{ \varepsilon}^{\top}\exp\boldsymbol{f}^{(k+1)}\right)+h\log\boldsymbol{p}, \end{cases} \tag{6}\] where \(h>0\) is the time-step length. **Definition 1**.: The _discrete Sinkhorn algorithm on \(M\)_ is given by the time discretization (6) with \(h=1\). **Remark 2**.: For \(M=\mathbb{R}^{n}\) the heat kernel is given by (2). Expressing (6) in the variables \(\boldsymbol{a}\) and \(\boldsymbol{b}\) and taking \(h=1\) we recover the Euclidean Sinkhorn algorithm as presented in the literature \[\boldsymbol{a}^{(k+1)}=\frac{\boldsymbol{q}}{\boldsymbol{K}_{\varepsilon} \boldsymbol{b}^{(k)}},\qquad\boldsymbol{b}^{(k+1)}=\frac{\boldsymbol{p}}{ \boldsymbol{K}_{\varepsilon}^{\top}\boldsymbol{a}^{(k+1)}}. \tag{7}\] **Remark 3**.: If we take \(h\neq 1\) then (6) expressed in \(\boldsymbol{a}\) and \(\boldsymbol{b}\) become \[\boldsymbol{a}^{(k+1)}=\left(\boldsymbol{a}^{(k)}\right)^{1-h}\left(\frac{ \boldsymbol{q}}{\boldsymbol{K}_{\varepsilon}\boldsymbol{b}^{(k)}}\right)^{h}, \qquad\boldsymbol{b}^{(k+1)}=\left(\boldsymbol{b}^{(k)}\right)^{1-h}\left( \frac{\boldsymbol{p}}{\boldsymbol{K}_{\varepsilon}^{\top}\boldsymbol{a}^{(k+1 )}}\right)^{h}.\] For \(h>1\) this is the so-called _over-relaxed Sinkhorn algorithm_ which convergences faster than (7) (see [22, 17, 11]). Indeed, the faster convergence is readily understood by applying linear stability theory to (6) in the vicinity \(\varepsilon\to 0^{+}\). From a numerical ODE perspective this is also expected: larger time steps typically yield faster marching toward asymptotics provided that the time step is small enough for the method to remain stable. Indeed, Figure 1 shows an almost perfect match between the stability region of the Trotter-Euler method and the convergence of the time-step iterations (6). An interesting venue could be to look for other methods, with other numerical damping properties. 
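To make the discrete iteration above concrete, here is a small NumPy sketch of the Euclidean update (7), with the step length \(h\) of Remark 3 exposed so that \(h=1\) gives the standard Sinkhorn algorithm and \(h>1\) the over-relaxed variant. The variable names and the toy point clouds are mine; this illustrates the formulas rather than reproducing any reference implementation.

```python
import numpy as np

def gauss_kernel(x, y, eps):
    # Heat kernel on R^n, equation (2): (4*pi*eps)^(-n/2) * exp(-|x - y|^2 / (4*eps)).
    n = x.shape[1]
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (4 * eps)) / (4 * np.pi * eps) ** (n / 2)

def sinkhorn(x, y, p, q, eps=1e-2, h=1.0, iters=500):
    # Iteration (7) with the relaxation exponent h of Remark 3 (h = 1: standard Sinkhorn).
    K = gauss_kernel(x, y, eps)
    a, b = np.ones(len(x)), np.ones(len(y))
    for _ in range(iters):
        a = a ** (1 - h) * (q / (K @ b)) ** h
        b = b ** (1 - h) * (p / (K.T @ a)) ** h
    return a, b, K

# Two clouds of 50 points in R^2 with uniform weights.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 1.0
p = q = np.full(50, 1 / 50)
a, b, K = sinkhorn(x, y, p, q, eps=5e-2, h=1.0)
coupling = a[:, None] * K * b[None, :]
# Marginal errors of the entropic coupling; both should be near zero at convergence.
print(np.abs(coupling.sum(1) - q).max(), np.abs(coupling.sum(0) - p).max())
```

Taking \(h\) between 1 and 2 typically reduces the number of iterations needed, consistent with the stability discussion around Figure 1.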
**Remark 4**.: Notice that if \(C=\int_{M}ae^{\varepsilon\Delta}b\) then, along the flow (4), \[\frac{d}{ds}C=-\int_{M}ae^{\varepsilon\Delta}b\log\left(\frac{ae^{\varepsilon \Delta}b}{\rho_{0}}\right)-\int_{M}be^{\varepsilon\Delta}a\log\left(\frac{be^{ \varepsilon\Delta}a}{\rho_{1}}\right).\] **Acknowledgement**.: This work was supported by the Swedish Research Council (grant number 2022-03453) and the Knut and Alice Wallenberg Foundation (grant number WAF2019.0201). I would like to thank Ana Bela Cruzeiro, Christian Leonard, and Jean-Claude Zambrini, for helpful and intriguing discussions, and especially for pointing me to the "forgotten" work of Beurling. ## 2. Dynamical formulation of optimal transport This section reviews the dynamical (or fluid) formulation of smooth optimal transport problems as advocated by Benamou and Brenier [2]. See [8] for details on the notation and more information about infinite-dimensional manifold and Riemannian structures. For simplicity I assume from here on that \(M\) is a compact Riemannian manifold without boundary. The non-compact or boundary cases can be handled by introducing suitable decay or boundary conditions. Let \(\operatorname{Dens}(M)=\{\rho\in C^{\infty}(M)\mid\rho(x)>0,\,\int_{M}\rho=1\}\) denote the space of smooth probability densities. It has the structure of a smooth Frechet manifold [7]. Its tangent bundle is given by tuples \((\rho,\dot{\rho})\) where \(\dot{\rho}\in C^{\infty}_{0}(M)=\{f\in C^{\infty}(M)\mid\int_{M}^{\cdot}f=0\}\). Otto [16] suggested the following (weak) Riemannian metric on \(\operatorname{Dens}(M)\) \[\left\langle\dot{\rho},\dot{\rho}\right\rangle_{\rho}=\int_{M}|\nabla S|^{2} \rho,\qquad\dot{\rho}+\operatorname{div}(\rho\nabla S)=0. \tag{8}\] The beauty of this metric is that the distance it induces is exactly the \(L^{2}\)-Wasserstein distance, and the geodesic two-point boundary value problem corresponds to the optimal transport problem (in the smooth category). See [16, 9, 14] for details. In summary, \(L^{2}\) optimal transport is a problem of Lagrangian (variational) mechanics on \(\operatorname{Dens}(M)\): find a path \([0,1]\ni t\mapsto\rho_{t}\in\operatorname{Dens}(M)\) with fixed end-points \(\rho_{0}\) and \(\rho_{1}\) that extremizes (in this case minimizes) the action functional \[A(\rho_{t})=\int_{0}^{1}L\Big{(}\rho_{t},\frac{\partial\rho_{t}}{\partial t} \Big{)}\,\mathrm{d}t, \tag{9}\] for the kinetic energy Lagrangian \(L(\rho,\dot{\rho})=\frac{1}{2}\left\langle\dot{\rho},\dot{\rho}\right\rangle_ {\rho}\). The optimal transport map is then recovered as the time-one flow map for the time dependent vector field on \(M\) given by \(v(x,t)=\nabla S_{t}(x)\). The dynamical formulation is now obtain by replacing the variational problem for the action (9) with an equivalent constrained variational problem on \(\operatorname{Dens}(M)\times\Omega^{1}(M)\) (densities times one-forms) for the action \[\bar{A}(\rho_{t},m_{t})=\frac{1}{2}\int_{0}^{1}\langle\!\langle m_{t}/\rho_{t },m_{t}\rangle\!\rangle_{L^{2}}\,\mathrm{d}t\] subject to the constraint \[\dot{\rho}_{t}+\operatorname{div}(m_{t})=0.\] This is a convex optimization problem since \(\bar{A}\) is convex and the constraint is linear (see [2] for details). Notice that the convexity of \(\bar{A}\) is with respect to the linear structure of \(C^{\infty}(M)\times\Omega^{1}(M)\), which is different from the non-linear convexity notion for the Levi-Civita connection associated with the Riemannian metric (8). Figure 1. 
Stability analysis (upper figure) for the Trotter-Euler splitting method applied to the linear test equation \(\dot{x}=-x-(1-\delta)y,\ \dot{y}=-(1-\delta)x-y\) for \(\delta=10^{-2}\), roughly corresponding to the system (6) with \(\varepsilon\simeq\delta\). The method is stable whenever the magnitude of the corresponding two eigenvalues for the flow map are bounded by \(1\). The lower figure shows, for \(\varepsilon=10^{-2}\) and various step-sizes marked in the stability plot, how the \(L^{2}\) norm of the right-hand side of (5) (the error) decreases with the number of time-steps taken with the Trotter-Euler method (6). The convergence plot matches the theoretical stability plot well. At \(h=1\) we observe an initially almost vertical decrease of the error due to the small eigenvalue near zero, thereafter the dynamics takes place in the eigenspace of the other eigenvalue close to one and here the convergence is slow. The optimal step-size is where the two eigenvalues meet at about \(h\approx 1.75\). In the interval \(h\in[1.75,2)\) we expect an almost linear decrease of the error, as the two eigenvalues are the same. For \(h\geq 2\) the method is not (linearly) stable. ## 3. Entropic regularization and the Madelung transform The aim of this section is to introduce entropic regularization of the dynamical formulation and show how it simplifies the problem via the imaginary Madelung (or Hopf-Cole) transform. This transformation is the analog, in the dynamical formulation of smooth OT, to Sinkhorn's theorem applied to the coupling matrix in discrete OT. Most of the results presented in this section are available in the papers by Leonard [12] and by Leger [10]. Let me first introduce two central functionals from information theory. The first is, of course, _entropy_, i.e., the functional on \(\mathrm{Dens}(M)\) given by \[E(\rho)=\int_{M}\rho\log\rho.\] Its cousin, the _Fisher information functional_, is given by \[I(\rho)=\frac{1}{2}\int_{M}\frac{|\nabla\rho|^{2}}{\rho}.\] There are various ways to describe the relation between \(E(\rho)\) and \(I(\rho)\): * \(I(\rho)\) is the trace of the Hessian (with respect to (8)) of \(E(\rho)\), or * \(I(\rho)\) is the rate of change of entropy along the heat flow \(\dot{\rho}=\Delta\rho\). In our context, \(I(\rho)\) plays the role of negative potential energy in the Lagrangian \[L(\rho,\dot{\rho})=\frac{1}{2}\left\langle\dot{\rho},\dot{\rho}\right\rangle_{ \rho}+\varepsilon^{2}I(\rho). \tag{10}\] The corresponding action functional \[A(\rho_{t})=\int_{0}^{1}\left(\frac{1}{2}\left\langle\partial_{t}\rho_{t}, \partial_{t}\rho_{t}\right\rangle_{\rho}+\varepsilon^{2}I(\rho_{t})\right) \mathrm{d}t \tag{11}\] is called the _entropic regularization_ of (9). Notice that the parameter \(\varepsilon\) has physical dimension as thermal diffusivity. This is not a coincidence. Indeed, as we shall soon see this regularization significantly simplifies the variational problem (imaginary \(\varepsilon\) also works!) by means of heat flows with thermal diffusivity \(\varepsilon\). 
Before that, however, let me just point out that the dynamical formulation of Benamou and Brenier [2] is still applicable: if we change variables to \((\rho,m)\) as before we obtain again a convex optimization problem (since the functional \(I(\rho)\) is convex) \[\min_{\rho_{t},m_{t}}\int_{0}^{1}\left(\frac{1}{2}\langle\!\langle m_{t}/\rho _{t},m_{t}\rangle\!\rangle_{L^{2}}+\varepsilon^{2}I(\rho_{t})\right)\mathrm{ d}t\quad\text{subject to}\quad\partial_{t}\rho_{t}+\mathrm{div}(m_{t})=0.\] In a suitable setting one can apply convex analysis to obtain existence and uniqueness (_cf._[2]). So far I used Lagrangian mechanics to describe the dynamical formulation. Let me now switch to the Hamiltonian view-point. The Legendre transform of the Lagrangian (10) is \[T\mathrm{Dens}(M)\ni(\rho,\dot{\rho})\mapsto\left(\rho,\frac{\delta L}{\delta \dot{\rho}}\right)=(\rho,S)\in T^{*}\mathrm{Dens}(M) \tag{12}\] where the co-tangent space \(T^{*}_{\rho}\mathrm{Dens}(M)\) is given by co-sets of smooth functions defined up to addition of constants (if \(M\) is connected as I assume). Notice that the \(S\) in (12) is exactly the \(S\) in (8) (defined only up to a constant). The corresponding Hamiltonian on \(T^{*}\mathrm{Dens}(M)\) is \[H(\rho,S)=\frac{1}{2}\left\langle S,S\right\rangle_{*\rho}-\varepsilon^{2}I( \rho), \tag{13}\] where the _dual metric_\(\left\langle\cdot,\cdot\right\rangle_{*}\) is given by \[\left\langle S,S\right\rangle_{*\rho}=\int_{M}|\nabla S|^{2}\rho.\] Before I continue, consider a finite-dimensional analog of the Hamiltonian (13): on \(T^{*}\mathbb{R}\) take \(H(q,p)=p^{2}/2-\varepsilon^{2}q^{2}/2\). Of course, the analogy is \(p^{2}/2\leftrightarrow\left\langle S,S\right\rangle_{*\rho}\) and \(q^{2}/2\leftrightarrow I(\rho)\). The equations of motion are \[\dot{q}=p,\quad\dot{p}=\varepsilon^{2}q.\] This system describes a harmonic oscillator when \(\varepsilon\) is _imaginary_. For real \(\varepsilon\), the dynamics is not oscillatory. Indeed, if we change variables to \(x=p-\varepsilon q\) and \(y=p+\varepsilon q\) we obtain the Hamiltonian \(H(x,y)=xy/2\) with dynamics \[\dot{x}=x/2,\quad\dot{y}=-y/2.\] These are two uncoupled equations where \(x\) is growing and \(y\) is decaying exponentially. Thus, we can expect this type of dynamics also for (13). Indeed, I shall now introduce a change of coordinates for \((\rho,S)\), analogous to the change of coordinates \((q,p)\iff(x,y)\). **Definition 2**.: The _imaginary Madelung transform1_ is given by Footnote 1: In the standard Madelung transform \(\varepsilon=\mathrm{i}\hbar\) which is why I say ‘imaginary’ here. \[T^{*}\mathrm{Dens}(M)\ni(\rho,S)\longmapsto\left(\underbrace{\sqrt{\rho \mathrm{e}^{S/\varepsilon}}}_{a},\underbrace{\sqrt{\rho\mathrm{e}^{-S/ \varepsilon}}}_{b}\right),\] where \(a,\bar{b}\in C^{\infty}(M)\) are defined up to \((\mathrm{e}^{\sigma}a,\mathrm{e}^{-\sigma}\bar{b})\) for \(\sigma\in\mathbb{R}\) and should fulfill \(\int_{M}a\bar{b}=1\). The individual component \(a\) is known as the _Hopf-Cole transform_. This transformation is a symplectomorphism (see [10] for \(\varepsilon\in\mathbb{R}\) and [23, 8] for \(\varepsilon\in\mathrm{i}\mathbb{R}\)). The inverse transform is \(\rho=a\bar{b}\) and \(S=\varepsilon\log(a/\bar{b})\). 
The Hamiltonian (13) expressed in the new canonical coordinates \((a,\bar{b})\) thus become \[\begin{split} H(a,\bar{b})&=\int_{M}\frac{ \epsilon^{2}}{2}\left(\left|\frac{\nabla a}{a}-\frac{\nabla\bar{b}}{\bar{b}} \right|^{2}-\left|\frac{\nabla a}{a}+\frac{\nabla\bar{b}}{\bar{b}}\right|^{2} \right)a\bar{b}\\ &=-\varepsilon^{2}\int_{M}\nabla a\cdot\nabla\bar{b}= \varepsilon^{2}\int_{M}a\Delta\bar{b}\,.\end{split} \tag{14}\] Notice two things: (i) this Hamiltonian is quadratic, and (ii) it is of the form in the toy example \(H(x,y)\) above. Hamilton's equations of motion for (14) are \[\dot{a}=\varepsilon\Delta a,\qquad\dot{\bar{b}}=-\varepsilon\Delta\bar{b}\,.\] Again, two decoupled equations as in the toy example, but now given by forward and backward heat flows with thermal diffusivity \(\varepsilon\). **Remark 5**.: To be more precise, one should also take into account that \((a,\bar{b})\) is a co-set, so the general form of the equation should be \[\dot{a}=\varepsilon\Delta a+\sigma a,\qquad\dot{\bar{b}}=-\varepsilon\Delta \bar{b}-\sigma\bar{b}\,,\] where \(t\mapsto\sigma(t)\in\mathbb{R}\) is arbitrary. However, since the scaling is arbitrary we can always represent the \((a,\bar{b})\) co-set by the element for which \(\sigma=0\). Notice also that the constraint \(\int_{M}a\bar{b}=1\) is preserved by the flow, as a short calculation shows. It is, of course, not good to work with backward heat flows, but there is an easy fix. Let \(b(x,t)\coloneqq\bar{b}(x,1-t)\). Then \(b\) fulfills the forward heat equation (but backwards in time). In the variables \(a_{t}=a(\cdot,t)\) and \(b_{t}=b(\cdot,t)\) the solution to the variational problem for the action (11) must therefore fulfill \[\begin{cases}\partial_{t}a_{t}=\varepsilon\Delta a_{t},&a_{0}b_{1}=\rho_{0} \\ \partial_{t}b_{t}=\varepsilon\Delta b_{t},&a_{1}b_{0}=\rho_{1}.\end{cases}\] The two equations are coupled only through mixed boundary conditions at \(t=0\) and \(t=1\). With \(a\coloneqq a_{0}\) and \(b\coloneqq b_{0}\) these equations can be written in terms of the heat semigroup as \[a\mathrm{e}^{\varepsilon\Delta}b=\rho_{0},\quad b\mathrm{e}^{\varepsilon \Delta}a=\rho_{1}. \tag{15}\] As you can see, a solution to (15) is stationary point of the integral equations (4). Indeed, one should think of (4) as a gradient-type flow for solving the equations (15), as I shall now elaborate on. If we take only the first part of the equations (15) we obtain the equation \[\partial_{s}a=-a\log\left(\frac{a\mathrm{e}^{\varepsilon\Delta}b}{\rho_{0}} \right), \tag{16}\] with \(b\) now as a fixed parameter. Let \(\sigma=a\mathrm{e}^{\varepsilon\Delta}b\), so that \(\partial_{s}\sigma=\mathrm{e}^{\varepsilon\Delta}b\partial_{s}a\) (since \(b\) is considered constant). The Fisher-Rao metric for \(\partial_{s}\sigma\) is given by \[\langle\partial_{s}\sigma,\partial_{s}\sigma\rangle_{\sigma}=\int_{M}\frac{( \partial_{s}\sigma)^{2}}{\sigma}=\int_{M}\frac{(\partial_{s}a)^{2}\mathrm{e}^ {\varepsilon\Delta}b}{a}. \tag{17}\] Furthermore, the entropy of \(\sigma\) relative to \(\rho_{0}\) is given by \[H_{\rho_{0}}(\sigma)=\int_{M}\sigma\log\left(\frac{\sigma}{\rho_{0}}\right).\] It is straightforward to check that the Riemannian gradient flow for the functional \(F_{1}(a)=H_{\rho_{0}}(a\mathrm{e}^{\varepsilon\Delta}b)\) with respect to the metric (17) is given by equation (16). Likewise, the equation for \(b\) with fixed \(a\) is the Fisher-Rao gradient flow of \(F_{2}(b)=H_{\rho_{1}}(b\mathrm{e}^{\varepsilon\Delta}a)\). 
Consequently, the Sinkhorn algorithm is the composition of steps for the first and second gradient flows. The question of assigning one Riemannian gradient structure to the entire flow is more intricate, since the functionals \(F_{1}\) and \(F_{2}\) depend on both \(a\) and \(b\). ## Appendix A Beurling's "forgotten" result Motivated by Einstein's work on Brownian motion governed in law by the heat flow, Schrodinger [20] arrived at the equations (15) by studying the most likely stochastic path for a system of particles from initial distribution \(\rho_{0}\) to final distribution \(\rho_{1}\). He gave physical arguments for why the problem should have a solution, but mathematically it was left open. S. Bernstein then addressed it at the 1932 International Congress of Mathematics in Zurich. A full resolution, however, did not come until 1960 through the work of Beurling [4]. The objective was, in Beurling's own words, "to derive general results concerning systems like (15) and, in particular, to disclose the inherent and rather simple nature of the problem." Beurling certainly succeeded in doing so. But to his astonishment (and slight annoyance) no-one took notice. In fact, Schrodinger's bridge problem was itself largely forgotten among physicists and mathematicians. Both Schrodinger's problem and the solution by Beurling were "rediscovered" and advocated by Zambrini [24] as he was working with an alternative version of Nelson's framework for _stochastic mechanics_ (cf. [15]). Beurling relaxed the problem (15) by replacing the functions \(\rho_{0},\rho_{1},a,b\) by measures \(\mu_{0},\mu_{1},\alpha,\beta\) on \(M\). By multiplying the right-hand sides, one obtains the product measure \(\mu\equiv\mu_{0}\wedge\mu_{1}\) on \(M\times M\). For the left-hand side, Beurling went on as follows. Any measure \(\nu\) on \(M\times M\) gives rise to the generalized marginal measures \(\nu_{0}\) and \(\nu_{1}\) defined for all \(h\in C_{0}(M)\) by \[\int_{M}h\,d\nu_{0}=\int_{M\times M}K_{\epsilon}(x,y)h(x)\,d\nu\quad\text{and} \quad\int_{M}h\,d\nu_{1}=\int_{M\times M}K_{\epsilon}(x,y)h(y)\,d\nu.\] Thus, we have a mapping from the space of measures to the space of product measures via the quadratic map \[T_{\epsilon}\colon\nu\mapsto\nu_{0}\wedge\nu_{1}.\] Beurling noticed that the generalized version of Schrodinger's problem in equation (15) can be written \[T_{\epsilon}(\alpha\wedge\beta)=\mu_{0}\wedge\mu_{1}. \tag{18}\] Let \(\mathcal{M}\) denote the space of Radon measures on \(M\times M\) (i.e., the continuous dual of compactly supported continuous functions on \(M\times M\)) and \(\mathcal{P}\subset\mathcal{M}\) the sub-set of product measures. Further, let \(\mathcal{M}^{+}\subset\mathcal{M}\) and \(\mathcal{P}^{+}\subset\mathcal{P}\) denote the corresponding sub-sets of non-negative measures. Since \(K_{\epsilon}>0\), it follows that \[T_{\epsilon}\colon\mathcal{M}^{+}\to\mathcal{P}^{+}. \tag{19}\] Let me now state Beurling's result adapted to the setting here.2 Footnote 2: The result proved by Beurling is much more general: it solves the problem for an \(n\)-fold product measure on the Cartesian product of \(n\) locally compact Hausdorff spaces. **Theorem 1** (Beurling [4], Thm. I).: _Let \(M\) be compact (possibly with boundary) and \(\epsilon>0\). 
Then the mapping (19) restricted to \(\mathcal{P}^{+}\) is an automorphism (in the strong topology of \(\mathcal{M}\))._ From this result, a solution to Schrodinger's problem (18) in the category of measures is obtained as \(\alpha\wedge\beta=T_{\epsilon}^{-1}(\mu_{0}\wedge\mu_{1})\). Furthermore, the solution \(\alpha\wedge\beta\) depends continuously (in operator norm) on the data \(\mu_{0}\wedge\mu_{1}\). Notice that, whereas \(\alpha\wedge\beta\) is unique as a product measure, the components \(\alpha,\beta\) themselves are only defined up to multiplication \(e^{f}\alpha,e^{-f}\beta\) by an arbitrary function \(f\) on \(M\). Thus, to work with product measures naturally captures the non-uniqueness pointed out in Remark 5 above. The condition that \(M\) is compact is used to obtain a positive lower and upper bound on the kernel \(K_{\epsilon}\) (these are, in fact, the only conditions that Beurling's proof imposes on \(K_{\epsilon}\)). Such bounds are necessary for the map \(T_{\epsilon}\) to be an automorphism (i.e., continuous with continuous inverse). Beurling also gave a second, weaker result, which can be applied to the case of non-compact \(M\). **Theorem 2** (Beurling [4], Thm. II).: _Let \(\epsilon>0\) and let \(\mu_{0}\wedge\mu_{1}\in\mathcal{P}^{+}\) be such that_ \[\Big{|}\int_{M}\int_{M}\log K_{\epsilon}\,d\mu_{0}\,d\mu_{1}\Big{|}<\infty.\] _Then there exists a unique non-negative product measure \(\nu\) on \(M\times M\) that solves the equation_ \[T_{\epsilon}(\nu)=\mu_{0}\wedge\mu_{1}.\] Beurling's results can be viewed as a generalization from matrices to measures of Sinkhorn's theorem [21] on doubly stochastic matrices, only it came four years _before_ Sinkhorn's result. I find it remarkable that Beurling came up with these results independently of Kantorovich's formulation of optimal transport in terms of measures on a product space (which came to general knowledge in the West in the late 1960's).
The Sinkhorn algorithm is a numerical method for solving optimal transport problems. Here, I give a brief survey of this algorithm, with a particular focus on its geometric origin: it can be viewed as a discretization, by standard methods, of a non-linear integral equation. In the appendix, I also provide a short summary of an early result of Beurling on product measures.
2307.16405
Causal-learn: Causal Discovery in Python
Causal discovery aims at revealing causal relations from observational data, which is a fundamental task in science and engineering. We describe $\textit{causal-learn}$, an open-source Python library for causal discovery. This library focuses on bringing a comprehensive collection of causal discovery methods to both practitioners and researchers. It provides easy-to-use APIs for non-specialists, modular building blocks for developers, detailed documentation for learners, and comprehensive methods for all. Different from previous packages in R or Java, $\textit{causal-learn}$ is fully developed in Python, which could be more in tune with the recent preference shift in programming languages within related communities. The library is available at https://github.com/py-why/causal-learn.
Yujia Zheng, Biwei Huang, Wei Chen, Joseph Ramsey, Mingming Gong, Ruichu Cai, Shohei Shimizu, Peter Spirtes, Kun Zhang
2023-07-31T05:00:35
http://arxiv.org/abs/2307.16405v1
# Causal-learn: Causal Discovery in Python ###### Abstract Causal discovery aims at revealing causal relations from observational data, which is a fundamental task in science and engineering. We describe _causal-learn_, an open-source Python library for causal discovery. This library focuses on bringing a comprehensive collection of causal discovery methods to both practitioners and researchers. It provides easy-to-use APIs for non-specialists, modular building blocks for developers, detailed documentation for learners, and comprehensive methods for all. Different from previous packages in R or Java, _causal-learn_ is fully developed in Python, which could be more in tune with the recent preference shift in programming languages within related communities. The library is available at [https://github.com/py-why/causal-learn](https://github.com/py-why/causal-learn). Causal Discovery in Python Causal-learn: Causal Discovery in Python ## 1 Introduction A traditional way to uncover causal relationships is to resort to interventions or randomized experiments, which are often impractical due to their cost or logistical limitations. Hence, the importance of causal discovery, i.e., the process of revealing causal information through the analysis of purely observational data, has become increasingly apparent across diverse disciplines, including genomics, ecology, neuroscience, and epidemiology, among others (Glymour et al., 2019). For instance, in genomics, causal discovery has been instrumental in understanding the relationships between certain genes and diseases. Researchers might not have the resources to manipulate gene expressions, but they can analyze observational data, which are usually widely available, such as genomic databases, to uncover potential causal relationships. This can lead to breakthroughs in disease treatment and prevention strategies without the cost of traditional experimentation. Current strategies for causal discovery can be broadly classified into constraint-based, score-based, functional causal models-based, and methods that recover latent variables. Constraint-based and score-based methods have been employed for causal discovery since the 1990s, using conditional independence relationships in data to uncover information about the underlying causal structure. Algorithms such as Peter-Clark (PC) (Spirtes et al., 2000) and Fast Causal Inference (FCI) (Spirtes et al., 1995) are popular, with PC assuming causal sufficiency and FCI handling latent confounders. In cases without latent confounders, score-based algorithms like the Greedy Equivalence Search (GES) (Chickering, 2002) aim to find the causal structure by optimizing a score function. These methods provide asymptotically correct results, accommodating various data distributions and functional relations but do not necessarily provide complete causal information as they usually output Markov equivalence classes of causal structures (graphs within the same Markov equivalence class have the same conditional independence relations among the variables). On the other hand, algorithms based on Functional Causal Models (FCMs) have exhibited the ability to distinguish between different Directed Acyclic Graphs (DAGs) within the same equivalence class, thanks to additional assumptions on the data distribution beyond conditional independence relations. 
An FCM represents the effect variable as a function of the direct causes and a noise term; it renders causal direction identifiable due to the independence condition between the noise and cause: one can show that under appropriate assumptions on the functional model class and distributions of the involved variables, the estimated noise cannot be independent from the hypothetical cause in the reverse direction (Shimizu et al., 2006; Hoyer et al., 2008; Zhang and Hyvarinen, 2009). More recently, the Generalized Independent Noise condition (GIN) (Xie et al., 2020) has demonstrated its potential in learning hidden causal variables and their relations in the linear, non-Gaussian case. To equip both practitioners and researchers with computational tools, several packages have been developed for or can be adapted for causal discovery. The Java library TETRAD (Glymour and Scheines, 1986; Scheines et al., 1998; Ramsey et al., 2018) contains a variety of well-tested causal discovery algorithms and has been continuously developed and maintained for over 40 years; R packages pcalg (Kalisch et al., 2012) and bnlearn (Scutari, 2010) also include some classical constraint-based and score-based methods such as PC and GES. However, these tools are based on Java or R, which may not align with the recent trend favoring Python in certain communities, particularly within machine learning. While there are Python wrappers available for these packages (e.g., py-tetrad (Andrew and Ramsey, 2023)/py-causal (Wongchokprasitti et al., 2019) for TETRAD, and Causal Discovery Toolbox (Kalainathan et al., 2020) for pcalg and bnlearn), they still rely on Java or R. This dependency can complicate deployment and does not cater directly to Python users seeking to develop their own methods based on an existing codebase. Thus, there is a pronounced need for a Python package that covers representative causal discovery algorithms across all primary categories. Such a tool would significantly benefit a diverse range of users by providing access to both classical methods and the latest advancements in causal discovery. In this paper, we describe _causal-learn_, an open-source python library for causal discovery. The library incorporates an extensive range of causal discovery algorithms, providing accessible APIs and thorough documentation to cater to a diversity of practical requirements and data assumptions. Moreover, it provides independent modules for specific functionalities, such as (conditional) independence tests, score functions, graph operations, and evaluation metrics, thereby facilitating custom needs and fostering the development of user-defined methods. An essential attribute of causal-learn is its full implementation in Python, eliminating dependencies on any other programming languages. As such, users are not required to have expertise in Java or R, enhancing the ease of integration within the enormous and growing Python ecosystem and promoting seamless utilization for a range of computational and scripting tasks. With causal-learn, modification and extensions based on the existing implementation of causal discovery methods also become plausible for developers and researchers who may not be familiar with Java or R, which could significantly accelerate the progress in related fields by lowering the threshold of the integration of causality into various pipelines. 
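Before moving on to the design of the library, the FCM-based identifiability idea recalled at the beginning of this introduction can be illustrated with a tiny simulation: in a linear non-Gaussian model the regression residual is independent of the cause only in the true causal direction. The snippet below is a toy illustration written for this description, not code from causal-learn, and the crude fourth-moment check merely stands in for a proper independence test such as KCI or HSIC.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
x = rng.uniform(-1, 1, n)            # cause, non-Gaussian
e = rng.uniform(-1, 1, n)            # independent non-Gaussian noise
y = 2.0 * x + e                      # linear non-Gaussian FCM: x -> y

def residual(effect, cause):
    # Least-squares regression of `effect` on `cause`, returning the residual.
    beta = np.cov(effect, cause)[0, 1] / np.var(cause)
    return effect - beta * cause

def dependence_proxy(r, c):
    # Crude dependence proxy: correlation between squared residual and squared regressor.
    return abs(np.corrcoef(r ** 2, c ** 2)[0, 1])

print("x -> y (correct):", dependence_proxy(residual(y, x), x))   # close to 0
print("y -> x (reverse):", dependence_proxy(residual(x, y), y))   # clearly nonzero
```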
## 2 Design The design philosophy of causal-learn is centered around building an open-source, modular, easily extensible and embeddable Python platform for learning causality from data and making use of causality for various purposes. Due to the different goals, assumptions, and techniques between causal learning and traditional learning tasks, newcomers to the field often find it hard to get a clear picture of the developments in modern causality research. Thus, we briefly introduce the algorithms and functionalities in causal-learn with a special focus on their use cases and suitable application scenarios. ### Search methods Causal-learn covers representative causal discovery methods across all major categories, with official implementations of most algorithms. We briefly introduce the methods as follows. It is worth noting that we are actively updating the library to incorporate the latest algorithms. * **Constraint-based causal discovery methods.** Current algorithms under this category are PC (Spirtes et al., 2000), FCI (Spirtes et al., 1995), and CD-NOD (Huang et al., 2020). PC is a classical and widely used algorithm with a consistency guarantee under independent and identically distributed (i.i.d.) sampling, assuming no latent confounders, the faithfulness assumption, and the causal Markov condition, and it has been extensively applied in many fields. By continuously applying (conditional) independence tests on subsets of variables of increasing size in a smart way, its search procedure returns a Markov Equivalence Class (MEC), whose graphical object consists of a mixture of directed and undirected edges, known as a Completed Partially Directed Acyclic Graph (CPDAG). PC is highly adaptable to various use cases, facilitated by the selection of an appropriate independence test; it can handle data with different assumptions by choosing, e.g., the Fisher-Z test (Fisher et al., 1921) for linear Gaussian data, the Chi/G-squared test (Tsamardinos et al., 2006) for discrete data, and the Kernel-based Conditional Independence (KCI) test (Zhang et al., 2011) for the nonparametric case (see the usage sketch after this list). Moreover, causal-learn provides an extension, Missing-Value PC (MV-PC) (Tu et al., 2019), to address issues of missing data. Furthermore, we have implemented FCI for causal structures that include hidden confounders (it indicates the possible existence of hidden confounders whenever the possibility cannot be excluded, but it cannot help determine possible relations among them), and causal discovery from nonstationary/heterogeneous data (CD-NOD). These constraint-based methods offer wide applicability as they can accommodate various types of data distributions and causal relations, provided that appropriate conditional independence testing methods are utilized. However, generally speaking, they may not be able to determine the complete causal graph uniquely and, accordingly, there usually exist some undirected edges in the returned CPDAGs. * **Score-based causal discovery methods.** Different from the search style of constraint-based methods, score-based methods find the causal structure by optimizing a properly defined score function. Greedy Equivalence Search (GES) (Chickering, 2002) is a well-known two-phase procedure that directly searches over the space of equivalence classes.
Similarly, exact search (e.g., A* (Yuan and Malone, 2013), Dynamic Programming (Silander and Myllymaki, 2006)) and permutation-based search (e.g., GRaSP (Lam et al., 2022)) apply different search strategies to return a set of the sparsest Directed Acyclic Graphs (DAGs) that contains the true model under assumptions strictly weaker than faithfulness. These score-based methods are versatile, able to accommodate a wide array of data and causal relations by choosing suitable score functions, such as BIC (Schwarz, 1978) for linear Gaussian data, BDeu (Buntine, 1991) for discrete data, and the Generalized Score (Huang et al., 2018) for the nonparametric case. The choice of score function can be conveniently adjusted as a hyperparameter. * **Causal discovery methods based on constrained functional causal models.** While constraint-based and score-based methods offer flexibility through the selection of an appropriate independence test or score function, they are limited to returning equivalence classes, yielding non-unique solutions where the causal direction between certain variable pairs remains indeterminate. In contrast, assuming specific Functional Causal Models (FCMs)-that is, functions in a particular functional class that specify how the effect is generated from its direct causes and noise-allows for the full determination of the causal structure, albeit at the cost of certain trade-offs. Causal-learn incorporates algorithms based on several FCM variants, capable of producing unique causal directions. Examples include the linear non-Gaussian acyclic model (LiNGAM) (Shimizu et al., 2006) and its variant DirectLiNGAM (Shimizu et al., 2011), which have been extensively applied to linear relations with non-Gaussian noise. VAR-LiNGAM (Hyvarinen et al., 2010) combines LiNGAM with vector autoregressive models (VAR) to estimate both time-delayed and instantaneous causal relations from time series. RCD (Maeda and Shimizu, 2020), an extension of LiNGAM, allows for hidden confounders, while CAM-UV (Maeda and Shimizu, 2021) further extends this to the nonlinear additive noise case. In addition, the additive noise model (ANM) (Hoyer et al., 2008) has been proven to be identifiable in the presence of nonlinearity and additive noise. Furthermore, we have also incorporated the post-nonlinear (PNL) causal model (Zhang and Hyvarinen, 2009), a highly general form (with LiNGAM and ANM as special cases) that has been demonstrated to be identifiable in the generic case, barring five specific situations described in (Zhang and Hyvarinen, 2009). * **Causal representation learning: Finding causally related hidden variables.** Latent variables play an instrumental role in a multitude of real-world scenarios, often acting as hidden confounders that influence observed variables. Unfortunately, most existing methods may fail to produce convincing results in cases with latent variables (confounders). In causal-learn, we implement the Generalized Independent Noise (GIN) condition (Xie et al., 2020) for estimating linear non-Gaussian latent variable causal models, which allows causal relationships between latent variables and multiple latent variables behind any two observed variables. This promises to improve the detection and understanding of the complex, often hidden, causal structures that govern real-world phenomena. Besides, causal-learn also has Granger causality (Granger, 1969, 1980) implemented for statistical but not causal1 time series analysis.
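To make the choices above concrete, the following minimal sketch shows how the three families can be invoked from Python. It is illustrative rather than definitive: the import paths and argument names (e.g., `indep_test="kci"`, `score_func="local_score_BIC"`) follow the causal-learn documentation at the time of writing and should be checked against the current release, and `data.txt` is a hypothetical whitespace-separated sample-by-variable matrix.

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc
from causallearn.search.ScoreBased.GES import ges
from causallearn.search.FCMBased import lingam

# data: an (n_samples, n_variables) array; "data.txt" is a placeholder file name.
data = np.loadtxt("data.txt")

# Constraint-based: PC with a kernel-based CI test for the nonparametric case.
cg = pc(data, alpha=0.05, indep_test="kci")

# Score-based: GES with the BIC score, suited to linear-Gaussian data;
# the estimated graph is stored under record["G"].
record = ges(data, score_func="local_score_BIC")

# FCM-based: DirectLiNGAM for linear relations with non-Gaussian noise.
model = lingam.DirectLiNGAM()
model.fit(data)
print(model.adjacency_matrix_)
```

Switching `indep_test` or `score_func` is typically all that is needed to move between the data assumptions discussed in the list above.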
Through the collective efforts of various teams and the contributions of the open-source community, causal-learn is always under active development to incorporate the most recent advancements in causal discovery and make them available to both practitioners and researchers. Footnote 1: As mentioned by Granger, Granger causality is not necessarily true causality. In fact, If one assumes 1) that there is no latent confounding process, 2) that the data are sampled at the right causal frequency, and 3) that there are no instantaneous causal influences, then Granger causality defined by Granger (Granger, 1980) can be seen as causal relations that can be discovered from stochastic processes with constraints-based methods such as PC. Of course, if those assumptions are violated, one may still apply Granger causal analysis, but the estimated relations may not be true causal influences. ### (Conditional) independence tests In addition to its comprehensive search methods, causal-learn also provides a variety of (conditional) independence tests as independent modules. Besides being an essential parts of several search methods, these tests can also be independently utilized and seamlessly integrated into existing statistical analysis pipelines. Currently,the library features a diverse array of such tests including Fisher-z test (Fisher et al., 1921), Missing-value Fisher-z test, Chi-Square test, Kernel-based conditional independence (KCI) test and independence test (Zhang et al., 2011), and G-Square test (Tsamardinos et al., 2006), each with distinct capabilities and benefits. The Fisher-z test is ideally suited for linear-Gaussian data, while the Missing-value Fisher-z test addresses the challenges of missing values by implementing a testwise-deletion approach. For categorical variables, the Chi-Square and G-Square tests are most effective. For users interested in a nonparametric test or the case with mixed categorical and continuous data types, the KCI test is an option. Overall, the range of tests offered by causal-learn underscores its versatility in handling diverse data types. ### Score functions Moreover, a diverse range of score functions is available in _causal-learn_. These score functions quantify the goodness of fit of a model to the data, a crucial measure in score-based causal discovery methods, and can also be utilized independently for model selection in a broader range. Among these, the Bayesian Information Criterion (BIC) score (Schwarz, 1978) is used extensively, offering a balance between model complexity and fit to the data. Another important score function is the Bayesian Dirichlet equivalent uniform (BDeu) score (Buntine, 1991). This score function, especially beneficial for discrete data, incorporates a uniform prior over the set of Bayesian networks. Additionally, the Generalized Score (Huang et al., 2018) is also available in causal-learn, which offers the flexibility to accommodate more complex scenarios and is beneficial for nonparametric cases where the true data-generating process does not align with the assumptions of BIC (linear Gaussian) or BDeu (discrete). ### Utilities Causal-learn further offers a suite of utilities designed to streamline the assembly of causal analysis pipelines. The package features a comprehensive range of graph operations encompassing transformations among various graphical objects integral to causal discovery. 
These include Directed Acyclic Graphs (DAGs), Completed Partially Directed Acyclic Graphs (CPDAGs), Partially Directed Acyclic Graphs (PDAGs), and Partially Ancestral Graphs (PAGs). Additionally, to enhance the convenience of experimental processes, _causal-learn_ features a set of commonly used evaluation metrics to appraise the quality of the causal graphs discovered. These metrics include precision and recall for arrow directions or adjacency matrices, along with the Structural Hamming Distance (Acid and de Campos, 2003). ### Demos, documentation, and benchmark datasets The _causal-learn_ package also contains extensive usage examples of all search methods, (conditional) independence tests, score functions, and utilities at [https://github.com/py-why/causal-learn/tree/main/tests](https://github.com/py-why/causal-learn/tree/main/tests). Furthermore, detailed documentation is available at [https://causal-learn.readthedocs.io/en/latest/](https://causal-learn.readthedocs.io/en/latest/). It is worth noting that it also includes a collection of well-tested benchmark datasets-since ground-truth causal relations are often unknown for real data, evaluation of causal discovery methods has been notoriously known to be hard, and we hope the availability of such benchmark datasets can help alleviate this issue and inspire the collection of more real-world datasets with (at least partially) known causal relations. ## 3 Example In this section, let us demonstrate how _causal-learn_ discovers causal relations from observational data in one line of code. First, we could easily install the library via pip: ``` 1pip install causal-learn ``` Then we are ready to take a look into the causal world. Causal discovery in Python is as simple as follows: ``` 1#applyPCwithdefaultparameters 2cg=pc(data) 3 4#visualization 5cg.draw_pydot_graph() ``` The visualization of the returned causal graph is shown in Figure 1. ## 4 Conclusion The _causal-learn_ library serves as a comprehensive toolset for causal discovery, significantly advancing the field of causal analysis and its applications in domains such as machine learning. It provides a robust platform for not only applying causal analysis techniques but also for facilitating the development of novel or enhanced algorithms. This is achieved by providing an infrastructure fully in Python that allows users to efficiently modify, extend, and tailor existing implementations, contribute new ones, and maintain high-quality standards. Given the current demand for causal learning and the rapid progress in this field, coupled with the active development and contribution from our team and the community, the _causal-learn_ library is poised to bring causality into an indispensable component across diverse disciplines. ## Acknowledgments and Disclosure of Funding We are grateful for the collective efforts of all open-source contributors that continue to foster the growth of causal-learn. Especially, we would like to thank Yuequn Liu, Zhiyi Huang, Feng Xie, Haoyue Dai, Erdun Gao, Aoqi Zuo, Takashi Nicholas Maeda, Takashi Ikeuchi, Madelyn Glymour, Ruibo Tu, Wai-Yin Lam, Ignavier Ng, Bryan Andrews, Yewen Fan, and Xiangchen Song. The work of MG is supported in part by ARC DE210101624. The work of RC is supported in part by National Key R&D Program of China (2021ZD0111501). 
This project is partially supported by the National Institutes of Health (NIH) under Contract R01HL159805, by the NSF-Convergence Accelerator Track-D award #2134901, by a grant Figure 1: Visualization of the causal graph returned by _causal-learn_ with PC algorithm. from Apple Inc., a grant from KDDI Research Inc, and generous gifts from Salesforce Inc., Microsoft Research, and Amazon Research.
Causal discovery aims to uncover causal relationships from observational data, a fundamental task in science and engineering. Here we describe $\textit{causal-learn}$, an open-source Python library for causal discovery. The library aims to provide comprehensive causal discovery methods to both practitioners and researchers: it is easy to use for non-specialists, offers modular building blocks for developers, provides detailed documentation for learners, and covers comprehensive methods across all primary categories. Unlike previous packages in R or Java, $\textit{causal-learn}$ is developed entirely in Python, in line with the recent preference for that programming language in the relevant communities. The library is available at https://github.com/py-why/causal-learn.
2309.08397
Topological Exploration using Segmented Map with Keyframe Contribution in Subterranean Environments
Existing exploration algorithms mainly generate frontiers using random sampling or motion primitive methods within a specific sensor range or search space. However, frontiers generated within constrained spaces lead to back-and-forth maneuvers in large-scale environments, thereby diminishing exploration efficiency. To address this issue, we propose a method that utilizes a 3D dense map to generate Segmented Exploration Regions (SERs) and generate frontiers from a global-scale perspective. In particular, this paper presents a novel topological map generation approach that fully utilizes Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration efficiency inside large-scale subterranean environments. Our topological map contains the contributions of keyframes that generate each SER, enabling rapid exploration through a switch between local path planning and global path planning to each frontier. The proposed method achieved higher explored volume generation than the state-of-the-art algorithm in a large-scale simulation environment and demonstrated a 62% improvement in explored volume increment performance. For validation, we conducted field tests using UAVs in real subterranean environments, demonstrating the efficiency and speed of our method.
Boseong Kim, Hyunki Seong, D. Hyunchul Shim
2023-09-15T13:47:18
http://arxiv.org/abs/2309.08397v1
# Topological Exploration using Segmented Map with ###### Abstract Existing exploration algorithms mainly generate frontiers using random sampling or motion primitive methods within a specific sensor range or search space. However, frontiers generated within constrained spaces lead to back-and-forth maneuvers in large-scale environments, thereby diminishing exploration efficiency. To address this issue, we propose a method that utilizes a 3D dense map to generate Segmented Exploration Regions (SERs) and generate frontiers from a global-scale perspective. In particular, this paper presents a novel topological map generation approach that fully utilizes Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration efficiency inside large-scale subterranean environments. Our topological map contains the contributions of keyframes that generate each SER, enabling rapid exploration through a switch between local path planning and global path planning to each frontier. The proposed method achieved higher explored volume generation than the state-of-the-art algorithm in a large-scale simulation environment and demonstrated a 62% improvement in explored volume increment performance. For validation, we conducted field tests using UAVs in real subterranean environments, demonstrating the efficiency and speed of our method. ## I Introduction Utilizing unmanned vehicles in subterranean environments for exploration is of paramount importance in the field of robotics, with the objective of reducing human casualties and minimizing property damage. Studies on exploration have significantly evolved over the past five years, greatly influenced by the DARPA Subterranean Challenge [1, 2, 3]. These studies have primarily proposed algorithms based on LiDAR sensors that can operate effectively in visually degraded, large-scale environments with dust or darkness. Among various platforms, UAVs have garnered significant attention due to their high mobility, enabling them to achieve greater exploration efficiency. UAVs possess the advantage of being able to operate independent of terrain, making them suitable for flexible mission execution in environments such as stairs, cliffs, or various irregular terrains [4, 5, 6], which makes them a preferred platform for exploration. However, despite these advantages, UAVs still face several challenges such as limitations in weight, size, and flight duration. For instance, solving three-dimensional perspective problems within onboard computers with limited computational capabilities and developing efficient exploration algorithms to cover as much area as possible within the given time constraints are demanded. To address these issues, we propose a novel topological exploration method, as shown in Fig. 1, that segments the regions from a global-scale perspective and fully leverages the Line-of-Sight (LOS) feature of LiDAR sensors. In more detail, the proposed method utilizes segmented regions and the LiDAR keyframes that have contributed to the generation of these regions to determine the execution of local path planning and global path planning. This reduces unnecessary back-and-forth maneuvers of the UAV during exploration, thereby increasing exploration efficiency. We compared our proposed method with the state-of-the-art algorithm GB planner2 [3] in a large-scale simulation environment. Furthermore, we demonstrated the high exploration efficiency and speed of the proposed method through field tests using UAVs in real subterranean environments. 
The primary contributions of this paper are as follows: 1. Rapid exploration within large-scale 3D environments through frontier that generated from a global-scale perspective using Segmented Exploration Regions (SERs). 2. Efficient path planning using a novel topological map that takes into account the relationship between SERs and the most contributed LiDAR keyframes. 3. Demonstration of the practicality and superiority of the proposed method through large-scale simulation environments and field tests in real subterranean environments. Fig. 1: An instance of the proposed segmented map-based topological exploration algorithm using a UAV in a subterranean environment. ## II Related Works Autonomous robotic exploration has been approached in various ways. A common method involves the use of frontiers [7, 8, 9, 10], which detect boundaries between known and unknown areas. [7] initiated the frontier-based scheme, directing the robot to the nearest frontier in a greedy manner. [8] refined the greedy method, implementing high-speed traversal with minimal velocity changes. In [9], an information structure was presented that includes frontier cells and viewpoints for hierarchical exploration planning. Similarly, [10] introduced a hierarchical framework based on viewpoints and cuboid subspace centroids. Sampling-based methods explore uncharted spaces by randomly generating viewpoints [11, 12, 13], RRT nodes [14, 15, 16], or motion primitives [17]. They are predominantly inspired by the RRT-based algorithms [18], emphasizing efficient exploration of complex environments. [11] is the early work that employs "next best views" (NBVs) to maximize coverage of unknown volumetric spaces. [12] enhances the NBV planner to address the curse of dimensionality by reducing sampling space and incorporating a history of explored areas. In contrast, [17] generates a sequence of motion primitives via randomized control space sampling. Recent research has increasingly focused on large-scale subterranean environments. Notably, the field of underground exploration has seen a surge in interest due to the DARPA Subterranean Challenge [19]. To explore large-scale, multi-branched spaces, topology [20, 21] and graph-based approaches [22, 3] have been proposed for representing exploration regions. In a recent effort, [21] employs convex polyhedra to separate 3D regions, aiding the selection of local exploration directions. This approach leverages the separated regions to generate frontier points and select local exploration directions. The graph-based planners [22], on the other hand, utilize a rapidly-exploring random graph to identify collision-free local paths that maximize the exploration gain. These methodologies are further enhanced by incorporating global path planning, which assists homing operations [1, 22] and re-positioning [21, 3] of exploring robots. While the results are promising, these random sampling-based methods generate a redundant graph representation in a local 3D space and require intensive computation, affecting the efficiency of exploration planning. In this study, we introduce a novel frontier generation scheme designed from a global-scale perspective to minimize redundancies, such as back-and-forth maneuvers, thereby enhancing the exploration performance. We also present an exploration strategy that employs the keyframe contribution to facilitate efficient and rapid exploration in large-scale environments. 
## III Problem Statement When launching robots for exploration in unknown areas, LiDAR-based localization (LiDAR Inertial Odometry or SLAM) is imperative. The entirety of map points \(V\subset\mathbb{R}^{3}\), generated from LiDAR-based localization, can be partitioned into the explored region \(V_{\text{cover}}\) and the unexplored region \(V_{\text{uncover}}\), utilizing the keyframes \(K\subset\mathbb{R}^{3}\) representing the map-generating positions and the coverage \(\zeta_{coverage}\) (\(V=V_{\text{cover}}\cup V_{\text{uncover}}\)). When the map points generated from the \(i\)-th keyframe \(K_{i}\in\mathbb{R}^{3}\) are \(Z_{i}\subset\mathbb{R}^{3}\), then \(V\) can be expressed as \(V=\{Z_{i}\}_{i\in\{0,1,2,\cdots,k\}}\), and \(V_{\text{cover}}\) consists of all \(\{x,y,z\}\) points in \(V\) whose Euclidean distance from some element of \(K\) is within \(\zeta_{coverage}\). The primary objective of the proposed exploration algorithm is to reduce \(V_{\text{uncover}}\), ultimately satisfying \(V=V_{\text{cover}}\). ## IV Proposed Approach ### _Segmented Exploration Regions (SERs)_ For efficient exploration in large-scale underground environments, we divide \(V_{\text{uncover}}\) into multiple Segmented Exploration Regions (SERs) \(V_{\text{SERs}}\subset V_{\text{uncover}}\), using Euclidean distance clustering techniques as shown in Fig. 2. Unlike existing methods that generate frontiers within a specific sensor range, the proposed exploration method divides the three-dimensional space into segments based on the geometric characteristics of \(V_{\text{uncover}}\), generating frontiers at a global scale. \(V_{\text{SERs}}=V_{\text{SER}}^{0}\cup V_{\text{SER}}^{1}\cup V_{\text{SER}}^{2}\cdots V_{\text{SER}}^{k}\) are utilized for the frontier generation, and the centroid of each \(V_{\text{SER}}^{j}\subset\mathbb{R}^{3}\) is considered as the frontier \(g^{j}\in\mathbb{R}^{3}\) that should be reached to cover \(V_{\text{SER}}^{j}\) for exploration. As exploration progresses and \(K\), \(V\), \(V_{\text{cover}}\), and \(V_{\text{uncover}}\) are updated, \(V_{\text{SERs}}\) is also updated in real-time. As described in Algorithm 1, the generation of SERs is carried out when the map is updated (i.e., when a new keyframe is generated) in LiDAR-based localization. The updated map is down-sampled for computational efficiency and is divided into \(V_{\text{cover}}\) and \(V_{\text{uncover}}\) by comparing it with the keyframes generated so far. For \(V_{\text{uncover}}\), we apply Euclidean distance clustering techniques to generate \(V_{\text{SERs}}\). Finally, for each \(V_{\text{SER}}^{j}\), the corresponding frontier point \(g^{j}\) is generated, which is considered as a candidate for the robot to reach in order to reduce \(V_{\text{uncover}}\). The reason keyframe generation serves as the criterion for SERs generation lies in the need to systematically manage all generated map points during exploration, pairing them with corresponding \(K_{i}\) and \(Z_{i}\). Fig. 2: An overview of Segmented Exploration Regions (SERs) generation. The black lines represent 3D map points \(V\) generated from LiDAR keyframes \(K\), and the blue squares denote explored regions \(V_{\text{cover}}\) included in coverage \(\zeta_{coverage}\). SERs are generated by applying Euclidean distance clustering to unexplored regions \(V_{\text{uncover}}\), with squares of the same color indicating the same SER. The frontier corresponding to each SER is represented by a star shape of the same color.
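The SER construction in Algorithm 1 can be prototyped compactly. The sketch below is not the authors' implementation (which operates on the live keyframe stream); it is a minimal stand-in that assumes DBSCAN as the Euclidean distance clustering step, omits down-sampling, and uses illustrative parameter values.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def generate_sers(map_points, keyframes, coverage=15.0, cluster_eps=5.0, min_points=10):
    """Split V into V_cover / V_uncover and cluster V_uncover into SERs with frontier centroids."""
    # V_cover: points within `coverage` of the nearest keyframe; the rest is V_uncover.
    dist_to_nearest_kf, _ = cKDTree(keyframes).query(map_points)
    v_uncover = map_points[dist_to_nearest_kf > coverage]

    # Euclidean-distance clustering of V_uncover (DBSCAN used here as a stand-in).
    labels = DBSCAN(eps=cluster_eps, min_samples=min_points).fit_predict(v_uncover)

    sers, frontiers = [], []
    for label in sorted(set(labels) - {-1}):      # -1 marks unclustered noise points
        ser = v_uncover[labels == label]
        sers.append(ser)
        frontiers.append(ser.mean(axis=0))        # centroid g^j of V_SER^j
    return sers, np.asarray(frontiers)
```

In the full system these clusters are recomputed whenever a new keyframe arrives, mirroring Algorithm 1.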
Given the inherent nature of LiDAR sensors, \(Z_{i}\) maintain a Line-of-Sight (LOS) trajectory from \(K_{i}\), a pivotal element for the generation of frontier edge, which we aim to explain in Section IV-C. ### _Real-time Graph Generation with LiDAR Keyframes_ When a new keyframe is generated, we generate edges that account for connectivity between each pair of nodes (keyframes) using \(K\) and \(V\). The graph \(G\), composed of nodes and edges, serves as the foundation for the robot's global path planning within the large-scale environment and is updated in real-time during exploration. The edge between the \(i\)-th node \(K_{i}\) and the \(j\)-th node \(K_{j}\) is determined by performing collision checks with the sub-map \(V_{s}\subset V\). \(V_{s}\) is represented by a set of \(Z\) extracted from \(k\) keyframes \(K^{k}=\{K_{k_{0}},K_{k_{1}},\cdots,K_{k_{k-1}}\}\) using the K-nearest neighbors (KNN) algorithm. Similar to the \(V_{\text{SERs}}\) generation, the generation of the graph \(G\) is performed whenever a new keyframe is generated. The proposed graph generation method not only considers connectivity between \(K_{t-1}\) and \(K_{t}\) but also involves examining the connectivity with the \(k\) nearest nodes. This approach is essential for efficient global path planning in complex environments with multiple branches, characteristic of large-scale scenarios. When a new keyframe \(K_{t}\) is generated, it is added to the keyframe array \(K\), and its corresponding \(Z_{t}\) is added to the map point array \(V\). Subsequently, using the KNN algorithm, the indices of \(k\) nearest keyframes are extracted, and a sub-map \(V_{s}\) for collision checking is swiftly generated by leveraging the features of paired \(K\) and \(V\). Collision checking between two nodes and sub-map points is conducted along line segments and sub-map points. If the vectors from each node to sub-map points form an acute angle with the line segment connecting the nodes, collision checking is performed using the Euclidean distance between the point and the line segment. Otherwise, collision checking is carried out using the Euclidean distance between the point and the node forming an obtuse angle. ### _Frontier Graph Generation with Keyframe Contribution_ In this section, we present a method for generating a graph between frontiers \(g=\{g^{j,j\in{0,1,\cdots,l}}\}\) and specific node \(K_{i}\) based on the contribution of keyframes. In Section IV-A, we explained how \(V_{\text{SERs}}\) are generated for frontiers. These frontiers are generated from \(V_{\text{SERs}}\) by applying Euclidean distance clustering to \(V_{\text{uncover}}\), making them a result of a global-scale perspective. If we know which keyframes contributed to the points composing each \(V_{\text{SER}}^{j}\), we can extract the keyframe that had the most significant contribution to a particular frontier. As previously mentioned, our exploration method is based on the LOS characteristic of LiDAR sensor points. \(V_{\text{SER}}^{j}\) are constituted by points generated from multiple keyframes \(K_{\text{SER}}^{j}\subset K\) but are generated based on geometric features of 3D dense map. Therefore, if a specific keyframe most contributed to \(V_{\text{SER}}^{j}\), we consider LOS to that frontier \(g^{j}\), even if not all points in that \(V_{\text{SER}}^{j}\) were generated by the same keyframe. 
Because we manage \(K\) and \(V\) as pairs, we introduce a novel map representation named the keyframe-centric map \(V_{\text{key}}=\{V_{\text{key}}^{i}\}\). \(V_{\text{key}}\subset\mathbb{R}^{6}\) includes each \(Z_{i}\) that constitutes \(V\) along with its corresponding \(K_{i}\). The \(j\)-th point within the map points generated from the \(i\)-th keyframe can be denoted as \(Z_{i,j}\in\mathbb{R}^{3}\), and from the perspective of \(V_{\text{key}}\), the corresponding point can be represented as \(V_{\text{key}}^{i,j}=\{Z_{i,j},K_{i}\}\). The generation of frontier edges based on keyframe contributions is detailed in Algorithm 2. The generation of the frontier graph \(G_{\text{F}}\) begins by updating, in parallel, the keyframe-centric map \(V_{\text{key}}\) with the newly generated \(Z_{t}\) and \(K_{t}\). Following this, Algorithm 1 in Section IV-A is applied to generate \(g\) and \(V_{\text{SERs}}\). For each \(V_{\text{SER}}^{j}\) comprising \(V_{\text{SERs}}\), the keyframe that contributes the most to the generation of that \(V_{\text{SER}}^{j}\) is extracted. As shown in Fig. 3, for all \(\{x,y,z\}\) points forming \(V_{\text{SER}}^{j}\), we concurrently search for and extract the corresponding \(V_{\text{key}}^{i,j}\) from the parallelly generated \(V_{\text{key}}\), along with the keyframe information, yielding all keyframes \(K_{\text{SER}}^{j}\) that generated \(V_{\text{SER}}^{j}\). Finally, the keyframe \(K_{\text{Highest}}\) with the highest contribution within \(K_{\text{SER}}^{j}\) is extracted to construct the graph between the frontier \(g^{j}\) generated from \(V_{\text{SER}}^{j}\) and \(K_{\text{Highest}}\). It is important to note that \(G\), described in Section IV-B, is utilized for the robot's global path planning, necessitating collision checks. However, \(G_{\text{F}}\) is employed just to specify frontiers under the assumption of LOS from specific nodes, and thus collision checks are not performed. ### _Global Path Planning within Topological Map_ In Sections IV-B and IV-C, we described the generation process of \(G\) and \(G_{\text{F}}\). In this section, we describe efficient global path planning for exploration in large-scale 3D environments characterized by multiple branches, utilizing the topological map composed of \(G\) and \(G_{\text{F}}\). When a new keyframe \(K_{t}\) is generated, \(V_{\text{SERs}}\), \(g\), \(G\), and \(G_{\text{F}}\) are updated. First, for each frontier \(g^{j}\) generated from each \(V_{\text{SER}}^{j}\), we use the proposed exploration score to identify the frontier \(g^{j_{\text{best}}}\) with the highest score. We then prioritize the exploration of \(g^{j_{\text{best}}}\). The proposed exploration score can be expressed as \[\textbf{ExplorationScore}(g^{j})=\frac{w_{\text{Vol}}\,\textbf{Volume}(V_{\text{SER}}^{j})}{w_{\text{Dir}}\,\textbf{Direction}(\psi_{t},\psi_{g^{j}})\cdot w_{\text{Dis}}\,\textbf{Distance}(K_{t},g^{j})}. \tag{1}\] As shown in (1), the proposed exploration score is composed of distance, volume, and direction factors. \(\textbf{Distance}(K_{t},g^{j})\) represents the distance between the current node \(K_{t}\) and frontier \(g^{j}\), with a higher score computed for closer frontiers, emphasizing proximity. Secondly, \(\textbf{Volume}(V_{\text{SER}}^{j})\) represents the number of points constituting \(V_{\text{SER}}^{j}\), and a higher score is computed for \(g^{j}\) generated with a greater number of points in \(V_{\text{SER}}^{j}\).
This design aims to efficiently map as large a volume as possible within a limited time. Lastly, \(\textbf{Direction}(\psi_{t},\psi_{g^{j}})\) represents the difference between the current robot's yaw and the direction towards \(g^{j}\). A higher score is computed for frontiers with minimal direction difference, aiming to reduce back-and-forth maneuvers during exploration and enhance exploration efficiency. Note that \(w_{\text{Dis}}\), \(w_{\text{Vol}}\), and \(w_{\text{Dir}}\) represent the weights for the distance, volume, and direction factors, respectively. Fig. 3: Comparison between the SERs map \(V_{\text{SERs}}\) and the keyframe-centric map \(V_{\text{key}}\). Each SER visible in the SERs view is generated based on the geometric features of \(V_{\text{uncover}}\). The points visible in the keyframe-centric view represent the keyframes (colored) that generated those points. The frontier graph \(G_{\text{F}}\) is generated using the keyframe \(K_{\text{Highest}}\) that made the most significant contribution to the generation of each SER and its corresponding frontier. Fig. 4: Global path planning results within the proposed topological map. If the SER that generated the selected frontier does not include any points of \(Z_{t}\) generated by the current keyframe \(K_{t}\), global path planning is performed using \(G\) and \(G_{\text{F}}\). Using the proposed exploration score, once the frontier \(g^{j_{\text{best}}}\) with the highest score is selected, we proceed to decide whether to perform global path planning or local path planning towards \(g^{j_{\text{best}}}\). Fortunately, the proposed exploration algorithm leverages the keyframe-centric map \(V_{\text{key}}\) to determine which keyframes contributed to generating the selected \(V_{\text{SER}}^{j_{\text{best}}}\), allowing us to ascertain whether \(V_{\text{SER}}^{j_{\text{best}}}\) contains any point from the \(Z_{t}\) generated from the \(K_{t}\). If it does, considering the LOS from \(K_{t}\) to \(V_{\text{SER}}^{j_{\text{best}}}\), exploration towards \(g^{j_{\text{best}}}\) is conducted using local path planning. On the other hand, if \(V_{\text{SER}}^{j_{\text{best}}}\) does not contain any point from \(Z_{t}\), Non-Line-of-Sight (NLOS) is considered, leading to global path planning towards \(g^{j_{\text{best}}}\) within the topological map composed of \(G\) and \(G_{\text{F}}\). Global path planning using the topological map is detailed in Algorithm 3, and the result is shown in Fig. 4. ## V Experiments ### _Simulation Based Evaluation_ In this section, the performance of our method was compared with the state-of-the-art GB planner2 [3] algorithm in the 3D cave simulation environment provided by the DARPA Subterranean virtual competition, using a UAV. To evaluate the performance, two factors, explored volume increment per second \((m^{3}/s)\) and explored volume over time \((m^{3})\), were compared, and the results are shown in Fig. 5. In the simulation environment, both algorithms used a LiDAR sensor with a Horizontal Field of View (HFOV) of 360\({}^{\circ}\), a Vertical Field of View (VFOV) of 45\({}^{\circ}\), and a maximum sensor range of 80\(m\) with 32 channels. The proposed method had a coverage \(\zeta_{coverage}\) of 15\(m\), a down-sampling parameter \(v_{\text{down}}\) of 5\(m\), and \(k\) set to 10. A motion primitive-based planning algorithm was used for our local path planning, and both exploration algorithms had a maximum flight speed of \(2m/s\). As shown in Fig.
5, the proposed method exhibits fewer back-and-forth maneuvers based on our novel planning strategy (detailed in Section. IV-D) and outperformed the GB planner2 in both factors. Also, the proposed method exhibits smoother exploration paths compared to GB planner2, achieved through frontier generation from a global-scale perspective and topological exploration based on keyframe contributions. The proposed method generates a larger volume within the same time compared to GB planner2, demonstrating a 62% improvement in median value of the volume increment. ### _Experimental Evaluation_ For a more comprehensive analysis of the proposed method, we conducted field tests using UAVs in a real subterranean environment. The performance of the proposed method varies depending on the specification of LiDAR sensors, as it relies on the geometric features of the 3D dense map generated during the exploration. Therefore, we utilized two aerial platforms equipped with different LiDAR sensors to perform separate explorations within the same environment and subsequently analyzed their performance. #### V-B1 Hardware and Software Setup Two aerial platforms, as shown in Fig. 6, the USRG Quad and the USRG X8, were used. The USRG Quad is an 8-inch quadcopter platform equipped with an Ouster OS1 32-channel LiDAR with an Fig. 5: Comparison of exploration performance between the proposed method and GB planner2 using UAV. (a) Comparison of exploration trajectory between the proposed method (green) and GB planner2 (red). The inset figure on the right illustrate the results of global path planning triggered within the topological map through our exploration planning strategy, shown as the green path. (b) Quantitative comparison of explored volume over time and the explored volume increment. Fig. 6: The two custom aerial platforms used for field testing, the USRG Quad and the USRG X8. HFOV of 360\({}^{\circ}\), VFOV of 45\({}^{\circ}\), and an effective range of 90\(m\). The USRG X8 is a 5-inch coaxial octocopter platform equipped with an Ouster OS Dome 128-channel LiDAR with an HFOV of 360\({}^{\circ}\), VFOV of 180\({}^{\circ}\), and an effective range of 20\(m\). Both platforms used an Intel NUC computer with a 6-core i7-10710U CPU for real-time onboard computation, and motion primitive-based planning algorithms were employed for local path planning. Localization was achieved using our previous work [23], LiDAR Inertial Odometry, which provides keyframes, corresponding LiDAR scans, and UAV positions. During the field tests, the maximum flight speed was set to 1.5\(m/s\) for the USRG Quad, and 1.2\(m/s\) for the USRG X8, and both platforms were equipped with LED for providing visual information in dark environments. #### V-C2 Segmented map-based Topological Exploration The proposed method for the field test was set up with both UAVs having a coverage \(\zeta_{coverage}\) of 7\(m\), a down-sampling parameter of 2\(m\), and \(k\) set to 10. Both UAVs started exploration from the entrance of an underground bunker and performed exploration until their batteries were exhausted. The exploration results for USRG Quad and USRG X8 are shown in Fig. 7, while the exploration performance is shown in Fig. 8. USRG Quad covered approximately 354\(m\) by flying for 320\(s\) at an average speed of 1.1\(m/s\), while USRG X8 covered approximately 168\(m\) by flying for 210\(s\) at an average speed of 0.8\(m/s\). The inset figures labeled as 'Explored area' in Fig. 
7 represent the maps (white map) generated by the two UAVs inside the underground bunker (red map). USRG Quad covered approximately 79% of the entire map, while USRG X8 covered approximately 43%. Additionally, the inset figures marked with a star shape show the flight view of the two UAVs from corresponding positions. The quantitative exploration performance based on the maximum range of the LiDAR sensor is shown in Fig. 8. As shown in Fig. 8, on a median value basis, USRG Quad had a volume increment of 118\(m^{3}/s\), while the USRG X8 had a 51\(m^{3}/s\). Through the field test using two UAVs, the proposed method has demonstrated the ability to achieve fast and efficient exploration in a real subterranean environment by leveraging our novel keyframe contribution-based topological map and exploration planning strategy. ## VI Conclusions In this paper, we proposed a topological exploration algorithm using keyframe contribution. Unlike existing methods that generate frontiers within a specific sensor range or search space, our method generates frontiers from a global-scale perspective, enhancing exploration efficiency in large-scale environments. Using the keyframe-centric map and SERs map, we generate a frontier graph by considering which keyframe contributes the most when frontiers are generated. Finally, by using data structures pairing keyframes with their corresponding scans, we design planning strategies for exploring specific frontiers, enabling rapid exploration. The proposed method surpasses the state-of-the-art GB planner2 algorithm in a large-scale cave simulation environment, showcasing a 62% improvement in the median value of volume increment. Moreover, it has demonstrated rapid and efficient exploration performance in real subterranean environments through field tests. In the future, our goal is to extend the proposed method to multi-robot or heterogeneous robot applications in various unstructured environments. Fig. 8: Comparing the quantitative exploration performance of two UAVs equipped with different LiDAR sensors. The USRG Quad equipped with a LiDAR sensor with a longer maximum range, demonstrates higher exploration efficiency compared to the USRG X8. Fig. 7: Exploration results using USRG Quad and USRG X8 in the subterranean environment located in South Korea. The white map, orange lines, light blue lines, and colored points represent the 3D dense map, graph edges, frontier graph edges, and frontiers generated during the exploration, respectively.
Existing exploration algorithms mainly generate frontiers using random sampling or motion primitive methods within a specific sensor range or search space. However, frontiers generated within constrained spaces lead to back-and-forth maneuvers in large-scale environments, reducing exploration efficiency. To address this issue, we propose a method that uses a 3D dense map to generate Segmented Exploration Regions (SERs) and generates frontiers from a global-scale perspective. In particular, this paper presents a novel topological map generation approach that fully exploits the Line-of-Sight (LOS) features of LiDAR sensor points to enhance exploration efficiency inside large-scale subterranean environments. The topological map contains the contributions of the keyframes that generate each SER, enabling rapid exploration by switching between local and global path planning to each frontier. The proposed method achieved higher explored volume generation than the state-of-the-art algorithm in a large-scale simulation environment, demonstrated a 62% improvement in explored volume increment performance, and was validated through field tests using UAVs in real subterranean environments.
2309.14966
Interactively Learning Social Media Representations Improves News Source Factuality Detection
The rise of social media has enabled the widespread propagation of fake news, text that is published with an intent to spread misinformation and sway beliefs. Rapidly detecting fake news, especially as new events arise, is important to prevent misinformation. While prior works have tackled this problem using supervised learning systems, automatedly modeling the complexities of the social media landscape that enables the spread of fake news is challenging. On the contrary, having humans fact check all news is not scalable. Thus, in this paper, we propose to approach this problem interactively, where humans can interact to help an automated system learn a better social media representation quality. On real world events, our experiments show performance improvements in detecting factuality of news sources, even after few human interactions.
Nikhil Mehta, Dan Goldwasser
2023-09-26T14:36:19
http://arxiv.org/abs/2309.14966v1
# Interactively Learning Social Media Representations Improves News Source Factuality Detection ###### Abstract The rise of social media has enabled the widespread propagation of fake news, text that is published with an intent to spread misinformation and sway beliefs. Rapidly detecting fake news, especially as new events arise, is important to prevent misinformation. While prior works have tackled this problem using supervised learning systems, automatedly modeling the complexities of the social media landscape that enables the spread of fake news is challenging. On the contrary, having humans fact check all news is not scalable. Thus, in this paper, we propose to approach this problem _interactively_, where humans can interact to help an automated system learn a better social media representation quality. On real world events, our experiments show performance improvements in detecting factuality of news sources, even after few human interactions. ## 1 Introduction Over the last decade, we have witnessed a rise in the proliferation of "fake news" Lazer et al. (2018), news content which lacks the journalistic standards ensuring its quality while maintaining its appearance. Social media is flooded with inaccurate and incomplete information Vosoughi et al. (2018), and combating this has attracted significant research interest Nguyen et al. (2020). However, this is still a hard task, particularly on unseen topics. In this paper, rather than annotating data to learn these topics, we propose to use quick **human interactions** to characterize social media, allowing us to learn a better representation, and detect factuality better. Instead of fact checking individual articles, some works Baly et al. (2020) focus on fact-checking sources. While still requiring automated systems due to the number of sources online, source factuality detection can be more scalable, as sources often publish content of similar factuality. Following this, we focus on capturing the factuality levels of sources: _high, mixed, low_. One concept underlying methods that aim to exploit social information for identifying the factuality of news sources is the _social homophily principle_McPherson et al. (2001), which captures the tendency of members of the same social group to hold similar views and content preferences. This often leads to the formation of _"echo chambers"_Jamieson and Cappella (2008); Quattrociocchi et al. (2016), tightly-knit _"information communities"_ that have little interaction with other communities holding different views. Prior work shows how similar news, particularly misinformation, tends to spread more in some of these tightly-knit information communities Bessi et al. (2016). Thus, identifying them can provide the needed information for capturing the factuality of sources (communities spreading mostly low factuality content in the past are likely to spread low factuality content in the future). In this work, we first capture social information in an information graph, modeling it via a R-GCN Schlichtkrull et al. (2018). Many approaches to detect news factuality are often studied in unrealistic settings, as their success hinges on test data being similar to or related to training data. However, a more realistic setup would examine whether a system would be able to generalize to emerging news events: These events introduce different narratives, users, and news sources, that are unseen and do not interact with training content; i.e. 
test users don't follow train users and test graph nodes aren't connected to train nodes. In this paper, to simulate these settings, we collected new data, consisting of the articles published around specific news events Black Lives Matter and Climate Change - see Sec 5.1), their sources, and social context. We applied a recent strong baseline system Mehta et al. (2022), trained over data sampled from past events, and it resulted in significant degradation in performance on the new events (\(\sim\)22% Acc, 19% Macro-F1). Our main observation in this paper is that even in these challenging settings, the social homophily principle can be exploited to better detect source factuality, **if the system can identify relevant information communities over users engaging with the new content**. This is since users that are part of an information community that propagates fake news, are likely to do so as well. As we later show, automatically detecting the factuality of news sources is difficult, particularly on emerging news events. _Instead_, we suggest an **interactive learning protocol**, in which human judgements dynamically help the model identify these communities. As humans analyzing all emerging news content is clearly infeasible, we propose a novel sampling method for interactions, based on resolving inconsistencies in the model's graph-based social representation. Specifically, we identify pairs of users that are clustered in the same community, but have conflicting factuality predictions, as this indicates inconsistency. We create small sub-graphs corresponding to the social and content preferences of these users and other members of the community, and ask the humans to resolve the conflict: Based on their profile descriptions, social relations and articles endorsed, _is it likely (given the principle of social homophily) that these two users belong to the same community?_ The human judgements provide rich feedback for this question, by adding edges to the graph, which connect users, articles, and sources. These edges result in cleaner information communities, which alleviate the difficulty of the source classification task. Fig. 1 describes this. In summary, we make the following contributions. (1) We are the first to formulate the task of **interactive news source factuality detection** by characterizing social context, and implement an interaction tool for supporting this. (2) We suggest a novel sampling approach for reducing the number of human judgements needed by focusing on social inconsistencies. (3) We focus on one of the most challenging settings of news source factuality detection in emerging news events, collect data, and perform experiments showing how minimal, quick interactions can lead to performance improvements on unseen data. More generally, we propose an interactive framework to learn stronger information communities, and apply it to improve news source factuality detection. In the future, it can also be applied to other social media analysis tasks. Sec. 3 describes our graph model, Sec. 4 our novel protocol to incorporate interactions, Sec. 5 shows results, and Sec. 6 analyzes them. ## 2 Related Work Detecting fake news on social media is a popular research topic, studied in supervised learning [16, 17, 18, 19, 20, 21], graphs [14, 15, 16], zero-shot [20], dialogue [21], cross-domain [13, 22, 23, 24], and low-resource [14] settings. 
One of the most challenging yet most critical social context fake news detection settings is the early detection of it, where test data has new users, articles, and sources, that do not interact with training data. Recently, researchers have been working on this task, especially at the article/tweet level. Liu and Wu classify news propagation paths, Yuan et al. model user credibility, while Konkobo et al. built a semi-supervised classifier. In our work, we focus on this challenging early detection setting, specifically to identify the factuality of news sources. We Figure 1: Our framework overview: **Adapting News Source Factuality Detection to Emerging News Events by Interactively Characterizing the Social Media Context of News Articles and Their Sources.** (Key: U = Users, A = Articles, S = Sources, Green/H = High Factuality, Red/L = Low Factuality). From the learned graph model (b), we find pairs of inconsistent users by clustering all user embeddings and looking for conflicting factuality labels (c) (L = low; H = high factuality). Here, the High Factuality user doesn’t match the mostly Low Factuality cluster. We then build sub-graphs from these pairs of mismatched users and their community to show human interactors (d), who create new edges based on content similarity. This is far simpler and quicker than identifying factuality, as humans only need to identify which nodes are similar in content. Based on the interactions, we create edges in the broad event graph (a) to do better news source factuality detection (either directly or with more training). show how our _interactive setup_ can be useful, even in these settings. If combined with other early detection methods, our framework may lead to further gains, and we leave this for future work. Using human interactions to improve models has also been popular recently Brantley et al. (2021), in scenarios such as active learning Blok et al. (2021), or humans providing general system feedback Tandon et al. (2022). Other works exploit human feedback for concept discovery Pacheco et al. (2022, 2023) by communicating human-level symbolic knowledge Pacheco and Goldwasser (2021). In contrast, our interactions enable stronger general models, and generalization to new unseen scenarios. Social homophily has been used to better many NLP tasks, like sentiment analysis, entity linking, and fake news West et al. (2014); Yang et al. (2016); Mehta et al. (2022). Particularly, prior work shows how misinformation (and similar news) spreads more in tightly-knit communities, motivating our idea that if we use humans to increase homophily and build better information communities, we can detect factuality better Bessi et al. (2016); Halberstam and Knight (2016); Cinelli et al. (2021). ## 3 Graph Model Similar to Mehta et al., we view fake news source detection as reasoning over relationships between sources, articles, and users in an information graph. We use their graph model1, briefly explaining it in this sec. Sec. 4 explains our interactive protocol. Footnote 1: [https://github.com/hockeybro12/FakeNews_Inference_Operators](https://github.com/hockeybro12/FakeNews_Inference_Operators) The model uses a heterogeneous graph to capture the interaction between social information and news content, and a Relational Graph Convolutional Network (R-GCN) to encode it. The R-GCN allows us to create contextualized node representations for factuality prediction. 
For example, one way sources are represented is by the articles they publish (which in turn are also represented by their relationships to other nodes). **Graph Creation**: The graph (see: Fig.1a) consists of 3 types of nodes, each with feature vectors (details: App. A.3.1): (1) \(S\), the news _sources_, are our classification targets. (2) \(A\), the _articles_ published by these sources, (3) \(U\), the Twitter _users_. Sources are first connected to articles they publish. Social context is added via Twitter users that interact/connect to sources/ articles/other users. These users provide the means for fake (and real) news spread on social media: **(1) Following Sources/Users:** Users are connected to sources and users they follow. **(2) Propagating Articles:** Articles are connected to users that tweet its title/link. **Graph Embedding:** As in Mehta et al., we train a R-GCN Schlichtkrull et al. (2018) to learn graph embeddings, which will be later used to determine where human interaction may be beneficial. We optimize the Classification objective of News Source Factuality Detection (categorical cross-entropy). To predict labels, we pass the source node embeddings from R-GCN through the Softmax activation. ## 4 Interactive Protocol We hypothesize that understanding content and the context it is provided in is critical to detecting fake news. Specifically, identifying information communities of users, sources, and articles based on their content preferences can be helpful, as a community that mostly shares fake news in the past, is likely to share fake news in the future. Further, users that join this community are likely to share beliefs of the community, and thus also share fake news. Unfortunately, understanding content on social media and using it to identify information communities is challenging for AI agents. It becomes more difficult as new events with new relationships arise, as the agent does not have enough data to determine what is fake news. This makes the early detection of fake news difficult (see Sec. 5.3). On the other hand, educated humans can more easily understand relationships on social media, even in new events, as they can better analyze social interactions. Thus, humans can clear up model confusion by helping the model identify the information communities or make existing ones bigger. For example, after reading a sample of tweets from users discussing a new event, humans can _quickly_ determine if the users are offering the same perspectives, and should be in the same community. This knowledge can help the agent model these users and other content they interact with better. As we later show experimentally, human interactions like these enables us to build strong information communities, which helps the agent, particularly with the early detection of news sources factuality on new news events. Unlike automated agents, humans cannot analyze all content that pertains to a new event, as it is too massive. Instead, due to the highly connected structure of social media, **small amounts of interactions done in the right places can make significant impact**, as the added information can flow throughout the information graph. Thus, we first discuss in Sec 4.1 how we determine what content humans should interact with and what interactions they should make (i.e. forming/strengthening information communities). Then, in Sec 4.2, we explain how we can incorporate those interactions back into the model to achieve performance improvements. 
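Before turning to the interaction protocols, the following minimal sketch illustrates the kind of R-GCN encoder and softmax source classifier described above. It is not the released implementation of Mehta et al. (2022); it is a PyTorch Geometric sketch in which the two-layer depth, layer sizes, and node-feature handling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class SourceFactualityRGCN(torch.nn.Module):
    """Two-layer R-GCN over the heterogeneous source/article/user graph."""
    def __init__(self, in_dim, hid_dim, num_relations, num_classes=3):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hid_dim, num_relations)
        self.conv2 = RGCNConv(hid_dim, hid_dim, num_relations)
        self.classifier = torch.nn.Linear(hid_dim, num_classes)  # high / mixed / low

    def forward(self, x, edge_index, edge_type, source_idx):
        h = F.relu(self.conv1(x, edge_index, edge_type))
        h = F.relu(self.conv2(h, edge_index, edge_type))
        return self.classifier(h[source_idx])  # logits for source nodes only

# Training step (categorical cross-entropy over labeled source nodes):
# logits = model(x, edge_index, edge_type, source_idx)
# loss = F.cross_entropy(logits, source_labels)
```

The node embeddings produced by such an encoder are what the mismatch criterion in Section 4.1.2 clusters when searching for users whose community assignment conflicts with their predicted factuality.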
### Soliciting Human Interactions Now, we discuss 3 different protocols to identify the data on which humans should interact, and then what they should do. In general, we want humans to analyze a sub-graph of the broad information graph characterizing the new event. Given this sub-graph, we ask humans to help form information communities by characterizing the content in the graph based on similarity, i.e. identify if two users are similar, two articles offer the same perspective, etc. This is done by asking humans a series of questions (details : App. B) which enables them to connect nodes in the sub-graph based on content preferences, via a graphical interface we developed. An ex. is shown in App. Fig 2. We then replicate these connections in the broad information graph. Identifying the sub-graph that will benefit the most from interactions is critical to getting the most value out of each interaction. We build the sub-graphs by first choosing a pair of users, as our end goal is to build stronger user information communities. We explore three different protocols for doing this in 4.1.1 and Sec 4.1.2. After finding these pairs of users, we build the sub-graph to show humans by including these users and their direct connections in the graph. This includes the articles they propagate, other users that propagate those articles, the sources that publish those articles, and up to 3 "influencers" (users with over 1000 followers) that one of these users follows. For each node in the sub-graph, we populate it with relevant information to enable the human interactors to understand content. For ex:, user/source nodes show user bio, tweets, etc. Article nodes show article publish date, headline, and first paragraph. Details: App B. #### 4.1.1 Baselines We have two baselines for selecting pairs of users. **(1) Random**, users at random. **(2) Model Confusion** takes an _active learning_-like approach, and chooses users based on a label confusion criterion, calculated by propagating the softmax score of the source prediction downwards to get user confusion. Specifically, to get this score, we look at all the sources the user directly interacts with (articles they propagate and sources they follow), and then take the weighted average of those source's Softmax predicted label to be the user score (thus approximating user confidence). For example, a user interacting with 3 articles predicted with low factuality score of 0.7 and 1 source with high factuality score 0.9 will have confidence 0.75. #### 4.1.2 Social vs. Factuality Mismatch Criterion Now, we discuss our novel protocol to determine the pairs of users, seen also in Alg 1. It is designed around one of the key ideas in this paper, homophily, the tendency of users with similar social preferences to have similar content preferences. Our graph model learns to represent both, by creating node embeddings which capture users' similarities, and learning classifiers used for characterizing content by identifying factuality. Intuitively, our protocol is designed to identify users, that based on the current model parameters, break the homophily principle. These users are part of the same social group while at the same time have different factuality predictions, and thus likely different content preferences. When this is true, the model may not have clearly understood the content preferences of these users, which a human can help clear up. To identify these pairs of users, we first need to compute factuality labels for each user. 
As the model is trained for source classification, we designed a heuristic to use source labels to compute user labels: We assign users the label of the _most common predicted label_ of the sources/articles the user is directly connected to. For ex:, a user following 3 low factuality sources and tweeting 1 mixed factuality article is assigned a low factuality label, as it interacts with more low factuality content. After computing user labels, we need to find groups of similar users, which we do by k-means clustering all users in the event graph using their model embeddings (Alg 1: 3). Then, we assign each cluster a factuality label based on the most frequently occurring user factuality label in that cluster (Alg 1: 5). Finally, we choose pairs of users that are in the same cluster, but one has a different label than the cluster label, as the model thinks they are similar but predicts their factuality differently, which indicates a sign of confusion (Alg 1: 7-9). ### Incorporating Human Interactions Humans interact by making new connections on the sub-graphs. We then utilize the interactions by connecting the appropriate nodes in the broader event graph. Our goal is to show how human interactions allow us to have a better model that performs well with and without further interactions. We focus on the challenging **fully inductive setting**: where all test set nodes are not seen at training and are also not connected to training set nodes. Further, we evaluate the important setting of early detection of fake news, where test data comes from unseen emerging events. As we show in Sec 5.3, in these settings, existing models struggle. We evaluate 3 interaction-based protocols. The three protocols have the same starting point, a graph-based factuality classification system trained over an established dataset (Baly et al., 2020). The protocols are designed to show how interactions can enhance that initial system when making predictions on data from unseen emerging events, and are organized in order of increasing effort required and increasing performance. All involve performing interactions on up to two different data sets (each corresponding to a different emerging event, see Sec 5.1). Since some of the protocols we introduce update the parameters of the model after interaction, we collect data for two events to ensure that all protocols can be evaluated in the fully inductive settings on the second event data (i.e., relying on interactions alone without training). We hereby refer to the first event as \(E1\), and the second as \(E2\). Each event is further split into interaction and no interaction halves (ex: \(E1\)-\(1\)/\(E1\)-\(2\)), for comparison and model training (see below). **(1) Fully Inductive:** In the first protocol, humans interact on the interaction halves of \(E1\) and \(E2\), and then the interactions are incorporated, without any additional training. This is the most challenging, but no extra effort is necessary for performance improvements. **(2) Interactions Amplify Model Learning:** Here, our goal is to show how interactions can help us learn a stronger model that performs well without interactions. Thus, we interact on the interaction half of \(E1\) (half so we can evaluate how we do on the same event without interactions), use it to train the model, and evaluate it on \(E2\) (future event, fully inductive) without any additional interactions. 
**(3) Learning to Incorporate Interactions:** In this protocol, we show how training the model after interactions allows the model to learn how to better incorporate them. This enables it to do even better when interactions are provided on future events. To do this, as above, we interact on half of \(E1\) and train on it. Then, we evaluate \(E2\) on both the interaction and non-interaction half. Both halves of \(E2\) are connected, so although interactions are only on half, information can propagate via the graph. ### Simulating Human Interactions Due to constraints involved with human interaction time/cost, to evaluate our models we also designed a heuristic to simulate humans: We hypothesize that two users are similar if they have the same gold factuality label. While our interaction approach prioritizes content preferences for interactions, identifying this automatically is difficult, so this is an approximation. Thus, doing human interactions at the scale of simulated ones could perform better, and we leave it for future work. To get user gold factuality labels, we use the same heuristic as in Sec 4.1 (assigning users the label of the source they are most often connected to). Note that in this simulated interaction setting, we are using the test set to determine user labels, so this setup is not realistic. ## 5 Experiments ### Dataset and Collection To evaluate our model's ability to predict the factuality of news medium, we used the Media Bias/Fact Check dataset (Baly et al., 2018). We expand it by scraping additional sources from Media Bias/Fact Check2, for better coverage of recent events and increasing the number of sources for evaluation. Identically to Baly et al., we labeled the sources on a 3-point factuality scale: _high_, _mixed_, or _low_. Our goal in this paper is to show how human interactions can help news source factuality detection on new events, where even strong models struggle (Sec 5.3). To do this, we evaluated our model on **two broad events**: _Black Lives Matter (BLM)_ and _Climate Change (CLM)_. For each, we scraped data from Twitter over 3 time periods (01/02/2019 - 06/01/19; 06/02/19 - 01/1/21; 02/02/21 - 05/06/22), each of which additionally cover many different sub-events. For each time period, we created a fully inductive graph, consisting of at least 99 sources and their metadata. None of these graphs are connected to each other in any way and no nodes in any of them are common with each other or the training set - making our test settings fully inductive, and very challenging. To ensure this inductive setting, when collecting data for future time periods, we made sure not to include sources/users/articles that we already used in previous time periods, even if they propagated content in those future periods. Combined with Baly et al., we used the first time period for training, to teach the model how to identify fake news in general and as it pertains to an event. We used the 2nd and 3rd time period as \(E1\) and \(E2\) in the protocols discussed in Sec 4.2. Details (statistics, etc.): A.2. We release our code and data.3 Footnote 3: [https://github.com/hockeybro12/Fake_News_Interactive_Detection](https://github.com/hockeybro12/Fake_News_Interactive_Detection) ### Evaluation Method For both BLM and CLM, we evaluate on the two inductive sub-events (\(E1,E2\)) collected in Sec 5.1. The interaction half is referred to with a -1 and the non-interaction with -2, for ex: \(E1\)-1. For fair comparison, each data split, i.e. 
\(E1\)-1, is the same across all evaluations. We report results on Accuracy, Macro F1 (the dataset is unbalanced), and the total number of edges added by all interactions. We evaluated 3 settings, the first 2 are simulated using gold test set labels (see Sec 4.3), while the last is done by humans: **(1) Interaction Graphs Only:** Edges are added only between users in interaction graphs. **(2) X% of Data:** Edges are added between X% of all possible users that have the same label in the test set that we run interactions on (not only users in interaction sub-graphs). X is 100%, 75%, or 25%. **(3) Human Interactions**: For BLM, we evaluate on two separate versions, each featuring a different source set (and social media data). This section shows results on the first version, when one human interacts on 20 graphs per data split (details in B.1, B.2). The appendix shows the second version of BLM with _3 different interactors_, showing the same trends. For space, these interaction results/details (including agreement) are in B.2.3. This section also evaluates Climate Change, with 2 interactors interacting on 10 sub-graphs per data split. (for detailed CLM results, see App. C). The human interaction results are also the most realistic evaluation setting, as they don't use any gold test set labels, like the simulated interactions do. ### Baselines We trained our baseline model, from Sec. 3, for Source Factuality Detection on Baly et al. and the first event, where it achieved strong performance, similar to SOTA Mehta et al. (2022) (we use the same data and methodology) and other baselines Baly et al. (2020) (SVM). However, when evaluated inductively on a BLM event that was published after dates the training data was collected from - i.e. \(E1\)-1 - performance significantly worsened (see Table 1). This validated our hypothesis that strong models, even if trained on generic and event specific data, do not translate well to future events. Thus, we propose to use our interactive protocol. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Model** & **Baly** & **Baly** & **E1-1** & **E1-1** \\ & **Acc.** & **F1** & **Acc** & **F1** \\ \hline Baly & 71.52 & 67.25 & - & - \\ Mehta R-GCN & 68.90 & 63.72 & - & - \\ Mehta BEST & 72.55 & 66.89 & - & - \\ \hline BL: Mehta R-GCN & 66.04 & 54.20 & 43.21 & 34.44 \\ \hline \end{tabular} \end{table} Table 1: Baseline results on Baly et al. (2020) and an inductive future BLM event \(E1\)-1 (not seen or connected to the training graph). Baseline (BL) is the strong graph classification model from Mehta et al. (2022) Mehta R-GCN) that was competitive with the state of the art ((Mehta et al., 2022) - Mehta BEST). Even with this, performance significantly worsens on \(E1\), showing that detecting fake news on future events inductively is challenging. BL: Mehta R-GCN was trained on a smaller Baly et al. (2020) dataset, as some sources were used for evaluation, which is why the performance is slightly lower. \begin{table} \begin{tabular}{|l|l|l|} \hline **Model** & **E2-1 Acc** & **E2-1 F1** \\ \hline Random Users & 35.21 & 29.99 \\ Confused Users & 36.61 & **32.72** \\ User Clustering & **42.10** & 32.22 \\ \hline \end{tabular} \end{table} Table 2: Ablation study on our methods for choosing interactions on \(E2\)-1. It is clear that finding users based on clustering and then factuality mismatch is best. ### Interactions We now evaluate our interaction protocols - what portions of the graph to show users and how to incorporate interactions, using the method in Sec 5.2. 
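As a rough illustration of the clustering-based selection of Sec. 4.1.2 (Alg. 1), whose alternatives are compared in Sec. 5.4.1 below, here is a minimal sketch; the embeddings, predicted source labels, and user-to-source connections are toy placeholders rather than the paper's data structures.

```python
# Minimal sketch of confusion-based user-pair selection (Alg. 1), under assumed toy inputs.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
num_users, num_sources, dim = 50, 8, 16
user_emb = rng.normal(size=(num_users, dim))           # R-GCN user embeddings (toy)
source_pred = rng.integers(0, 3, size=num_sources)      # predicted factuality per source (toy)
user_to_sources = [rng.choice(num_sources, size=3, replace=False) for _ in range(num_users)]

# Step 1: heuristic user label = most common predicted label of directly connected content.
user_label = np.array([Counter(source_pred[s] for s in srcs).most_common(1)[0][0]
                       for srcs in user_to_sources])

# Step 2: cluster users by embedding; label each cluster by its majority user label.
cluster_id = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(user_emb)
cluster_label = {c: Counter(user_label[cluster_id == c]).most_common(1)[0][0]
                 for c in np.unique(cluster_id)}

# Step 3: flag users whose label disagrees with their cluster (broken homophily),
# and pair them with a same-cluster user that matches the cluster label.
pairs = []
for u in range(num_users):
    c = cluster_id[u]
    if user_label[u] != cluster_label[c]:
        mates = np.where((cluster_id == c) & (user_label == cluster_label[c]))[0]
        if len(mates) > 0:
            pairs.append((u, int(mates[0])))
print(pairs[:5])  # candidate user pairs whose sub-graphs would be shown to interactors
```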
#### 5.4.1 Soliciting Interactions When comparing our methods for choosing what sub-graphs to show on BLM, simulated interactions performance shows a benefit (Table 2) of choosing the users to build interaction graphs for based on confused user clustering. This matches our intuition as if the model predicts a users' factuality differently than other users similar to it, then the model is confused and clearing that could improve performance. Thus, we use this method of choosing sub-graphs throughout the rest of our experiments. #### 5.4.2 Incorporating Interactions Now, we evaluate our 3 protocols of incorporating interactions discussed in Sec 4.2, in order of increasing performance and model training required. For space, additional human interaction results are in App. B.2.3 and detailed CLM results in App. C. Note that simulated interactions (4.3) use gold test set labels and thus are only used to test our models. First, Protocol 1, where we evaluate how the model performs with interactions in the completely inductive setting, so no training is necessary. In Tab. 3, we ran interactions on only the interaction half of each event (\(E1\)-1 + \(E2\)-1) and the dev. data (also from \(E1\)-1), to choose the strongest model. To ensure the dev. set being chosen from \(E1\)-1 does not bias us into a strong model, we also did interactions on the interaction half of \(E2\) and notice stronger performance improvements. Note that \(E2\) is a future event and is not connected to \(E1\) at all. All settings improve performance. Moreover, on BLM, human interactions improves performance \(\sim\)9.3% Acc. on E2-1, comparable to simulated interactions with significantly more data, showing the large impact benefit of human interactions. Next, in Protocol 2, we learn a better model for \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline **Model** & **E1-2** & **E1-2** & **E2-1** & **E2-1** & **\#** \\ & **Ace** & **F1** & **Ace** & **F1** & **Edges** \\ \hline BLM No Interactions & 37.93 & 30.70 & 35.21 & 27.65 & - \\ BLM No Interactions Train & 64.86 & 66.91 & 42.10 & 40.10 & - \\ CLM No Interactions Train & 49.29 & 44.84 & 44.77 & 42.35 & - \\ \hline BLM Sim. Interactions on Sub-Graphs Only & 62.16 & 62.95 & 43.66 & 29.47 & 2,162 \\ BLM Sim. Interactions on 100\% of Data in E1-1 & 56.75 & 59.27 & 45.07 & 40.18 & 133,336 \\ BLM Sim. Interactions on 75\% of Data in E1-1 & 65.51 & 64.01 & 43.66 & 39.11 & 74,414 \\ BLM Sim. Interactions on 25\% of Data in E1-1 & 54.05 & 46.48 & 39.43 & 35.09 & 8,266 \\ \hline BLM Human Interactions in E1-1 & 67.56 & 71.56 & 45.07 & 35.18 & 84 \\ CLM Human Interactions in E1-1 & 53.52 & 44.53 & 40.29 & 46.38 & 47 \\ \hline \end{tabular} \end{table} Table 4: Protocol 2: Interactions results on BLM + Climate Change (CLM) when we train on interactions, and then apply the model to a new event with no interactions done. E1 and E2 are the two separate, inductive graphs. E1-1 is the interaction half of the \(1^{st}\) event and E1-2 is the \(2^{nd}\), non-interaction half. E2-1 (non-interaction half) is not connected to E1. Compared to the model that was trained on E1-1 without interactions (No Interactions Train), human interactions lead to a more accurate model for future events, by \(\sim\)3% better Acc. for BLM and \(\sim\)4% F1 for CLM (E2-1). Sim. settings also show improvements. 
\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Model** & **E1-1** & **E1-1** & **E1-2** & **E1-2** & **E2-1** & **\#** \\ & **Ace** & **F1** & **Ace** & **F1** & **Ace** & **F1** & **Edges** \\ \hline BLM No Interactions & 43.21 & 34.44 & 37.93 & 30.70 & 35.21 & 27.65 & - \\ CLM No Interactions & 40.16 & 32.77 & 39.65 & 31.86 & 34.88 & 30.93 & - \\ \hline BLM Sim. Interactions on Sub-Graphs Only & 44.54 & 36.45 & 37.93 & 30.70 & 42.10 & 32.22 & 2,162 \\ BLM Sim. Interactions on 100\% of Data in E1-1 + E2-1 & 49.20 & 40.52 & 37.93 & 30.70 & 44.73 & 36.82 & 133,336 \\ BLM Sim. Interactions on 75\% of Data in E1-1 + E2-1 & 46.03 & 38.05 & 37.93 & 30.70 & 50.00 & 40.50 & 74,414 \\ BLM Sim. Interactions on 25\% of Data in E1-1 + E1-2 & 46.03 & 37.65 & 37.93 & 30.70 & 42.10 & 32.65 & 8,266 \\ \hline BLM Human Interactions in E1-1 + E2-1 & 44.44 & 35.96 & 37.93 & 30.70 & 44.73 & 30.03 & 84 \\ CLM Human Interactions in E1-1 + E2-1 & 46.72 & 43.94 & 39.65 & 31.86 & 39.53 & 36.95 & 47 \\ \hline \end{tabular} \end{table} Table 3: Protocol 1: Interactions results on BLM and Climate Change (CLM) in the difficult, inductive, no training setting. E1 and E2 are the two separate, inductive graphs. E1-1 is the first half that receives interactions, and E1-2 is the second half that doesn’t. E2-1 (first half E2) also receives interactions, but it’s dev set is not used to select the model. With a minimal number of added edges, human interactions achieve performance improvements in these difficult, inductive settings, with no extra training (compared to No Interactions). Ex: results improve on human BLM E2-1 (\(\sim\)9.5% Acc.) Sim. settings also show improvements. news source factuality detection after doing interactions, compared to not doing any. In Tab. 4, we ran interactions on the interaction half of \(E1\), and then trained on that data. On \(E2\) with no interactions done, we can see how this improves accuracy compared to models trained without interactions. Finally, for Protocol 3, we learn to better incorporate interactions into the model after we train for it. Thus, we train similarly to Protocol 2, but now we also run interactions on the interaction half of \(E2\). In Tab. 5, we see accuracy improves on both halves of \(E2\) after we learn to incorporate interactions on \(E1\), even though \(E2\) is inductive. Further, F1 improves on \(E1\)-\(1\). This shows that training with and then doing interactions helps performance significantly on future events. We hypothesize that this happens as training with interactions enables the model to learn how to incorporate them better, allowing the model to further take advantage of them whenever provided. Further, human interactions based on content preferences provide clearer results compared to simulated ones (without cheating and using test set labels), as the model better learns the social media landscape, shown by it achieving better accuracy on the BLM interacted and non-interacted data (both halves of \(E2\)). From these results, we see that our real-world applicable human interactions models result in performance improvements in either Accuracy or Macro F1, often times both. As a whole, all our models improve performance (any non-gain in one of these metrics is offset by significant gains in the other). We additionally hypothesize that performing more interactions (particularly human) will achieve higher and more consistent results. 
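The discussion that follows (Sec. 6.1) measures the learned information communities via cluster purity; a minimal sketch of that computation, with toy embeddings and labels standing in for the real node data, is shown below.

```python
# Minimal sketch of the cluster-purity metric used in Sec. 6.1: each K-means cluster is
# assigned its most frequent gold label, and purity is the fraction of items matching
# their cluster's assigned label. Inputs are toy placeholders.
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def cluster_purity(embeddings, gold_labels, n_clusters=3, seed=0):
    assignments = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=seed).fit_predict(embeddings)
    correct = 0
    for c in np.unique(assignments):
        members = gold_labels[assignments == c]
        correct += Counter(members).most_common(1)[0][1]  # count of the majority label
    return correct / len(gold_labels)

rng = np.random.default_rng(0)
emb = rng.normal(size=(30, 8))        # e.g., source/article/user embeddings (toy)
gold = rng.integers(0, 3, size=30)    # high / mixed / low factuality labels (toy)
print(f"purity = {cluster_purity(emb, gold):.3f}")
```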
## 6 Discussion Now, we analyze our best BLM interaction model for fake news source detection (for each protocol) on \(E2\)-\(1\) by answering these research questions: (1) _Do interactions help learn better communities?_ (2) _What pairs of nodes do humans connect?_ (3) _How can our model be used in the real world?_ (4) _Do interactions change embeddings?_ App. D.1 ### Learned Communities We analyze how interactions help learn better info. communities. We evaluate cluster-purity by K-means clustering sources, articles, and users before and after interactions are done. To compute purity, each cluster is assigned to the class which is most frequent in it, and then the accuracy of this is measured. Users are assigned gold labels based on the most common label of all the nodes they are directly connected to in the graph. Results in Tab. 6 show purity increases after interactions, showing interactions help learn better communities. ### Human Interaction Analysis/Examples We analyze the interactions to determine what humans connected. We see humans make smart decisions in matching content preferences. Further, we show specific examples, demonstrating the ease, \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Model** & **Purity** & **\# Edge** \\ \hline No Inter. & 36.2, 37.8, 33.3 & - \\ \hline P1: Inductive Human & 39.2, 40.2, 35.3 & 84 \\ \hline P2: Train Human & 49.5, 37.4, 41.4 & 84 \\ \hline P3: Train + Inter. Human & 53.4, 41.9, 42.6 & 84 \\ \hline \end{tabular} \end{table} Table 6: Purity clustering (sources, articles, users) for the human interaction protocols on E2-1. As training increases with each protocol (P), purity does too, showing that interactions do help to learn better information communities. \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline **Model** & **E1-2** & **E1-2** & **E2-1** & **E2-1** & **E2-2** & **E2-2** & **\#** \\ & **Acc** & **F1** & **Acc** & **F1** & **Acc** & **F1** & **Edges** \\ \hline BLM No Interactions & 37.93 & 30.70 & 35.21 & 27.65 & 30.30 & 24.84 & - \\ BLM No Interactions Train & 64.86 & 66.91 & 42.10 & 40.10 & 45.45 & 42.35 & - \\ CLM No Interactions Train & 49.29 & 44.84 & 44.77 & 42.35 & 44.44 & 33.06 & - \\ \hline BLM Sim. Interactions on Sub-Graphs Only & 62.16 & 62.95 & 57.89 & 48.53 & 45.45 & 43.49 & 2,162 \\ BLM Sim. Interactions on 100% of Data in E1-1 + E1-2 & 56.75 & 59.27 & 57.89 & 61.90 & 36.36 & 35.53 & 133,336 \\ BLM Sim. Interactions on 75% of Data in E1-1 + E1-2 & 65.51 & 64.01 & 63.15 & 61.84 & 45.45 & 43.60 & 74,414 \\ BLM Sim. Interactions on 25% of Data in E1-1 + E1-2 & 54.05 & 46.48 & 44.73 & 31.38 & 51.51 & 45.31 & 8,266 \\ \hline BLM Human Interactions in E1-1 + E2-1 & 67.56 & 71.56 & 50.00 & 43.60 & 51.51 & 40.09 & 84 \\ CLM Human Interactions in E1-1 + E2-1 & 53.52 & 44.53 & 53.48 & 43.07 & 46.80 & 38.73 & 47 \\ \hline \end{tabular} \end{table} Table 5: Protocol 3: Results on BLM + Climate Change (CLM) when we train on interactions and then do more in the inductive setting. E1 and E2 are the two separate inductive graphs. E1-1 is the interaction half of E1 that is trained on. E1-2 is the non-interaction half. E2 receives interactions on the interaction half (E2-1), but not the non-interaction half (E2-2). Human interactions improve accuracy on both halves of E2 and F1 on E2-1, compared to no interactions train, and more than only applying interactions without training for them as Tab.3, showing the benefit of training to learn to incorporate interactions. 
quickness, and lack of subjectivity of the interaction process. These details/ex. are in App. D.2. ### Real World Use Case As shown in Sec. 5, our interactive protocols enable rapidly (humans spent \(\sim\)3 min/sub-graph) learning better source factuality detection models for new events, even in the most challenging settings when there are no users, articles, or sources in common with prior data. This happens as contrary to providing additional labels, which can be time consuming and hard, interactions clear up content preferences, creating better social homophily and performance. Specifically, in a real-world use case, interacting at training time learns a better model for the new event setting (Protocol 2 results on E2-1). In addition, this model would become even stronger as more interactions are performed, even without any further training, as seen in Protocol 3. Thus, when new news events happen, humans can interact on a few settings (our interaction sub-graphs) and our setup enables the model to amplify this knowledge to rapidly detect fake news sources on a large scale. ## 7 Summary and Future Work We proposed an initial protocol to interactively build stronger information communities, applying it on source factuality detection. We focused on the early detection settings, where even strong models can struggle. Our approach of finding sub-graphs and then interacting on them via 3 protocols enables minimal, quick human interactions to achieve significant performance improvements. We hypothesize that our interactive framework can generalize to other social media analysis tasks like bias or topic detection, and testing it is our future work. Additionally, we aim to scale up our interaction process, to include additional human interactions and types of interactions. ## 8 Acknowledgements We thank the anonymous reviewers of this paper for all of their vital feedback. The project was funded by NSF CAREER award IIS-2048001 and IIS-2135573. ## 9 Ethics Statement In this section, we first discuss some limitations of our model (9.1), and then expand on that with a discussion on ethics as it relates to our data collection, data usage, human interaction, and the deployment of our models (9.2). ### Limitations This work tackles fake news source detection in English on Twitter (our social media platform of choice). Our methods may or may not apply to other languages with different morphology, and other social networking platforms. We leave the investigation of this to future work, but are optimistic that especially with the benefit of interactivity, our methods may generalize. The nature of our interactive framework also requires human interactors to interact, which could be a potential limitation. Interactors must have some general understanding of news content and be able to identify if two entities (users, sources, or articles) have similar content relationships. However, as interactors are just looking for content/perspective similarity, they need not be aware of the latest events or be fake news detection specialists. Further, human interactors don't analyze user-specific information or profile users themselves, they just determine if users have similar content relationships. We used a single GeForce GTX 1080 NVIDIA GPU to train our models, with 12 GB of memory. As our models are largely textual based, they do not require much GPU usage. However, scaling our experiments to larger scale settings in real world settings could require more compute, which may be a potential limitation. 
Our hyper-parameter search, mentioned in App A.3, was done manually. ### Ethics To the best of our knowledge, no code of ethics was violated throughout the experiments done in this paper. We reported all hyper-parameters and other technical details necessary to reproduce our results, and release the code and dataset we collected. We evaluated our model on the datasets that we collected in this paper and on one collected by prior work, but it is possible that results may differ on other datasets. We believe our methodology is solid and applies to any social media fake news setting. Due to lack of space, we placed some of the technical details and discussion in the Appendix section. The results we reported support our claims in this paper, and we believe they are reproducible. Any qualitative result we report is an outcome from a machine learning model that does not represent the authors' personal views. For anything associated with the data we use, we do not include account information and all results are anonymous. In our dataset release, we include sources, users, and articles, with enough data to produce the results described in the paper and the Appendix. Sources are public information provided in (Baly et al., 2020), and we map each to an ID. We release article graph embeddings, which can be used to train our models. As these embeddings are neural network representations, they can't be mapped back to article text. However, we also release article URLs, so that the articles can be downloaded, if they are still publicly available. Additionally, we release the Twitter data that we used, in compliance with the Twitter API policies 4. In our dataset release, each user is referenced by their Twitter ID, and their graph ID (the graph ID is meaningless on its own). We release the mapping of the Twitter ID to the graph ID. Since we only release Twitter IDs, and not the actual Twitter text or user information, in order to download the exact Twitter data that we used, users must use the Twitter API to gather the latest public information5. This ensures that we respect user privacy, in accordance with the policies mentioned by the Twitter API, as only user content that is still public can be downloaded and we are not storing/releasing any data. We also provide the model representations for each user, article, and source we used as our initial embedding in the graph. As these are neural network model embeddings, they can't be mapped back to the individual text. Our data is meant for academic research purposes and should not be used outside of academic research contexts. All our data is in English. Footnote 4: [https://developer.twitter.com/en/developer-terms/more-on-restricted-use-cases](https://developer.twitter.com/en/developer-terms/more-on-restricted-use-cases) Footnote 5: [https://developer.twitter.com/en/docs/twitter-api](https://developer.twitter.com/en/docs/twitter-api) In this paper, we did not use any of the Twitter data for user surveillance purposes, and we encourage the community to do the same, to respect user privacy. We also do not profile users; we only use the user insights in aggregate to classify news sources. Further, we only use public Twitter profiles, of which there are enough for our framework to work in real-time situations. When doing human interactions, we show humans public Twitter information, so that they can determine user similarity. 
To do this, we use the Twitter API to determine the Twitter data that is publicly available at the time of interaction, show that to humans, and then discard the Twitter information. Further, in our graph model, we do not store any user-specific information; we only store neural network model embeddings, which are used for training and cannot be mapped back to the original text or user. The same is true for articles, so we are actually discarding all the text (Twitter, article, and source). Users of our framework should also do the same - use public knowledge for interactions, do not store any user/article-specific data, and instead use the appropriate APIs to retrieve the data when needed. Our framework in general is intended to be used to defend against fake news. While our framework could be used to build better methods of designing fake news, our methodology of interactive fake news detection could guard against that as well. We caution that our models and methods be considered and used carefully. This is because in an area like fake news detection, there are great consequences of wrong model decisions, such as unfair censorship and other social issues. Further, despite our efforts, it is possible our models are biased, and this should also be taken into consideration. Our protocol of building sub-graphs based on model confusion, which we used when showing humans what to interact on, can be used to get insights into the model to help prevent some of these issues as well. However, this is definitely an area of future work. In the interactive setting we proposed, our approach relies on getting insights from human interactors and using that to improve performance in fake news detection. While that led to performance improvements in this work and we believe it will hold in different settings, there could be issues, such as biased humans. Running interactions at large scale with multiple human experts per subgraph can help mitigate some of these issues. For example, edges can be weighted in the graph based on how many humans chose to add them. Thus, extremely biased interactors' decisions would be given less weight, and maybe even not considered by the model. We leave this for future work. However, despite this, there may still be some human interactor bias that can leak into the final fake news detection model, which is why important decisions should perhaps not be made by machine learning models alone, but rather with the models used as a tool. As mentioned in the Appendix B.2.3, the human interactors we used were Computer Science PhD students. The interactors were awarded research credits for their work, as the hours they spent working on the task were considered as part of their research credit hours. They were explained the entire process beforehand, including what the interactions would be used for, and agreed to perform the interaction. The total interaction process took under 3 hours, including the time spent explaining the process. These and many other issues are things to consider when using fake news detection models such as the one proposed in this work.
The rise of social media has enabled the spread of disinformation: text published with the intent to disseminate false information and sway people's beliefs. Rapidly detecting fake news, particularly as new events emerge, is important for preventing misinformation. Prior work has tackled this problem with supervised learning systems, but it is difficult for automated models to capture the complexity of the social media landscape in which fake news spreads. Conversely, having humans fact-check every piece of news does not scale. This paper therefore proposes an interactive, human-in-the-loop approach, in which a small amount of human input helps the automated system learn better representations of social media. Experiments on real-world data show that performance on detecting the factuality of news sources improves after only a few human interactions.
2310.20578
Fault-Tolerant Operation of Bosonic Qubits with Discrete-Variable Ancillae
Fault-tolerant quantum computation with bosonic qubits often necessitates the use of noisy discrete-variable ancillae. In this work, we establish a comprehensive and practical fault-tolerance framework for such a hybrid system and synthesize it with fault-tolerant protocols by combining bosonic quantum error correction (QEC) and advanced quantum control techniques. We introduce essential building blocks of error-corrected gadgets by leveraging ancilla-assisted bosonic operations using a generalized variant of path-independent quantum control (GPI). Using these building blocks, we construct a universal set of error-corrected gadgets that tolerate a single photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably, our construction only requires dispersive coupling between bosonic modes and ancillae, as well as beam-splitter coupling between bosonic modes, both of which have been experimentally demonstrated with strong strengths and high accuracy. Moreover, each error-corrected bosonic qubit is only comprised of a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC in the full fault-tolerant setting. We numerically demonstrate the feasibility of our schemes using current experimental parameters in the circuit-QED platform. Finally, we present a hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code utilizing only beam-splitter couplings. Our estimates suggest that the overall noise threshold can be reached using existing hardware. These developed fault-tolerant schemes extend beyond their applicability to four-legged cat qubits and can be adapted for other rotation-symmetrical codes, offering a promising avenue toward scalable and robust quantum computation with bosonic qubits.
Qian Xu, Pei Zeng, Daohong Xu, Liang Jiang
2023-10-31T16:13:04
http://arxiv.org/abs/2310.20578v1
# Fault-Tolerant Operation of Bosonic Qubits with Discrete-Variable Ancillae ###### Abstract Fault-tolerant quantum computation with bosonic qubits often necessitates the use of noisy discrete-variable ancillae. In this work, we establish a comprehensive and practical fault-tolerance framework for such a hybrid system and synthesize it with fault-tolerant protocols by combining bosonic quantum error correction (QEC) and advanced quantum control techniques. We introduce essential building blocks of error-corrected gadgets by leveraging ancilla-assisted bosonic operations using a generalized variant of path-independent quantum control (GPI). Using these building blocks, we construct a universal set of error-corrected gadgets that tolerate a single photon loss and an arbitrary ancilla fault for four-legged cat qubits. Notably, our construction only requires dispersive coupling between bosonic modes and ancillae, as well as beam-splitter coupling between bosonic modes, both of which have been experimentally demonstrated with strong strengths and high accuracy. Moreover, each error-corrected bosonic qubit is only comprised of a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC in the full fault-tolerant setting. We numerically demonstrate the feasibility of our schemes using current experimental parameters in the circuit-QED platform. Finally, we present a hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code utilizing only beam-splitter couplings. Our estimates suggest that the overall noise threshold can be reached using existing hardware. These developed fault-tolerant schemes extend beyond their applicability to four-legged cat qubits and can be adapted for other rotation-symmetrical codes, offering a promising avenue toward scalable and robust quantum computation with bosonic qubits. ## I Introduction Quantum error correction (QEC) enables reliable quantum information processing [1, 2, 3]. However, paradigmatic QEC schemes, particularly those employing surface codes with physical qubits [4, 5, 6, 7], suffer from huge resource overhead [8, 9]. This resource-intensive nature creates a substantial gap between the theoretical potential of fault tolerance and the capabilities of current noisy intermediate-scale quantum (NISQ) [10] devices. Encoding quantum information into bosonic systems [11, 12, 13, 14, 15] by leveraging their infinite-dimensional Hilbert spaces offers a promising avenue to reduce the overhead of QEC [16, 17, 18, 19, 20, 21]. While robust quantum memories based on single-mode bosonic codes have been experimentally demonstrated with improved memory lifetime [22, 23, 24], realizing error-corrected operations on these bosonic qubits remains a formidable task. One of the primary complexities stems from the weak non-linear interactions inherent in bosonic modes, necessitating the use of discrete-variable ancillae in systems such as circuit quantum electrodynamics (circuit QED) platform [25, 26]. However, a significant challenge arises in this hybrid system, as errors in the ancillae tend to propagate back to the bosonic mode, potentially compromising the encoded quantum information [27]. To address this issue, several methods have been developed to maintain precise control over the bosonic mode even in the presence of noisy ancillary systems [28, 29, 30]. 
Nevertheless, a comprehensive fault-tolerance framework for this hybrid system, along with guidelines for constructing fully fault-tolerant protocols using advanced quantum control concepts, remains conspicuously absent. Consequently, while universal error-detection operations on bosonic qubits have been constructed [31, 32] and demonstrated [33], achieving a complete set of error-corrected operations has remained a significant challenge. In this work, we bridge this gap by introducing a fault-tolerance framework tailored to the hybrid system composed of bosonic data modes and discrete-variable ancillae. Inspired by concatenated qubit codes [34], we identify essential properties for gadgets encoded in bosonic codes (referred to as "level-1" gadgets) in Sec. III. These properties play a crucial role in determining the fault tolerance of a level-1 circuit, where the overall failure probability must be suppressed to a certain order of the physical error rate. Furthermore, we demonstrate how the defined fault tolerance can be achieved through the integration of bosonic QEC with compatible quantum control techniques. Specifically, in Secs. IV and V, we establish a connection between a generalized version of path-independent control [30] (referred to as GPI) and fault tolerance, highlighting the importance of GPI operations as fundamental building blocks for error-corrected gadgets. As an application of these fault-tolerant tools, in Sec. VI, we construct universal error-corrected gadgets using GPI operations for the four-legged cat qubit [35, 14, 36]. These gadgets can tolerate a single photon loss and an arbitrary ancilla fault, while only relying on dispersive coupling between bosonic modes and ancillae [29, 37, 38] and beam-splitter (BS) coupling between bosonic modes [39, 40]. Importantly, these coupling mechanisms have been experimentally demonstrated with strong coupling strengths. Each level-1 logical qubit, encoded in a four-legged cat code, utilizes only a single bosonic mode and a three-level ancilla, featuring the hardware efficiency of bosonic QEC. We numerically demonstrate the second-order error suppression for the level-1 gadgets. Moreover, we show that using a teleportation gadget that pumps energy into the system and suppresses phase-rotation errors, a robust cat-encoded memory is feasible even in the presence of finite \(\chi\) mismatches in the circuit-QED platform with current experimental parameters [29]. Finally, in Sec. VII, we present a practical and hardware-efficient architecture for fault-tolerant quantum computing by concatenating the four-legged cat qubits with an outer qubit code. While we primarily focus on the four-legged cat code throughout this work, we discuss in Sec. VIII that the fault-tolerant schemes developed herein can be readily adapted to other rotation-symmetric bosonic codes [41]. ## II System description and error model We first introduce some notations. We denote \([k]:=\{1,2,\cdots,k\}\) as the set of integers from \(1\) to \(k\). We denote \(\left[\int_{t_{h}}dt_{h}\right]_{h\in[k]}:=\int_{t_{k}}dt_{k}\int_{t_{k-1}}dt_{k-1}\cdots\int_{t_{1}}dt_{1}\) as the multiple integral over variables in \(\{t_{h}\}_{h\in[k]}\), and similarly \(\left[\sum_{a_{h}}\right]_{h\in[k]}:=\sum_{a_{k}}\sum_{a_{k-1}}\cdots\sum_{a_{1}}\) as the sum over variables in \(\{a_{h}\}_{h\in[k]}\). We denote \(A\propto B\) if there exists some \(c\in\mathbb{C}\) such that \(A=cB\). We denote \(\mathcal{T}\) as the time ordering operator. 
### Preliminaries #### ii.1.1 Bosonic codes Single-mode bosonic error-correcting codes encode logical information into a subspace of the infinite-dimensional Hilbert space of an oscillator. Among them, the four-legged cat code [35, 14, 36] encodes a single logical qubit and has codewords \[\ket{\mu_{L}}=c_{\mu}\left[\ket{\alpha}+\ket{-\alpha}+(-1)^{\mu}(\ket{i\alpha}+\ket{-i\alpha})\right], \tag{1}\] where \(\mu=0/1\), \(\ket{\gamma}\) denotes a coherent state with an amplitude \(\gamma\in\mathbb{C}\), and \(c_{\mu}=1/(2\sqrt{2\exp(-|\alpha|^{2})(\cosh|\alpha|^{2}+(-1)^{\mu}\cos|\alpha|^{2})})\) are normalization constants. Given any quantum code encoding a single logical qubit, we denote \(P_{c}:=\ket{0_{L}}\bra{0_{L}}+\ket{1_{L}}\bra{1_{L}}\) as the projection onto the codespace, and \(\bar{X}_{c},\bar{Y}_{c},\bar{Z}_{c}\) the logical \(X\)-, \(Y\)-, \(Z\)-Pauli operators, respectively. The capability of an error-correcting code to correct a given set of errors \(\mathbf{E}\) is given by the Knill-Laflamme (KL) condition [42]: \(P_{c}E_{i}^{\dagger}E_{j}P_{c}\propto P_{c}\) for any \(E_{i},E_{j}\in\mathbf{E}\). More specifically, we can evaluate the \(2\times 2\) QEC matrix \(\epsilon_{jk}^{c}\) for any pair of errors \(E_{j},E_{k}\) [36]: \[P_{c}E_{j}^{\dagger}E_{k}P_{c}=\epsilon_{jk}^{c}, \tag{2}\] where \(\epsilon_{jk}^{c}\) can be parametrized as \(\epsilon_{jk}^{c}=c_{jk}^{c}P_{c}+x_{jk}^{c}\bar{X}_{c}+y_{jk}^{c}\bar{Y}_{c}+z_{jk}^{c}\bar{Z}_{c}\), where \(c_{jk}^{c},x_{jk}^{c},y_{jk}^{c},z_{jk}^{c}\in\mathbb{C}\). The KL condition is satisfied if \(x_{jk}^{c}=y_{jk}^{c}=z_{jk}^{c}=0\) for any \(j\) and \(k\). Consider the four-legged code and an error set containing a single-photon loss \(\mathbf{E}=\{I,a\}\), where \(a\) denotes the annihilation operator. First, we have \(P_{c}aP_{c}=0\), indicating that a single-photon loss is perfectly detectable. Second, \[P_{c}a^{\dagger}aP_{c}=\bar{n}P_{c}+\frac{\delta n}{2}\bar{Z}_{c}, \tag{3}\] where \(\bar{n}:=(\bra{0_{L}}a^{\dagger}a\ket{0_{L}}+\bra{1_{L}}a^{\dagger}a\ket{1_{L}})/2\) denotes the mean photon number and \(\delta n:=\bra{0_{L}}a^{\dagger}a\ket{0_{L}}-\bra{1_{L}}a^{\dagger}a\ket{1_{L}}\) denotes the photon number difference between the two codewords. For an arbitrary \(\alpha\), \(\delta n\neq 0\), indicating that a single photon loss is not perfectly correctable. However, \(\delta n=O(e^{-2\alpha^{2}})\) as \(\alpha\gg 1\), and a single-photon loss is approximately correctable for large \(\alpha\). Furthermore, \(\delta n=0\) is exactly satisfied at a discrete set of finite \(\alpha\) [43], which we refer to as sweet spots. Similarly, one can show that for a continuous set of phase-rotation errors \(\mathbf{R}=\{e^{i\theta a^{\dagger}a}\}_{\theta\in[-\theta_{m},\theta_{m}]}\), the KL condition is approximately satisfied for large \(\alpha\) if \(\theta_{m}<\pi/4\) [41]. First, \(P_{c}e^{-i\theta_{1}a^{\dagger}a}e^{i\theta_{2}a^{\dagger}a}P_{c}=c_{12}P_{c}+z_{12}\bar{Z}_{c}\) for any \(\theta_{1},\theta_{2}\in[-\theta_{m},\theta_{m}]\) since \(e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\) preserves the photon number. 
Next, \[z_{12}=\left(\bra{+_{L}}e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\ket{-_{L}}+\bra{-_{L}}e^{i(\theta_{2}-\theta_{1})a^{\dagger}a}\ket{+_{L}}\right)/2\approx\left(\langle i\alpha|\alpha e^{i(\theta_{2}-\theta_{1})}\rangle+\langle-i\alpha|\alpha e^{i(\theta_{2}-\theta_{1})}\rangle\right)/2+h.c., \tag{4}\] where the approximation utilizes that \(\ket{+_{L}}\approx(\ket{\alpha}+\ket{-\alpha})/\sqrt{2}\) and \(\ket{-_{L}}\approx(\ket{i\alpha}+\ket{-i\alpha})/\sqrt{2}\) for large \(\alpha\). Obviously, \(z_{12}\to 0\) as \(\alpha\gg 1\) as long as \(|\theta_{2}-\theta_{1}|\neq\pi/2\), which holds if \(\theta_{m}<\pi/4\). To conclude, the four-legged cat code can approximately correct a single photon loss and a continuous set of phase rotations with amplitude smaller than \(\pi/4\) (for large \(\alpha\)). In fact, cat codes serve as numerically optimized codes for certain regimes of a bosonic channel with both loss and dephasing errors [44]. #### ii.1.2 Open quantum system and Markovian quantum evolution A noisy Markovian evolution of a quantum system is described by a Lindblad master equation: \[\frac{d\rho}{dt}=\mathcal{L}(t)\rho=-i[H(t),\rho]+(\sum_{j}\mathcal{D}[\sqrt{\gamma_{j}}J_{j}])\rho, \tag{5}\] where \(H(t)\) is the system Hamiltonian, \(\mathcal{D}[O]=O\bullet O^{\dagger}-\frac{1}{2}\{O^{\dagger}O,\bullet\}\) is a Lindblad superoperator associated with a jump operator \(O\), and \(\gamma_{j}\) is the jump rate for the error \(J_{j}\). Denote \(H_{\text{eff}}(t):=H(t)-\frac{i}{2}\sum_{j}\gamma_{j}J_{j}^{\dagger}J_{j}\) as the effective Hamiltonian, and \(\mathcal{S}:=\sum_{j}\gamma_{j}J_{j}\bullet J_{j}^{\dagger}\) as the superoperator describing quantum jumps. The Lindbladian \(\mathcal{L}(t)\) can be rewritten as \(\mathcal{L}(t)=-i[H_{\text{eff}}(t),\bullet]+\mathcal{S}\). Then, the joint quantum channel, given by the time integral of Eq. (5), admits a Dyson expansion with respect to \(\mathcal{S}\) [30]: \[\rho(t)=\mathcal{G}(t,0)\rho(0)=\sum_{q=0}^{\infty}\mathcal{G}_{q}(t,0)\rho(0), \tag{6}\] where \(\mathcal{G}_{0}(t,0)=\mathcal{W}(t,0):=W(t,0)\bullet W^{\dagger}(t,0)\), with \(W(t,0):=\mathcal{T}\exp\left[-i\int_{0}^{t}H_{\text{eff}}\left(t^{\prime}\right)dt^{\prime}\right]\), describes evolution without any quantum jumps, and \[\mathcal{G}_{q}(t)=\left[\int_{t_{h}=0}^{t}dt_{h}\right]_{h\in[q]}\mathcal{T}\big{(}\mathcal{W}(t,t_{q})\mathcal{S}\cdots\mathcal{S}\mathcal{W}(t_{2},t_{1})\mathcal{S}\mathcal{W}(t_{1},0)\big{)}, \tag{7}\] where \(\mathcal{G}_{q}\) (\(q\geq 1\)) describes the evolution with \(q\) quantum jumps. We refer to such an expansion as the jump expansion, and \(\mathcal{G}^{[n]}:=\sum_{q=0}^{n}\mathcal{G}_{q}\) as the \(n\)-th order truncation of \(\mathcal{G}\) under the jump expansion. For quantum error correction, we care about the expansion of the channel \(\mathcal{G}\) in terms of the small noise parameter \(p:=\gamma_{j}t\) given an evolution time \(t\) (here, we have assumed equal noise rates for simplicity): \[\mathcal{G}(t,0)=\sum_{q^{\prime}}p^{q^{\prime}}\mathcal{G}_{q^{\prime}}^{\prime}(t,0). \tag{8}\] Such an expansion can be obtained by Dyson expanding \(\mathcal{G}\) with respect to the full Lindblad superoperators of the noise (\(\sum_{j}\mathcal{D}[\sqrt{\gamma_{j}}J_{j}]\)) in Eq. (5), instead of their quantum-jump components \(\mathcal{S}\). 
We refer to such an expansion of \(\mathcal{G}\) as its noise expansion, and \(\mathcal{G}^{\prime[n]}:=\sum_{q^{\prime}=0}^{n}\mathcal{G}_{q}^{\prime}\) as the \(n\)-th order truncation of \(\mathcal{G}\) under the noise expansion. Observe that \(\mathcal{G}^{[n]}=\mathcal{G}^{\prime[n]}+O(p^{n+1})\), i.e. the \(n\)-th order truncation of a channel \(\mathcal{G}\) in terms of its jump expansion or its noise expansion is equivalent up to \(n\)-order of \(p\). Since \(\mathcal{G}^{[n]}\) is easier to evaluate for the purpose of this work, we will mostly consider the jump expansion of channels. A nice property of a channel's jump expansion is that it is automatically in a Kraus form: \[\mathcal{G}(t,0)=\sum_{q=0}\left[\int_{t_{h}=0}^{t}dt_{h}\right]_ {h\in[q]}\left[\sum_{j_{h}=1}^{N}\right]_{h\in[q]} \tag{9}\] \[\qquad\qquad G_{q}(\{t_{h}\},\{j_{h}\})\bullet G_{q}^{\dagger}(\{ t_{h}\},\{j_{h}\}),\] where \[G_{q}(\{t_{h}\},\{j_{h}\}):=\mathcal{T}W(T,t_{q})E_{j_{q}}\cdots E_{j_{2}}W(t_ {2},t_{1})E_{j_{1}}W(t,0). \tag{10}\] One can, therefore, view \(G_{q}(\{t_{h}\},\{j_{h}\})\) as a Kraus operator of the channel with discrete indices \(q,\{j_{h}\}\) and continuous indices \(\{t_{h}\}\). ### General setup As shown in Fig. 1(a), we consider gadgets for some bosonic code consisting of a sequence of ancilla-assisted operations (AAOs). For each AAO, a \(d_{A}\geq 2\) ancilla \(A\) is initialized in some initial state \(\ket{i}_{A}\), interacts with the bosonic mode \(C\) for some time \(T\), and is finally measured in some basis \(\mathcal{B}_{A}\). We consider continuous Markovian interactions between \(A\) and \(C\), which is described by the Lindblad master equation in Eq. (5) with a Hamiltonian \(H_{AC}(t)\) that acts on the joint system, a set of bosonic jump operators \(\{\sqrt{\kappa_{i}}F_{i}\}\), and a set of ancilla jump operators \(\{\sqrt{\gamma_{j}}J_{j}\}\). We allow adaptively choosing the later AAOs using the earlier ancilla measurement outcomes. Note that a direct operation on the bosonic mode can be viewed as a trivial AAO with the ancilla being initialized in some state \(\ket{i}\), idling for the evolution time, and measured in \(\ket{i}\) without any errors. Such an AAOs-composed bosonic gadget forms a channel \(\mathcal{N}\) on the bosonic mode, which can be decomposed as \(\mathcal{N}=\mathcal{N}_{n}\circ\mathcal{N}_{0}\), where \(\mathcal{N}_{0}\) is the target bosonic operation and \(\mathcal{N}_{n}=\sum_{k}N_{k}\bullet N_{k}^{\dagger}\) is a data noise channel represented by a set of noise Kraus operators \(\{N_{k}\}\). Fault-tolerance essentially concerns the relation between the physical channels \(\{\mathcal{G}\}\) and the resultant bosonic channel \(\mathcal{N}\). More specifically, we need to study how the noise in \(\mathcal{G}\), which we refer to as faults, propagates to the data errors \(\{N_{k}\}\) in \(\mathcal{N}_{n}\). We will need to quantify the size of the faults and the data errors and design AAOs such that small Figure 1: (a) Illustration of a level-1 bosonic gadget consisting of a sequence of ancilla-assisted operations. For each AAO, the ancilla is initialized to some state \(\ket{i}\) and measured in some basis \(\mathcal{B}_{A}\). The later AAOs can be performed adaptively using the earlier ancilla measurement outcomes. (b) Illustration of the AAO, GPI and PI operations. As a special case of AAO, the GPI operations with bosonic QEC can handle bosonic errors induced by relevant ancilla faults. 
The previous PI operations [30] can be regarded as a special GPI without bosonic QEC, which are designed to avoid any introduction of bosonic errors due to relevant ancilla faults. faults only propagate to small data errors. We model a physical channel \(\mathcal{G}\) that contains no more than \(n\) faults by its \(n\)-th order truncation \(\mathcal{G}^{[n]}\). To quantify the size of the data bosonic errors, we need to specify a bosonic error-correcting code and an error basis. In this work, we will primarily focus on the cat codes [14; 35; 45] and a basis we term the loss-dephasing basis, which is closely related to photon loss and bosonic dephasing errors. ### Loss-dephasing basis and error metrics Typical bosonic errors include excitation loss (\(a\)), heating (\(a^{\dagger}\)), and bosonic dephasing (\(a^{\dagger}a\)). For such errors, a natural basis to use is \(\{e^{-i\theta a^{\dagger}a}a^{k},a^{\dagger k^{\prime}}e^{i\theta^{\prime}a^{\dagger}a}\}_{k,k^{\prime}\in\mathbb{N};\theta,\theta^{\prime}\in(-\pi,\pi]}\), which is a complete basis for all single-mode bosonic operators. Neglecting heating errors \(a^{\dagger}\), which are typically small [29; 31], the relevant errors are then spanned by \(\{E_{k}(\theta):=e^{-i\theta a^{\dagger}a}a^{k}\}_{k\in\mathbb{N},\theta\in(-\pi,\pi]}\), which we refer to as the loss-dephasing basis. A four-legged cat code can only correct errors \(E_{k}(\theta)\) with small \(k\) and \(|\theta|\) (see Sec. II.1.1). This motivates us to quantify the size of \(E_{k}(\theta)\) as \(|E_{k}(\theta)|_{w}:=(k,|\theta|)\in\mathbb{N}\times[0,\pi]\). We compare the size of two errors by introducing a partial order with respect to the proper cone \(R_{+}^{2}:=[0,\infty)\times[0,\infty)\), i.e. \(|E_{k^{\prime}}(\theta^{\prime})|_{w}\geq|E_{k}(\theta)|_{w}\leftrightarrow(k^{\prime}-k,|\theta^{\prime}|-|\theta|)\in R_{+}^{2}\). We say that a bosonic noise channel \(\mathcal{N}_{n}\) contains at most \((k,\theta)\) errors if all its Kraus operators have size at most \((k,\theta)\), and a state \(|\phi^{\prime}\rangle\) is at most \((k,\theta)\)-far from a target state \(|\phi\rangle\) if there exists an \(\mathcal{N}_{n}\) containing at most \((k,\theta)\) errors such that \(|\phi^{\prime}\rangle\) is in the support of \(\mathcal{N}_{n}(|\phi\rangle\,\langle\phi|)\). With this quantification of error sizes, for \(\alpha\gg 1\), the four-legged cat can correct errors \(|E_{k}(\theta)|_{w}\leq(1,\pi/4)\) [36]. Fig. 2(a) illustrates the two-dimensional error space indexed by the number of photon losses and the dephasing angle. ## III Fault-tolerance In this section, we formalize a fault-tolerance framework for the hybrid system with bosonic modes and discrete-variable ancillae in the context of concatenated codes [34]. Since the single-mode cat code alone cannot arbitrarily suppress logical errors, one needs to concatenate it with an outer qubit code for fault-tolerant quantum computing. That is, we will have three levels of gadgets. The level-0 gadgets are the physical operations; the level-1 gadgets are encoded in the cat code, and the level-2 gadgets are encoded in the outer qubit code. A quantum circuit is fault-tolerantly executed using level-2 gadgets, and each level-2 gadget is executed using a level-1 circuit with multiple level-1 gadgets. 
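Before stating the gadget properties, the error-size bookkeeping introduced above (the partial order on \(|E_{k}(\theta)|_{w}=(k,|\theta|)\) and the propagation bound \(f(m)\)) can be made concrete with a short sketch; the choice of \(\theta_{0}\) and the correctable threshold below are illustrative assumptions only.

```python
# Minimal sketch of the error-size partial order and the propagation bound
# f(m) = (m, m*theta_0 mod pi). theta_0 and the threshold are illustrative values.
import math

def size_leq(e1, e2):
    """(k1, th1) <= (k2, th2) iff k2 - k1 >= 0 and |th2| - |th1| >= 0 (cone R_+^2)."""
    k1, th1 = e1
    k2, th2 = e2
    return (k2 - k1) >= 0 and (abs(th2) - abs(th1)) >= 0

def f(m, theta0=math.pi / 8):
    """Maximal data-error size that m faults are allowed to propagate to."""
    return (m, (m * theta0) % math.pi)

# The four-legged cat (large alpha) corrects errors of size at most (1, pi/4):
correctable = (1, math.pi / 4)
print(size_leq(f(1), correctable))   # True: a single fault stays within the correctable set
print(size_leq(f(3), correctable))   # False: three faults may exceed it
# The order is only partial: (2, pi/16) and (1, pi/4) are incomparable.
print(size_leq((2, math.pi / 16), correctable), size_leq(correctable, (2, math.pi / 16)))
```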
In order for each level-2 gadget (or equivalent, a level-1 circuit) to be executed with a failure rate \(O(p^{t+1})\), which suppresses the physical error rate \(p\) to certain order \(t+1\), the level-1 gadgets suffice to satisfy the following properties: First, there exists a function \(f:\mathbb{N}\rightarrow\mathbb{N}\times[0,\pi]\) that satisfies: 1. \(f(m_{1})\leq f(m_{2})\leftrightarrow m_{1}\leq m_{2}\) if \(m_{1},m_{2}\leq t\). 2. \(f(m_{1}+m_{2})=f(m_{1})+f(m_{2})\) if \(m_{1}+m_{2}\leq t\). Roughly speaking, \(f(m)\) constraints the maximal size of data errors that \(a\) faults during a protocol can propagate to. For instance, for a bosonic code that can correct phase rotations smaller than \(\theta_{\max}\), we will choose \(f(m)=(m,m\theta_{0}\mod\pi)\) for some \(\theta_{0}\in[0,\theta_{\max}/t]\), which constraints that \(m\) faults can propagate to at most \(m\) photon losses and phase rotations of size at most \(m\theta_{0}\). We illustrate such an error propagation constrained by \(f\) in Fig. 2(b). Given \(f\) and \(t\), we then define \(t\)-FT fault-tolerant gadgets, including gate, error correction, state preparation, and measurement, for the hybrid system by generalizing the definitions in Ref. [34] for qubits. We remark that, the following FT definitions are related to the choice of the function \(f\). **Definition 1** (\(t\)-FT gate).: _A gate is \(t\)-FT if it satisfies: For an input codeword that has an error of size \((k,\theta)\), if at most \(m\) faults occur during the gate with \((k,\theta)+f(m)\leq f(t)\), the output state is at most \((k,\theta)+f(m)\) far from the codespace; Furthermore, ideally decoding the output state gives the same codeword as first ideally decoding the input state and then applying the ideal gate._ Note that this condition for the gate corresponds to the combination of Property 2 and Property 3 of Ref. [34]. **Definition 2** (\(t\)-FT Qec).: _A QEC gadget is \(t\)-FT if it satisfies:_ 1. _For an input codeword with an error of size_ \((k,\theta)\)_, if at most_ \(m\) _faults occur during the protocol with_ \((k,\theta)+f(m)\leq f(t)\)_, ideally decoding the output state gives the same codeword as ideally decoding the input state._ Figure 2: Illustration of bosonic error decomposition and the error propagation function \(f(m)\). (a) The bosonic loss-dephasing error can be expanded by the basis \(E_{k}(\theta)\). By defining a partial order of the size \(E_{k}(\theta)\), the bosonic error \(E_{k}(\theta)\) with at most \((k,\theta_{m})\) error can be illustrated as the green region in the plot. Here \(k=2\). (b) Suppose \(m\) faults occur during the gate implementation. To capture the propagation of faults to the final bosonic error, we introduce a function \(f(m)=(m,m\theta_{0}\mod\pi)\) as a upper bound of the induced final loss and dephasing errors. _._ 2. _For at most_ \(m\) _faults during the protocol with_ \(f(m)\leq f(t)\)_, no matter the size of the error on the input state, the output state is at most_ \(f(m)\)_-far from the code space._ Note that conditions (i) and (ii) correspond to Properties 1 and 0 of Ref. [34], respectively. 
State preparation and measurement are special cases of FT gates: **Definition 3** (\(t\)-FT state preparation).: _A state-preparation gadget is \(1\)-FT if it satisfies: If at most \(m\leq t\) faults occur during the protocol, the output state is at most \(f(m)\)-far from the target state; Furthermore, ideally decoding the output state gives the ideal target state._ **Definition 4** (\(t\)-FT measurement).: _A measurement gadget is \(t\)-FT if it satisfies: For an input codeword that has an error of size \((k,\theta)\), if at most \(m\) faults occur during the gate with \((k,\theta)+f(m)\leq f(t)\), the measurement is equivalent to applying the ideal measurement to the input codeword._ Based on the definition of the \(t\)-FT gadgets, we have the following proposition. **Proposition 1**.: _Using \(t\)-FT level-\(1\) gadgets, any level-\(1\) circuit has a failure rate \(O(p^{t+1})\), where \(p\in[0,1)\) is the physical error rate, i.e., the probability that one fault happens in the gadget._ Proof.: We follow the extended-rectangle formalism in Ref. [34]. Without loss of generality, we consider an ideal quantum circuit in Fig. 3(e). Here we take the single-qubit level-\(1\) circuit as an example. In practice, we realize this circuit using the noisy \(t\)-FT level-\(1\) gadgets shown in Fig. 3(a). To analyze the fault-tolerance property of this circuit, we draw several dashed boxes to cover the whole circuit. The dashed boxes are called extended rectangles exRec. For a quantum gate, an extended rectangle exRec (a dashed box in Fig. 3(a)) is a composition of a front EC gadget, a gate and a rear EC gadget, i.e. \(\text{exRec}=\text{EC}\circ\text{Ga}\circ\text{EC}\). We say that any operation Op is \(t\)-good if it contains no more than \(t\) faults. In what follows, we show that if all the dashed boxes in Fig. 3(a) are \(t\)-good, we can reduce the noisy circuit to the ideal circuit following the stream in Fig. 3. To this end, we introduce the ideal decoder ID (the blue triangles in Fig. 3 and 4), which performs an ideal recovery given a bosonic code. We also introduce a \((k,\theta)\)-filter \([(k,\theta)]\)F (the orange thin rectangles in Fig. 4) which performs a projection onto the space spanned by all states that can be obtained by acting on a codeword with an error no larger than \((k,\theta)\). First of all, we notice that if the last box in Fig. 3(a) is \(t\)-good, then based on the definition of \(t\)-FT QEC and measurement, we can equivalently convert the circuit in Fig. 3(a) to (b). Then, we follow the procedures in Fig. 4 to reduce the extended gadgets of quantum gates to the ideal gadgets: Denote the faults occur in the front EC gadget, the gate gadget and the rear EC gadget to be \(s,r,s^{\prime}\), respectively. Since the dashed box is \(t\)-good, we have \(s+r+s^{\prime}\leq t\). Fig. 4(a) and (b) are equivalent due to the second requirement of FT QEC in Def. 2; (b) and (c) are equivalent due to the first requirement of the FT gate in Def. 1; (c) and (d) are equivalent due to the first requirement of FT QEC in Def. 2; (d) and (e) are equivalent due to the second requirement of the FT gate in Def. 1. Then we can transform the circuit from Fig. 3(b) to (d) using the reduction in Fig. 4. Finally, we use the property of FT state preparation to reduce Fig. 3(d) to (e). The argument is similar to the ones for the extended gadgets of quantum gates in Fig. 4. In our gadget set-up in Sec. II.2, errors represented by Figure 3: Reduction of a FT level-\(1\) circuit to the ideal circuit. 
Figure 4: Reduction of the extended rectangular to an ideal gadget. quantum jump happens independently in the level-1 gadgets. Consider a level-1 circuit composed of many \(t\)-FT level-1 gadgets that can be grouped into extented rectangles (see e.g., Fig. 3(a)). If there are at most \(t\) quantum errors in each extended rectangle, we can convert it to an ideal gadget. In that case, only when more than \(t\) errors occur in the same extended rectangles at the same time can one logical error happen, which owns a probability of \(O(p^{t+1})\). In the following, we focus on constructing FT level-1 bosonic gadgets that satisfy the above definitions by integrating bosonic quantum error correction and quantum controls. More specifically, given a bosonic code \(\mathcal{C}\) that can correct loss and phase-rotation errors, e.g. the cat code, we try to design error-corrected \(\mathcal{C}\)-encoded gadgets by carefully engineering the Hamiltonian of their composing AAOs so that physical faults propagate controllably to data errors. An analogous example in the context of qubit fault-tolerance is the use of transversal gates [46], which guarantees that a single circuit fault only propagates to a single data error per code block. However, this quantum control task is more sophisticated when involving bosonic modes as we need to consider complicated continuous evolution in a larger Hilbert space. In order for a level-1 gadget to be FT, it has to tolerate both bosonic faults and ancilla faults. Tolerance against bosonic faults can be achieved relatively easily by using the error-transparency control [47], or more generally, the error-closure control [48]. Tolerance against ancilla errors is usually harder to achieve since some DV ancilla errors tend to propagate uncontrollably and a small ancilla fault could lead to a catastrophic error in the bosonic mode. Fortunately, path-independent control [49; 29; 30] has been proposed for controlling the ancilla faults propagation to the bosonic mode. However, the previously defined PI condition [30] is more stringent than what is required by fault tolerance. Thus in the next section, we will generalize the PI control, relax its requirement, and rigorously connect its generalized version to fault-tolerance analyzed in this section. ## IV Generalized path-independent operations We first review the PI control proposed in Ref. [30]. Again, we consider a Markovian interaction between a bosonic mode \(C\) and a \(d\geq 2\)-level ancilla \(A\) described by Eq. (5), where only the ancilla noises are considered, i.e. \[\frac{d\rho}{dt}=-i[H_{AC}(t),\rho]+\sum_{j}\mathcal{D}[\sqrt{\gamma_{j}}J_{j}]\rho \tag{11}\] where \(J_{j}\) are some jump operators acting only on the ancillary system. The ancilla is initialized in some initial state \(\ket{i}_{A}\), and measured in a certain basis \(\{\ket{r}_{A}\}\) after an interaction time \(T\). Let \(\mathcal{G}(T)\) denote the joint channel generated by the Lindblad master equation in Eq. (11) for a time \(T\). With a slight abuse of notation, we may neglect the subscripts \(A\) or \(C\) labeling the ancilla or the bosonic mode for states or operators without ambiguity. We denote \(\bra{\langle r|\mathcal{G}|i\rangle}:=\bra{r}\mathcal{G}(\ket{i}\bra{i}\otimes \bullet)\ket{r}\) as the (unnormalized) channel on the bosonic mode conditioned on the initial and final ancilla state \(\ket{i}\) and \(\ket{r}\)[49]. A PI gate is defined as follows. 
**Definition 5** (PI gate).: _An ancilla-assisted gate \(\mathcal{G}(T)\) is PI in an ancilla basis \(\mathcal{B}_{A}\) with an ancilla initial state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\),_ \[\bra{\langle r|\mathcal{G}(T)|i\rangle}\propto U_{ri}\bullet U_{ri}^{\dagger}, \tag{12}\] _where \(U_{ri}\) is some \(r\)-dependent unitary on the bosonic mode._ The PI condition in Eq. (12) implies that each conditional channel does not contain any errors (it is a unitary channel without information loss) propagated from the ancilla, although the unconditional channel might. In other words, no information is lost to the environment by performing such a noisy operation if the ancilla measurement outcome is accessible. See Fig. 5 for an illustration of such a PI gate. Note that the PI condition in Eq. (12) is for the joint channel, which could be hard to evaluate directly. As such, Ref. [49] provided an easy-to-evaluate algebraic condition for the Hamiltonian \(H_{AC}(t)\) and the jump operators \(\{J_{j}\}\) in order for \(\mathcal{G}\) to satisfy Eq. (12), which we present in Appendix A. The PI definition in Def. 5 considers an infinite number of ancilla-faults since it examines the full \(\mathcal{G}(T)\). In practice, when the ancilla noises are small, by correcting only a finite number of ancillary faults, we can suppress the real logical error rates to a higher order. As such, we define the following finite-order PI gate that only concerns a finite truncation of \(\mathcal{G}(T)\): **Definition 6** (Finite-order PI gate).: _An ancilla-assisted gate is \(n\)-PI in an ancilla basis \(\mathcal{B}_{A}\) with an ancilla initial state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\) and any \(k\leq n\),_ \[\bra{\langle r|\mathcal{G}^{[k]}(T)|i\rangle}\propto U_{ri}\bullet U_{ri}^{ \dagger}, \tag{13}\] _where \(U_{ri}\) is some \(r\)-dependent unitary on the bosonic mode._ In Appendix A, we present an algebraic condition for the Hamiltonian and jump operators in order for \(\mathcal{G}\) to satisfy Eq. (13). The PI condition, even with a finite order, is demanding since it requires the conditional channels to be exactly unitary channels and thus allows no error propagation at all. However, observe that if the bosonic mode is protected by some bosonic codes, fault-tolerance can still be achieved even if we allow error propagations, as long as the propagated errors are small and correctable. Based on this observation, we generalize the PI control and present a less stringent condition that, nevertheless, still complies with the idea of fault-tolerance: **Definition 7** (GPI operation).: _Given a single-mode bosonic code with a codespace projection \(P_{c}\), we say that an ancilla-assisted operation is \(n\)-th order generalized path-independent (GPI) in an ancilla basis \(\mathcal{B}_{A}\) with an initial ancilla state \(\ket{i}\) if for any \(\ket{r}\in\mathcal{B}_{A}\) and \(k\leq n\),_ \[\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\propto(\sum_{s}K_{ri}^{s}\bullet K_{ri }^{s\dagger}), \tag{14}\] _where \(\{K_{ri}^{s}\}_{s}\) satisfies the KL condition, i.e. \(P_{c}K_{ri}^{s\dagger}K_{ri}^{s^{\prime}}P_{c}\propto P_{c}\) for any \(K_{ri}^{s},K_{ri}^{s^{\prime}}\in\{K_{ri}^{s}\}_{s}\)._ Note that any conditional channel \(\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\) can be written in the form of Eq. (14), with a set of \((r,i)\)-dependent Kraus operators \(\{K_{ri}^{s}\}_{s}\). 
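The KL condition entering Def. 7 can be checked numerically for any candidate set of conditional Kraus operators. The sketch below is our own illustration (not a calculation from the text): it tests \(P_{c}K^{\dagger}K^{\prime}P_{c}\propto P_{c}\) for the four-legged cat in a truncated Fock space, with the truncation, the cat amplitude, and the two example error sets \(\{I,a,e^{i\theta a^{\dagger}a}\}\) and \(\{I,a,a^{2}\}\) all being assumptions made for illustration.

```python
import numpy as np
from math import factorial

N, alpha = 60, 2.9              # Fock truncation and cat amplitude (assumptions)
nvec = np.arange(N)
a = np.diag(np.sqrt(nvec[1:]), k=1)                  # annihilation operator
fact = np.array([float(factorial(k)) for k in nvec])

def coherent(beta):
    v = beta**nvec / np.sqrt(fact)
    return v / np.linalg.norm(v)

def four_cat(beta, sign):
    """Four-legged cat: sign=+1 gives the 0 mod 4 codeword, -1 the 2 mod 4 one."""
    v = coherent(beta) + coherent(-beta) + sign * (coherent(1j*beta) + coherent(-1j*beta))
    return v / np.linalg.norm(v)

zero_L, one_L = four_cat(alpha, +1), four_cat(alpha, -1)
P_c = np.outer(zero_L, zero_L.conj()) + np.outer(one_L, one_L.conj())

def kl_deviation(kraus):
    """max_{i,j} || P_c K_i^dag K_j P_c - lambda_ij P_c || over the Kraus set."""
    dev = 0.0
    for Ki in kraus:
        for Kj in kraus:
            M = P_c @ Ki.conj().T @ Kj @ P_c
            lam = np.trace(M) / np.trace(P_c)
            dev = max(dev, np.linalg.norm(M - lam * P_c))
    return dev

theta = 0.1                                     # small phase rotation (assumption)
R = np.diag(np.exp(1j * theta * nvec))
print(kl_deviation([np.eye(N), a, R]))          # small: loss + small rotation ~ correctable
print(kl_deviation([np.eye(N), a, a @ a]))      # order <n>: two losses are not correctable
```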
The condition that \(\{K_{ri}^{s}\}_{s}\) satisfies the KL condition implies that the conditional channel \(\bra{\bra{r}\mathcal{G}^{[k]}(T)\ket{i}}\) contains only correctable errors. The GPI condition generalizes from the PI condition in Def. 6 from the following two aspects. First, the GPI condition considers any operation (any CPTP map) to the bosonic mode as a target, while the PI condition only considers unitary gates. Second, the GPI condition allows finite propagation of ancilla faults to the bosonic mode for each conditional channel, as long as the propagated errors are correctable by the bosonic code. See Fig. 1(b) for an illustration of the relation between ancilla-assisted operations, GPI operations and PI operations. In Appendix A, we present an algebraic condition for GPI operations again by only examining the Hamiltonian and jump operators. Note that we directly present the GPI condition in the finite-order form, which is of practical relevance. ### GPI examples Here, we provide two examples of GPI operations for the four-legged cat code. #### iv.1.1 GPI SNAP gate with a three-level \(\chi\)-mismatched ancilla As an example, we consider the photon-number selective phase (SNAP) gate [37] in circuit-QED systems. In the rotating frame, a three-level ancilla with levels is dispersively coupled to a bosonic mode via the Hamiltonian \[H_{0}=-(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e}\bra{e})\otimes a^{\dagger}a, \tag{15}\] and the ancilla is frequency-selectively driven between \(\ket{g}\) and \(\ket{f}\) states: \[H_{c}(t)=\Omega\sum_{n=0}^{N}e^{-i(n\chi_{f}t-\phi_{n})}\ket{f}\bra{g}+h.c., \tag{16}\] where \(\vec{\phi}:=(\phi_{0},\phi_{1},\cdots,\phi_{N})\) is some phase vector that we can choose. We consider ancilla jump operators \(\{J_{1}=\sqrt{\gamma}\ket{e}\bra{f},J_{2}=\sqrt{\gamma}\ket{g}\bra{e},J_{3}= \sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\}\), where \(J_{1}\) describes the ancilla decays from \(\ket{f}\) to \(\ket{e}\), \(J_{2}\) the ancilla decay from \(\ket{e}\) to \(\ket{g}\), and \(J_{3}\) an ancilla dephasing with arbitrary phases \(\Delta_{e},\Delta_{f}\in\mathbb{C}\). We will use this error model throughout the paper whenever using a three-level ancilla. In the interaction picture associated with \(H_{0}\), the system Hamiltonian reads \[\tilde{H}=\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right], \tag{17}\] where \(S(\vec{\phi}):=\sum_{n=1}^{N}e^{i\phi_{n}}\ket{n}\bra{n}\) applies a number dependent phase shift \(\vec{\phi}\) to the bosonic mode. Note that we have performed the rotating wave approximation by assuming \(\Omega\ll\chi_{f}\). Denote \(\Delta\chi:=\chi_{f}-\chi_{e}\) as the \(\chi\) mismatch. The ancilla jump operators transform to \(\tilde{J}_{1}(t)=\sqrt{\gamma}|e\rangle\langle f|\otimes e^{-i\Delta\chi ta^{ \dagger}a},\tilde{J}_{2}(t)=\sqrt{\gamma}|g\rangle\langle e|\otimes e^{-i\chi_ {e}ta^{\dagger}a}\), and \(\tilde{J}_{3}=J_{3}\). We initialize the ancilla in \(\ket{g}\) and let the system evolve for a time \(T=\pi/2\Omega\), and measure the ancilla in the \(\{\ket{g},\ket{e},\ket{f}\}\) basis. In the absence of errors, the ancilla will end up in \(\ket{f}\) while the central system undergoes the target number-dependent phase shifts \(S(\vec{\phi})\), i.e. \(\bra{f}e^{-i\tilde{H}_{e}T}|g\rangle=S(\vec{\phi})\). 
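The noise-free statement \(\bra{f}e^{-i\tilde{H}T}\ket{g}=S(\vec{\phi})\) (up to a global phase) can be verified directly on a truncated Fock space. The sketch below is an illustrative check under assumed values of the truncation, \(\Omega\), and the phase vector \(\vec{\phi}\); it does not simulate the noisy dynamics.

```python
import numpy as np
from scipy.linalg import expm

N, Omega = 20, 1.0                      # Fock truncation and drive strength (assumptions)
phis = np.random.default_rng(0).uniform(0, 2*np.pi, N)   # arbitrary SNAP phases

S = np.diag(np.exp(1j * phis))          # S(phi) = sum_n e^{i phi_n} |n><n|

# Three-level ancilla {|g>, |e>, |f>} with indices 0, 1, 2; only g and f are driven.
g, e, f = np.eye(3)
fg = np.outer(f, g)                     # |f><g|

# Interaction-picture Hamiltonian  H = Omega (|f><g| (x) S + h.c.), cf. Eq. (17)
H = Omega * (np.kron(fg, S) + np.kron(fg, S).conj().T)

T = np.pi / (2 * Omega)
U = expm(-1j * H * T)

# Conditional map <f| U |g> on the bosonic mode
U_fg = U.reshape(3, N, 3, N)[2, :, 0, :]

# Equals S(phi) up to a global phase (here -i)
phase = U_fg[0, 0] / S[0, 0]
print(np.allclose(U_fg, phase * S, atol=1e-8))   # True
print(phase)                                      # approx -1j
```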
With ancilla errors, we can explicitly write down the truncated conditional channels (in the interaction picture) up to the first order: \[\bra{\bra{g}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto\mathcal{I},\] \[\bra{\bra{f}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto S(\vec{\phi}) \bullet S^{\dagger}(\vec{\phi}),\] \[\bra{\bra{e}\tilde{\mathcal{G}}^{[1]}(T)\ket{g}}\propto\int_{t=0}^ {T}dt\ e^{-i\Delta\chi ta^{\dagger}a}S(\vec{\phi})\bullet S^{\dagger}(\vec{ \phi})e^{i\Delta\chi ta^{\dagger}a}. \tag{18}\] If there is no \(\chi\)-mismatch, i.e. \(\Delta\chi=0\), this gate is strictly a 1-PI gate (see Eq. (13)); If there is a finite \(\chi\)-mismatch, the gate is no longer PI. Nevertheless, for a bosonic code that can correct phase rotations in the range \([-\theta_{m}/2,\theta_{m}/2]\) (e.g. \(\theta_{m}=\pi/2\) for the four-legged cat), the gate is still a 1-GPI gate if \(\Delta\chi T\leq\theta_{m}\) (see Eq. (14)). Figure 5: Illustration of a PI gate. Given an ancilla initial state \(\ket{i}\) and a measurement basis \(\mathcal{B}_{A}\), the bosonic mode undergoes a \(r\)-dependent unitary \(U_{ri}\) for any ancilla measurement outcome \(\ket{r}\in\mathcal{B}_{A}\), independent of the different system paths (see e.g. the green and orange curves, where an ancilla relaxation happens for the green curve). In Appendix A, we show that one can verify the GPI property of this SNAP gate more easily without calculating the conditional channels by checking a set of algebraic conditions for the Hamiltonian and jump operators. Also, in Appendix C, we present another GPI SNAP scheme using a qutrit and an extra flag qubit, which can tolerate even larger \(\chi\)-mismatch \(\Delta\chi T\). #### iv.2.2 GPI parity measurement with a three-level \(\chi\)-mismatched ancilla As another example of GPI operations, we consider the parity measurement for correcting photon loss errors [38] using a three-level \(\chi\)-mismatched ancilla. The system Hamiltonian (in the rotating frame) is \(H_{0}=-(\chi_{e}|e\rangle\langle e|+\chi_{f}|f\rangle\langle f|)\otimes a^{ \dagger}a\) (without ancilla drives). We denote \(|\pm\rangle\) as \((|g\rangle\pm|f\rangle)/\sqrt{2}\). The ancilla is initialized in \(|+\rangle\) and measured in the basis \(\{\ket{+},\ket{-},\ket{e}\}\). In the absence of ancilla errors, the operation performs a projection onto the even (odd) subspace of the bosonic mode conditioned on the ancilla measured in \(\ket{+}\) (\(\ket{-}\)): \[\begin{split}\langle\langle+|\mathcal{G}^{[0]}|+\rangle\rangle &=P_{+}\bullet P_{+},\\ \langle\langle-|\mathcal{G}^{[0]}|+\rangle\rangle& =P_{-}\bullet P_{-},\end{split} \tag{19}\] where \(P_{\pm}:=(I\pm e^{-i\pi a^{\dagger}a})/2\) is the projection on the even/odd parity subspace of the bosonic mode. In the presence of ancilla errors \(\{J_{1}=\sqrt{\gamma}\ket{e}\bra{f}\), \(J_{2}=\sqrt{\gamma}\ket{g}\bra{e}\), \(J_{3}=\sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\), we move to the interaction picture associated with \(H_{0}\). Now, the system Hamiltonian is \(0\) and the ancilla jump operators read \(\tilde{J}_{1}(t)=\sqrt{\gamma}|e\rangle\langle f|\otimes e^{-i\Delta\chi t^{ \dagger}a},\tilde{J}_{2}(t)=\sqrt{\gamma}|g\rangle\langle e|\otimes e^{-i\chi_ {e}t^{\dagger}a},\) and \(\tilde{J}_{3}=J_{3}=\sqrt{\gamma}\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\), same to those in the previous SNAP example. Here, without loss of generality, we set \(\Delta_{f}=-1\). We can calculate the the noise expansion of the joint channel up to the first order (see Eq. 
(9)): \[\begin{split}\tilde{\mathcal{G}}^{[1]}(T)&=W(T,0) \bullet W^{\dagger}(T,0)+\gamma\int_{t=0}^{T}G_{1}(t,1)\bullet G_{1}^{\dagger} (t,1)\\ &+\gamma\int_{t=0}^{T}G_{1}(t,3)\bullet G_{1}^{\dagger}(t,3),\end{split} \tag{20}\] where \(W(t_{2},t_{1}):=\exp\left[-iH_{\text{eff}}(t_{2}-t_{1})\right]\) with \(H_{\text{eff}}:=-\frac{i}{2}\sum_{j=1}^{3}\tilde{J}_{j}^{\dagger}\tilde{J}_{j }=-\frac{i}{2}\gamma[(1+\left|\Delta_{e}\right|^{2})\ket{e}\bra{e}+2\ket{f} \bra{f}]\), and \(G_{1}(t,j):=W(T,t)\tilde{J}_{j}(t)W(t,0)\). Note that we have dropped the term associated with the first-order quantum jump with \(\tilde{J}_{2}\), which is zero when the ancilla starts from \(\ket{+}\). Going back to the lab frame, the truncated channel is \(\mathcal{G}^{[1]}(T)=\tilde{\mathcal{G}}^{[1]}(T)\circ[U_{0}(T)\bullet U_{0}^ {\dagger}(T)]\), where \(U_{0}(T):=e^{-iH_{0}T}\). Then we can calculate the truncated conditional channels: \[\begin{split}\langle\langle+|\mathcal{G}^{[1]}|+\rangle\rangle& =[(1-\frac{p}{2})P_{+}+\frac{p}{2}P_{-}]\bullet[(1-\frac{p}{2})P _{+}+\frac{p}{2}P_{-}],\\ &+pP_{-}\bullet P_{-}+O(p^{2})\\ \langle-|\mathcal{G}^{[1]}|+\rangle\rangle&=[(1-\frac{p}{2 })P_{-}+\frac{p}{2}P_{+}]\bullet[(1-\frac{p}{2})P_{-}+\frac{p}{2}P_{+}],\\ &+pP_{+}\bullet P_{+}+O(p^{2})\\ \langle\langle e|\mathcal{G}^{[1]}|+\rangle\rangle&= \frac{p}{2T}\int_{t=0}^{T}dte^{-i(\Delta\chi t+\pi)a^{\dagger}a}\bullet e^{i( \Delta\chi t+\pi)a^{\dagger}a}\\ &+O(p^{2}),\end{split} \tag{21}\] where \(p:=\gamma T\). For a four-legged cat with \(\alpha\gg 1\), Eq. (21) satisfies the GPI condition as long as \(\Delta\chi T<\pi/2\). Note that the first two terms in Eq. (21) imply that one might obtain wrong parity measurement outcomes with a probability \(O(p)\) if the ancilla is measured in \(|\pm\rangle\). Such effective measurement errors can be suppressed to the second order by repeating the above parity measurement three times and performing majority voting, which will be discussed in the next section when we rigorously consider fault-tolerance. ## V Connection between GPI and Fault-tolerance In this section, we establish the connection between GPI quantum control and fault-tolerance defined in Sec. III. Let the bosonic mode be encoded in some bosonic code with a code projection \(P_{c}\). **Proposition 2**.: _Each AAO contained in a \(t\)-FT level-\(1\) gadget with an ancilla initial state \(\ket{i}\) and an ancilla measurement basis \(\mathcal{B}_{A}\) has to be \(t\)-GPI with respect to \(\ket{i}\), \(\mathcal{B}_{A}\), and the code projection \(P_{c}\)._ Proof.: Any \(t\)-FT gadget requires that if any \(m\leq t\) faults occur during the protocol, the output is guaranteed to be correct. However, if one AAO is not \(t\)-GPI, there exists an ancilla measurement outcome \(r\), conditioned on which the bosonic channel (see Eq. (14)) contains non-correctable errors. As a result, the final output can no longer be guaranteed to be correct. Conversely, we can combine pieces of \(t\)-GPI operations to get a \(t^{\prime}\leq t\)-FT gadgets, as shown in Fig. 1. In order to make \(t^{\prime}=t\), there are extra requirements for the AAOs, which are typically easier to satisfy than GPI. Instead of making rigorous statements about these requirements for generic gadgets, we will make case studies when constructing concrete FT gadgets. 
Nevertheless, we comment on some high-level ideas used to design the extra ingredients that can be combined with GPI to achieve fault tolerance here: (i) Operations are error transparent/closure against bosonic errors (see Sec. B); (ii) The propagation from ancilla faults to bosonic errors is linear; (iii) There exists at least one ancilla state \(|r\rangle\) such that the ideal conditional bosonic channel \(\langle\langle r|\mathcal{G}^{[0]}|i\rangle\rangle\) gives the target operation. As the first example, we construct 1-FT Z-axis rotation \(Z(\theta)\) for the four-legged cat using the GPI SNAP gate presented in Sec. IV.1.1. To implement a \(Z(\theta)\) gate, we choose a GPI SNAP gate with \(\Delta\chi T<\pi/2\) and \[S(\vec{\theta})=P_{0}+P_{3}+e^{i\theta}(P_{2}+P_{1}), \tag{22}\] where \(P_{j}:=\sum_{i=0}|4i+j\rangle\,\langle 4i+j|\). We consider the same ancilla jump errors as those presented in Sec. IV.1.1. In addition, we consider a single jump operator \(a\) representing a single photon loss for the bosonic mode. Then we implement the 1-FT \(Z(\theta)\) gate based on Algorithm 1 below. The 3-level ancilla basis is denoted by \(|g\rangle\),\(|e\rangle\) and \(|f\rangle\) according to Eq. (15). ``` 1\(o\gets e\).//\(o\)recordstheancillameasurementoutcome 2while\(o\neq f\)do 3 Prepare the anilla in the \(|g\rangle\) state, apply the GPI SNAP gate with \(S(\vec{\theta})\) in Eq. (22) for a time \(T=\pi/2\Omega\), and measure the ancilla in the \(|g\rangle,|e\rangle,|f\rangle\) basis with an outcome \(o\in\{g,e,f\}\). 4if\(o=e\)then 5 Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode. 6 ``` **Algorithm 1**1-FT \(Z(\theta)\) gate Now, we verify that the above protocol satisfies the definition of a 1-FT gate in Def. 1. Here, we choose \(f(m)=(m,m\Delta\chi T/2)\) with \(\Delta\chi T/2<\pi/4\). Suppose the input error is of size \((k,\theta_{0})\) and there are \(m\) faults during the protocol. There are two cases where \((k,\theta_{0})+f(m)\leq f(1)\). First, \(m=0\) and \((k,\theta_{0})\leq(1,\Delta\chi T/2)\). Obviously, the gate is error-transparent to the phase rotation \(e^{-i\theta a^{\dagger}a}\), i.e. it simply commutes through the gate and remains at the output, since it commutes with the system Hamiltonian (see Eq. (15) and (16)). Moreover, as we shown in Appendix B.1, the gate is also error-transparent to a single photon loss \(a\) when using the form of \(S(\vec{\phi})\) in Eq. (22). Therefore, the input \((k\leq 1,\theta\leq\Delta\chi T/2)\) error simply remains at the output and stays correctable. Second, \(m=1\) and \((k,\theta)=(0,0)\). In this case, either an ancilla dephasing, or an ancilla decay, or a single photon loss occurs during the protocol. A single ancilla dephasing might cause the ancilla ending in \(|g\rangle\) instead of \(|f\rangle\) but does not add any error to the bosonic mode; A single ancilla decay from \(|f\rangle\) to \(|e\rangle\) only causes a correctable phase rotation with an angle \(|\delta\theta|\leq\Delta\chi T/2<\pi/4\)[50]; A single-photon loss simply remains at the output and stays correctable. As the second example, we construct a 1-FT QEC protocol for correcting a single-photon loss. Note that we will present a full EC gadget correcting both photon loss and dephasing errors in the next section. The protocol utilizes the 1-GPI parity measurement presented in Sec. IV.1.2, with a \(\chi\) mismatch \(\Delta\chi T<\pi/2\). 
```
1  \(o_{i}\gets e\) for \(i\in\{1,2,3\}\)  // \(\{o_{i}\}_{i\in[3]}\) record three consecutive parity measurement outcomes
2  for \(i\gets 1\) to \(3\) do
3      while \(o_{i}=e\) do
4          Prepare the ancilla in the \(|+\rangle\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(\{|+\rangle,|-\rangle,|e\rangle\}\) basis with a measurement outcome \(o_{i}\).
5          if \(o_{i}=e\) then
6              Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode.
```
**Algorithm 2** 1-FT photon-loss correction

Now, we verify that the protocol in Alg. 2 satisfies the definition of a 1-FT QEC in Def. 2. Similar to the previous \(Z(\theta)\) gate example, we choose \(f(m)=(m,m\Delta\chi T/2)\). Assume there is an input error of size \((k,0)\) and \(m\) faults occur during the protocol. Note that since we are only correcting single photon losses for now, we assume the input has no dephasing errors. To verify condition (i) in Def. 2, we consider either \(k=1\), \(m=0\) or \(k=0\), \(m=1\). In the former case, the single photon loss can be perfectly corrected and the output has no error; in the latter case, we consider either an ancilla dephasing, an ancilla decay, or a single photon loss. An ancilla dephasing may flip a single parity measurement outcome but does not affect the final majority voting; a single ancilla decay only causes a phase rotation with amplitude \(\leq\Delta\chi T/2<\pi/4\), which is a correctable error; a single photon loss during the protocol either gets corrected or goes undetected but remains as a correctable single photon loss at the output. For condition (ii) in Def. 2, one simply observes that a single photon loss error at the input can be detected and then corrected (although a logical error may happen when combined with another photon loss during the protocol), while a single photon loss or an ancilla decay can cause at most an \(f(1)=(1,\Delta\chi T/2)\) error that can go undetected and remains at the output.

## VI Fault-tolerant operations of four-legged cat code

In this section, we focus on the four-legged cat and construct universal 1-FT level-1 gadgets that can correct a single-photon loss and any single ancilla fault, using GPI operations. The universal operation set we consider is \[\mathcal{S}=\{\text{EC},Z(\theta),X(\phi),XX(\delta),\mathcal{P}_{|+\rangle},\mathcal{M}_{Z},\mathcal{M}_{X}\}, \tag{23}\] including error correction, \(Z\)-axis rotation, \(X\)-axis rotation, \(XX\) rotation (\(\exp(-i\delta XX/2)\)), state preparation in the \(X\) basis, measurement in the \(Z\) basis, and measurement in the \(X\) basis. Essential elements for our construction are the GPI SNAP gate and the GPI parity measurement described in Sec. IV.1.1 and Sec. IV.1.2, respectively. Recall that both of these operations use a three-level ancilla, which is dispersively coupled to the bosonic mode via \(-(\chi_{e}\ket{e}\bra{e}+\chi_{f}\ket{f}\bra{f})\otimes a^{\dagger}a\), potentially with a \(\chi\) mismatch \(\Delta\chi:=\chi_{f}-\chi_{e}\). Denote the gate time for the SNAP gates as \(T\) and that for a parity measurement as \(T_{P}\). Typically \(T\gg T_{P}\) since the driving strength \(\Omega\) for the SNAP gate (see Eq. (16)) is much smaller than \(\chi_{f}\) in order for the rotating-wave approximation to hold [29]. We choose \(f(m)=(m,m\Delta\chi T/2)\) with \(\Delta\chi T/2<\pi/8\) [51] for proving the fault-tolerance of the gadgets.
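As a brief numerical aside on the repeat-and-majority-vote pattern used in Alg. 2 (and again in the measurement gadgets below): assuming each parity readout is flipped independently with probability \(p\) (an illustrative error model, not a simulation of the full gadget), the majority vote of three readouts fails with probability \(3p^{2}(1-p)+p^{3}=O(p^{2})\), which the following sketch confirms by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

def majority_vote_error(p, trials=200_000):
    """Empirical probability that the majority of three i.i.d. noisy
    readouts (each wrong with probability p) is wrong."""
    flips = rng.random((trials, 3)) < p
    return np.mean(flips.sum(axis=1) >= 2)

for p in (0.05, 0.02, 0.01):
    est = majority_vote_error(p)
    exact = 3 * p**2 * (1 - p) + p**3          # closed form, O(p^2)
    print(f"p = {p:5.3f}: simulated {est:.5f}, exact {exact:.5f}")
```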
Unless specially noted, all the SNAP gates and parity measurement gadgets we use have a \(\chi\) mismatch \(\Delta\chi\). Similar to the previous sections, we consider \(\{a,\ket{e}\bra{f},\ket{g}\bra{e},\sum_{s\in\{e,f\}}\Delta_{s}\ket{s}\bra{s}\}\) as the errors, representing a single photon loss, an ancilla decay from \(\ket{f}\) to \(\ket{e}\), an ancilla decay from \(\ket{e}\) to \(\ket{g}\), and an ancilla dephasing, respectively. ### Z-axis rotation A 1-FT \(Z\)-axis rotation with an arbitrary angle (\(\theta\)) using GPI SNAP gate is presented in Alg. 1 in the previous section. Note that a 1-FT logical gate using strictly PI SNAP gate (with no \(\chi\) mismatch) has been experimentally demonstrated for a small binomial bosonic code [29]. Here, the main difference is that our protocol allows a finite \(\chi\) mismatch. ### X-axis rotation In the large \(\alpha\) limit, a \(X\)-axis rotation is given by \[X(\phi)\approx e^{i\phi}\ket{C_{\alpha}^{+}}\bra{C_{\alpha}^{+}}+\ket{C_{i \alpha}^{+}}, \tag{24}\] where \(\ket{C_{\beta}^{\pm}}:=c_{\beta}^{\pm}(\ket{\beta}\pm\ket{-\beta})\), with \(c_{\beta}^{\pm}\) being normalization constants. We implement \(X(\phi)\) by adding a phase \(\phi\) to the subspace spanned by the two coherent states \(\ket{\alpha}\) and \(\ket{-\alpha}\). As illustrated in Fig. 6(a), we first displace the cavity by \(\alpha\) and apply a phase shift to the vacuum \(S(\vec{\phi})=e^{i\phi}\ket{0}\bra{0}+I-\ket{0}\bra{0}\) using the SNAP gate (see Sec. IV.1.1). Then we displace the cavity by \(-2\alpha\) and apply another \(S\). Finally, we displace the cavity by \(\alpha\) back to the codespace. The joint operation is: \[\begin{split} U_{X}&=D(\alpha)S(\vec{\phi})D(-2 \alpha)S(\vec{\phi})D(\alpha)\\ &=[D(\alpha)S(\vec{\phi})D(\alpha)^{\dagger}][D(-\alpha)S(\vec{ \phi})D(\alpha)^{\dagger}]\\ &\approx e^{i\theta}P_{\pm\alpha}+I-P_{\pm\alpha},\end{split} \tag{25}\] where \(P_{\pm\alpha}:=\ket{\alpha}\bra{\alpha}+\ket{-\alpha}\bra{-\alpha}=\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}+\ket{C_{\alpha}^{-}}\bra{C_{\alpha}^{-}}\). We now show that this gate is 1-FT if the \(\chi\)-mismatch during the SNAP gates is zero. Assume there is a \((k,\delta\theta)\) input error and \(m\) faults occur during the gate. Again, for 1-FT gate (see Def. 1), we only need to consider either \((k=0,\delta\theta=0)\), \(m=1\), or \((k\leq 1,\delta\theta\leq\Delta\chi T/2)\), \(m=0\). First, we consider a single fault occurring during \(U_{X}\). A single-photon loss simply commutes through the entire gate since the two SNAP gates \(S\) are error-transparent (see Appendix B) and \(D(\alpha)\) commutes with \(a\) up to a constant. A single-ancilla decay or dephasing during the \(S\) gate does not cause any error to the bosonic mode when assuming perfect \(\chi\) matching. Therefore, a single fault during the gate causes at most a \((1,0)<f(1)\)-error at the output, which is correctable. Second, we consider a \((k\leq 1,\delta\theta\leq\Delta\chi T/2)\) input error \(e^{i\delta\theta a^{\dagger}a}a^{k}\). We first notice that \(U_{X}e^{i\delta\theta a^{\dagger}a}a^{k}P_{c}\propto a^{k}U_{X}e^{i\delta \theta a^{\dagger}a}P_{c}\) since \(U_{X}\) is error-transparent for \(a^{k}\) (see Eq. (25)). Here, \(P_{c}:=\ket{+_{L}}\bra{+_{L}}+\ket{-_{L}}\bra{-_{L}}\approx C_{\alpha}^{+} \bra{C_{\alpha}^{+}}+\ket{C_{i\alpha}^{+}}+\ket{C_{i\alpha}^{+}}\bra{C_{i \alpha}^{+}}\) is the projector onto the code space of the four-legged cat. 
Then we only need to make sure that \(U_{X}\) is also error-transparent to dephasing \(e^{i\delta\theta a^{\dagger}a}\). Let \(E:=U_{X}e^{i\delta\theta a^{\dagger}a}U_{X}^{\dagger}\) be the effective error that \(e^{i\delta\theta a^{\dagger}a}\) propagates to after the gate. \(E\) satisfies \[\begin{split} EP_{c}&=e^{i\delta\theta a^{\dagger}a}P_{ c}+(1-e^{-i\phi})(P_{\pm\alpha}-I)e^{i\delta\theta a^{\dagger}a}\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}\\ &+(e^{i\phi}-1)P_{\pm\alpha}e^{i\delta\theta a^{\dagger}a}\ket{C_{ i\alpha}^{+}}\bra{C_{i\alpha}^{+}},\end{split} \tag{26}\] where we can see that \(U_{X}\) is not error-transparent against the dephasing due to the last two terms of Eq. (26). Fortunately, we can make it approximately error-transparent by modifying the SNAP gate: \[S(\vec{\phi})\to e^{i\phi}(P_{[s]})+I-P_{[s]}, \tag{27}\] where \(P_{[s]}:=\sum_{i=0}^{s}\ket{i}\bra{i}\) is the projection onto the \(s\)-neighborhood of vacuum. Then the gate unitary becomes \(U_{X}\to e^{i\phi}P_{\pm\alpha,s}+I-P_{\pm\alpha,s}\), where \(P_{\pm\alpha,s}:=D(\alpha)P_{[s]}D^{\dagger}(\alpha)+D(-\alpha)P_{[s]}D^{ \dagger}(-\alpha)\) is the projection onto a neighborhood of \(\ket{\alpha}\) and \(\ket{-\alpha}\). Now, the effective error for the dephasing error becomes \[\begin{split} EP_{c}&=e^{i\delta\theta a^{\dagger}a}P_{ c}+(1-e^{-i\phi})(P_{\pm\alpha,s}-I)e^{i\delta\theta a^{\dagger}a}\ket{C_{ \alpha}^{+}}\bra{C_{\alpha}^{+}}\\ &+(e^{i\phi}-1)P_{\pm\alpha,s}e^{i\delta\theta a^{\dagger}a}\ket{C_ {i\alpha}^{+}}\bra{C_{i\alpha}^{+}}.\end{split} \tag{28}\] For \(\ket{\delta\theta}\leq\Delta\chi T/2<\pi/8\), we can choose \(s=O(\ket{\alpha^{2}})\) such that the last two terms vanish in the \(\alpha\gg 1\) limit, i.e., \[\begin{split}\langle C_{ae^{i\delta\theta}}^{+}|P_{\pm\alpha,s}|C_ {ae^{i\delta\theta}}^{+}\rangle\to 1,\\ \langle C_{i\alpha e^{i\delta\theta}}^{+}|P_{\pm\alpha,s}|C_{i \alpha e^{i\delta\theta}}^{+}\rangle\to 0.\end{split} \tag{29}\] Then we have \(EP_{c}\approx e^{i\delta\theta a^{\dagger}a}P_{c}\) and the gate is error-transparent to dephasing as well. Note that 1-fault-tolerance can no longer be rigorously attained (even in the larger-\(\alpha\) limit) if using SNAP gates \(S\) with a finite \(\chi\)-mismatch. Taking the second \(S\) gate as an example, and suppose it has a \(\chi\)-mismatch \(\Delta\chi^{\prime}\), a single ancilla decay could cause a \(e^{i\delta\theta^{\prime}a^{\dagger}a}\) phase rotation with \(\ket{\delta\theta^{\prime}}\leq\Delta\chi^{\prime}T/2\) after \(S\), which propagates to \(e^{-i\delta\theta^{\prime}[a^{\dagger}a+\alpha(a+a^{\dagger})+\alpha^{2}]}\) after the later displacement. The extra displacement error after the gate is uncorrectable. Thus a single ancilla fault during the \(X\)-rotation can cause a first-order logical error with a probability \(cp\), where \(c\) is a constant depending on \(\Delta\chi^{\prime}T\). Nevertheless, if \(\Delta\chi^{\prime}T\) is small enough, the coefficient \(c\) can be made comparable or even smaller than \(p\), and we can still achieve good error suppression in practical regimes, as is shown in later numerical results in Fig. 7(a). 
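The ideal (noise-free) action of \(U_{X}\) in Eq. (25), with the modified SNAP of Eq. (27), can be checked numerically on a truncated Fock space. The sketch below is our own illustration with assumed values of \(\alpha\), the cutoff \(s\), the rotation angle \(\phi\), and the truncation; it only verifies \(U_{X}\approx e^{i\phi}P_{\pm\alpha,s}+I-P_{\pm\alpha,s}\) on the two even cat states and does not model faults.

```python
import numpy as np
from scipy.linalg import expm
from math import factorial

N, alpha, s, phi = 100, 3.0, 6, np.pi / 3   # truncation, cat size, vacuum
                                            # neighbourhood, angle (assumptions)
nvec = np.arange(N)
a = np.diag(np.sqrt(nvec[1:]), k=1)
fact = np.array([float(factorial(k)) for k in nvec])

def coherent(beta):
    v = beta**nvec / np.sqrt(fact)
    return v / np.linalg.norm(v)

def even_cat(beta):
    v = coherent(beta) + coherent(-beta)
    return v / np.linalg.norm(v)

def D(beta):                                 # displacement exp(beta a^dag - beta* a)
    return expm(beta * a.conj().T - np.conj(beta) * a)

P_s = np.diag((nvec <= s).astype(float))     # projector onto <= s photons
S_s = np.exp(1j * phi) * P_s + (np.eye(N) - P_s)   # modified SNAP, Eq. (27)

# U_X = D(alpha) S D(-2 alpha) S D(alpha), cf. Eq. (25)
U_X = D(alpha) @ S_s @ D(-2 * alpha) @ S_s @ D(alpha)

plus_real = even_cat(alpha)          # |C_alpha^+>
plus_imag = even_cat(1j * alpha)     # |C_{i alpha}^+>

print(plus_real.conj() @ U_X @ plus_real)   # approx e^{i phi}
print(plus_imag.conj() @ U_X @ plus_imag)   # approx 1
print(np.exp(1j * phi))
```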
### XX rotation For large \(\alpha\), the \(XX\) rotation reads \[\begin{split} XX(\delta)&\approx e^{i\delta}(\ket{ C_{\alpha}^{+},C_{\alpha}^{+}}\bra{C_{\alpha}^{+},C_{\alpha}^{+}}+\ket{C_{i \alpha}^{+},C_{i\alpha}^{+}}\bra{C_{i\alpha}^{+},C_{i\alpha}^{+}})\\ &\quad+(\ket{C_{\alpha}^{+},C_{i\alpha}^{+}}\bra{C_{\alpha}^{+},C_{i\alpha}^{+}}+\ket{C_{i\alpha}^{+},C_{\alpha}^{+}}\bra{C_{i\alpha}^{+},C_{ \alpha}^{+}}).\end{split} \tag{30}\] We implement \(XX(\delta)\) by adding a phase \(\delta\) to the subspace spanned by \(\ket{\pm\alpha,\pm\alpha}\) and \(\ket{\pm i\alpha,\pm i\alpha}\). As illustrated in Fig. 6(b), we interfere the two modes through a \(50:50\) beamsplitter, apply the number dependent phase shift \(S(\vec{\delta})=e^{-i\delta}\ket{0}\bra{0}+I-\ket{0}\bra{0}\) to both ports, and then interfere through another \(50:50\) beamsplitter: \[U_{XX}=\text{BS}(\frac{\pi}{2})^{\dagger}(S\otimes S)\text{BS}(\frac{\pi}{2}), \tag{31}\] where \(\text{BS}(\theta):=\exp[\frac{\theta}{2}(ab^{\dagger}-a^{\dagger}b)]\) with \(a\) and \(b\) denoting the annihilation operator of the two involved modes, respectively. To understand the effect of \(U_{XX}\), we consider a coherent-state input \(\ket{\alpha,\beta}\). The first BS interferes the two coherent states: \[\text{BS}\ket{\alpha,\beta}=\ket{(\alpha+\beta)/\sqrt{2},(\alpha-\beta)/\sqrt {2}}, \tag{32}\] We take the approximation \(S\ket{\gamma}\approx e^{-i\delta\openone[\gamma=0]}\ket{\gamma}\), where \(\openone[x]\) is the indicator function. Then the two SNAP gates in Eq. (31) add a nontrivial phase to the R.H.S. of Eq. (32) if \(\alpha=\beta\) or \(\alpha=-\beta\): \[(S\otimes S)\text{BS}\ket{\alpha,\beta}=e^{-i\delta\openone[\alpha=\beta]+ \openone[\alpha=-\beta]}\ket{\frac{\alpha+\beta}{\sqrt{2}},\frac{\alpha-\beta }{\sqrt{2}}}. \tag{33}\] Finally, the last BS restores the input coherent state potentially with an extra phase: \[U_{XX}\ket{\alpha,\beta}=e^{-i\delta\openone[\alpha=\beta]+\openone[\alpha=- \beta]}\ket{\alpha,\beta}. \tag{34}\] We remark that, when \(\alpha\) and \(\beta\) are only chosen from a set of discrete values \(\{\alpha_{i}\}_{i}\) which are well-separated in the phase space, Eq. (34) provides an exact expression of the action of \(U_{XX}\). The rigorous form of \(U_{XX}\) is given in Eq. (101) in Appendix B.3. To conclude, a two-mode coherent state accumulates a nontrivial phase if and only if the two coherent states have matched amplitudes and aligned/anti-aligned phases. Let \(P_{\pm(i)\alpha}\) be the projection onto a four-dimensional subspace spanned by \(\ket{\alpha},\ket{-\alpha},\ket{i\alpha},\ket{-i\alpha}\), we then have \[\begin{split}& P_{\pm(i)\alpha}\otimes P_{\pm(i)\alpha}U_{XX}P_{ \pm(i)\alpha}\otimes P_{\pm(i)\alpha}\\ &=e^{i\delta}(P_{\pm\alpha}\otimes P_{\pm\alpha}+P_{\pm i\alpha} \otimes P_{\pm i\alpha})\\ &\quad\quad+(P_{\pm\alpha}\otimes P_{\pm i\alpha}+P_{\pm i \alpha}\otimes P_{\pm\alpha}).\end{split} \tag{35}\] Note that Eq. (35) implies \(P_{c}^{(AB)}U_{XX}P_{c}^{(AB)}=XX(\theta)\) where \(P_{c}^{AB}=P_{c}^{(A)}\otimes P_{c}^{(B)}\) is the projector onto the collective code space of 4-legged cat on bosonic modes \(A\) and \(B\). Now, we prove this \(XX(\theta)\) gate is 1-FT according to Def. 1. We first consider the case where there is an input error \(e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}b^{k_{a}}a^{k_ {a}}\) with \(k_{a},k_{b}\leq 1\) and \(\ket{\delta\theta_{a}},\ket{\delta\theta_{b}}\leq\Delta\chi T/2<\pi/8\), but no fault during the gate. 
\(b^{k_{a}}a^{k_{a}}\) simply commutes through the gate when acting on the code space due to the error-transparency form of \(U_{XX}\) in Eq. (35). Similar to proof for the \(X\)-axis rotation in Eq. (28), one can show that \(U_{XX}\) is also approximately error-transparent to the phase rotation \(e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}\) by replacing \(S\to e^{-i\delta}(\sum_{i=0}^{s}\ket{i}\bra{i})+I-(\sum_{i=0}^{s}\ket{i}\bra{ i})\). We put the proof in Appendix B.3. As a result, the input error commutes through the gate and remains correctable. To complete the proof that the \(U_{XX}\) is 1-FT, we also need to show that for a perfect input state and any single fault during \(U_{XX}\), each of the two output modes has an error of size at most \(f(1)=(1,\Delta\chi T/2)\). As shown in Appendix B.3, a single-photon loss during the gate propagates to an error of the form \(c_{1}a+c_{2}b\), where \(c_{1},c_{2}\in\mathbb{C}\), due to the error transparency of the SNAP gates and the error closure of the BSs. By using a \(\chi\)-matched ancilla for each SNAP gate, any single ancilla fault does not propagate to any bosonic data errors. We note that similar to the \(X\)-axis rotation, the \(XX\) rotation is not strictly 1-FT if there is a finite \(\chi\)-mismatch when executing the SNAP gates, as an induced phase rotation would propagate to uncorrectable errors after the last BS. Nevertheless, as we show numerically in Fig. 7, high-fidelity \(XX\) rotation can still be realized in practical regimes even with a finite but small \(\chi\)-mismatch. ### X-basis state preparation The \(+1\)\(X\) basis eigenstate is a two-legged cat state with an even photon parity \(\ket{+_{L}}=\ket{C_{\alpha}^{+}}=c_{\alpha}^{+}(\ket{\alpha}+\ket{-\alpha})\). Observe that \(\ket{+_{L}}\propto P_{+}\ket{\alpha}\), i.e. the even-parity projection of a coherent state \(\ket{\alpha}\). Thus, we can prepare the even cat state by first preparing a coherent state \(\ket{\alpha}\), and then performing a non-destructive parity measurement to project it to an even cat state (up to a parity frame update). For 1-FT state preparation, unlike the 1-FT photon-loss correction protocol in Alg. 2, we do not need to repeat the parity measurement three times as it allows a noisy output with up to \(f(1)=(1,\Delta\chi T/2)\) error for up to a single fault during the parity measurement (see Def. 3). Concretely, we implement the following protocol: ``` 1Prepare the bosonic mode in the coherent state \(\ket{\alpha}\). 2\(o\gets e\)// records the parity measurement outcome 3while\(o=e\)do 4 Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(=\{\ket{+},\ket{-},\ket{e}\}\) basis with an measurement outcome \(o\). 5if\(o=e\)then 6 Apply a phase rotation \(e^{i\Delta\chi Ta^{\dagger}a/2}\) to the bosonic mode. 7 Apply a parity correction if \(o=-\). ``` **Algorithm 3**\(1\)-FT \(X\)-basis state preparation Note that the \(X\)-basis state preparation here allows a finite \(\chi\)-mismatch. ### Z-basis measurement The \(Z\)-basis measurement admits the form of measuring photon number modulo \(4\). In order to obtain the correct logical measurement outcome in the presence of a single-photon loss, as required by Def. 4, we insert a non-destructive parity measurement before each logical \(Z\) measurement. The full FT protocol is presented in Alg. 4. 
Note that each modulo-\(4\) photon number measurement \(o_{i,b}\) is conditioned on the parity measurement outcome \(o_{i,a}\), i.e. we distinguish the photon number between \(0\mod 4\) and \(2\mod 4\) for even parity (\(o_{i,a}=+\)) and between \(3\mod 4\) and \(1\mod 4\) for odd parity (\(o_{i,a}=-\)). To verify that the \(1\)-FT measurement condition in Def. 4 holds, one simply observes that a single photon loss before the measurement can be captured by the parity measurements, and any single fault during the measurement protocol can cause at most one measurement error on one of \(\{o_{i,b}\}_{i=1,2,3}\), and thus does not affect the majority voting. Note that the \(Z\)-basis measurement here can also allow a finite \(\chi\)-mismatch between the ancilla and the bosonic mode, as dephasing errors commute with the measurements.

```
1  for \(i\gets 1\) to \(3\) do
2      \(o_{i,a}\gets e\);
3      while \(o_{i,a}=e\) do
4          Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/\chi_{f}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,a}\).
5      \(o_{i,b}\gets e\)
6      while \(o_{i,b}=e\) do
7          if \(o_{i,a}=+\) then
8              Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/2\chi_{f}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,b}\).
9          else
10             Prepare the ancilla in the \(\ket{+}\) state, apply the dispersive coupling for a time \(T=\pi/2\chi_{f}\), apply an ancilla phase rotation \(e^{-i\frac{\pi}{2}\ket{f}\bra{f}}\), and measure the ancilla in the \(\{\ket{+},\ket{-},\ket{e}\}\) basis with a measurement outcome \(o_{i,b}\).
11 Obtain the logical measurement outcome as the majority voting of \(\{o_{i,b}\}_{i=1,2,3}\).
```
**Algorithm 4** \(1\)-FT \(Z\)-basis measurement

### X-basis measurement

The \(X\)-basis measurement amounts to distinguishing the phase of the coherent states modulo \(\pi\). We achieve this by interfering the mode \(a_{i}\) with another mode \(b_{i}\) in a coherent state \(\ket{\alpha}\) through a \(50:50\) beam splitter and measuring whether the two output modes \(a_{o},b_{o}\) have at most \(s\) photons. We obtain a logical \(-\) if both modes have more than \(s\) photons and a logical \(+\) otherwise, i.e. we implement the following POVMs:

\[\begin{split} M_{-}&=[I_{a_{o}}-\sum_{i=0}^{s}\ket{i}_{a_{o}}\bra{i}]\otimes[I_{b_{o}}-\sum_{i=0}^{s}\ket{i}_{b_{o}}\bra{i}]\\ &\approx(I_{a_{i}}-\sum_{i=0}^{s}\ket{\alpha,i}_{a_{i}}\bra{\alpha,i})(I_{a_{i}}-\sum_{i=0}^{s}\ket{-\alpha,i}_{a_{i}}\bra{-\alpha,i}),\\ M_{+}&=I-M_{-}\\ &\approx\sum_{i=0}^{s}\ket{\alpha,i}_{a_{i}}\bra{\alpha,i}+\sum_{i=0}^{s}\ket{-\alpha,i}_{a_{i}}\bra{-\alpha,i},\end{split} \tag{36}\]

where each subscript labels the mode that a state or an operator belongs to. Measuring whether one mode has at most \(s\) photons can be realized by dispersively coupling it to a qubit, driving the qubit from \(\ket{g}\) to \(\ket{e}\) conditioned on the mode having at most \(s\) photons, and measuring the qubit in the \(\ket{g},\ket{e}\) basis. In the interaction picture associated with the dispersive coupling, the Hamiltonian reads

\[\tilde{H}_{AC}=\Omega\left(\ket{e}\bra{g}\otimes P_{[s]}+h.c.\right). \tag{37}\]

Recall that \(P_{[s]}:=\sum_{i=0}^{s}\ket{i}\bra{i}\).
In the absence of errors, the \(0\)-th order conditional operations are \[\begin{split}\langle\langle e|\mathcal{G}^{[0]}(T)\ket{g}\rangle \rangle&=P_{[s]}\bullet P_{[s]}+O(p),\\ \langle\langle g|\mathcal{G}^{[0]}(T)\ket{g}\rangle\rangle& =(I-P_{[s]})\bullet(I-P_{[s]})+O(p).\end{split} \tag{38}\] A single fault will affect the measurement outcome or cause a bosonic error diagonal in the Fock basis. The former can be tolerated by repeating the above measurement three times and performing majority voting, while the latter simply commutes with the measurements and does not affect the (later) measurement outcome. To check this \(X\)-basis measurement scheme is 1-FT according to Def. 4, we also need to verify that the measurement outcome is correct for any input error \((k,\theta)\leq(1,\Delta\chi T/2)\). First, a single-photon loss does not affect the measurement outcome since \(a\) does not change the phase of any coherent states. Second, a small phase rotation rotates \(\ket{\alpha}\) to \(\ket{\alpha e^{i\theta}}\). Similar to the argument for the \(X\)-axis rotation, the \(X\)-basis measurement outcome is correct as long as the POVM \(M_{+}\) captures \(\ket{\pm\alpha e^{i\theta}}\) but not \(\ket{\pm i\alpha e^{i\theta}}\). ### Error correction To correct both loss and dephasing errors, i.e. data errors with \((k>0,\ket{\theta}>0)\), we employ a Knill-type QEC [52] using a teleportation circuit shown in Fig. 6(c). A fresh ancilla bosonic mode \(b\) is initialized in \(\ket{+}\) state and gets entangled with the data mode \(a\) via a \(XX\) rotation along with singe-mode rotations. The data mode \(a\) is then measured in the \(Z\) basis, where the measurement outcome is used to apply a feedback \(Z\) operation on the \(b\) mode. All the gadgets here are 1-FT using previous constructions. Moreover, they are error-transparent to any input error on the \(a\) mode smaller than \(f(1)=(1,\Delta\chi T/2)\). Therefore, the input data error simply commutes through all the gates and does not propagate to the \(b\) mode. Furthermore, the 1-FT \(Z\)-basis measurement is correct for an error smaller than \(f(1)\). Therefore, such an input error can be corrected by the EC gadget. To verify the 1-FT EC conditions, we need to further show that a single fault during the teleportation circuit only leads to a correctable residual error of size at most \(f(1)\) at the output of the \(b\) mode. Since we are using 1-FT gates, the output for the \(a\) or \(b\) mode (before the \(Z\) measurement) has an error at most \(f(1)\). As shown in Fig. 7(a), we numerically evaluate the average infidelity of the teleportation gadget in Fig. 6(c). In the absence of \(\chi\) mismatch (see the blue curve), we show that it has an error rate that scales as \((\kappa/\Omega)^{2}\), manifesting the fault tolerance of its composing gadgets, which cover the entire \(\mathcal{S}\) other than the \(X\)-basis measurement. There is an error floor in the low \(\kappa/\Omega\) regime, which is exponentially suppressed by \(|\alpha|^{2}\), due to the finite-size imperfection of the \(X\) rotation and the \(XX\) rotation. In the presence of a finite \(\chi\) mismatch, a rigorous second Figure 6: Illustration of the \(X\)-axis rotation (a), \(XX\) rotation (b), and teleportation-based EC (c) in the level-1 gadgets \(\mathcal{S}\) for the four-legged cat. Figure 7: (a) Average infidelities of an error-correction gadget using teleportation in Fig. 
6(c) as a function of \(\gamma/\Omega\) with perfect \(\chi\) matching (blue line) or finite \(\chi\) mismatches (orange line). Here, we use experimental parameters from Ref. [29] for the coherent interaction strengths \(\chi_{f}=2\pi\times 1\)MHz, \(\Omega=0.3\chi_{f}\), \(g_{\text{ss}}=2\chi_{f}\). We consider single-photon loss, ancilla decay from \(f\) to \(e\), ancilla decay from \(e\) to \(g\), and ancilla dephasing \(\mathcal{D}[\ket{e}\bra{e}+2\ket{f}\bra{f}]\) with rates \(\kappa\), \(\gamma_{f\to e}\), \(\gamma_{e\to g}\), and \(\gamma_{\phi}\), respectively. We assume the ancilla error rates are much larger than the cavity loss rate and set \(\gamma_{f\to e}=\gamma_{e\to g}=\gamma\), \(\gamma_{\phi}=\gamma/4\), and \(\kappa=\gamma/10\)[29]. We choose \(\alpha=2.9\), which is a sweet spot for the four-legged cat that minimizes the back action of photon loss [43]. (b) The accumulation of average infidelity and decay of mean photon number \(\langle a^{\dagger}a\rangle\) for 40 rounds of repeated parity measurements (infidelities are shown for every two rounds) followed by teleportation. We use the same coherent parameters \(\chi_{f},\Omega\) and \(g_{\text{BS}}\) as in (a), a finite \(\chi\) mismatch \(\Delta\chi=\Omega/10\), and the experimental error rates from Ref. [29]: \(\kappa=2\)KHz, \(\gamma_{f\to e}=\gamma_{e\to g}=\gamma=20\)KHz and \(\gamma_{\phi}=5\)KHz (with the same ratios between these error rates are in (a)). The teleportation pumps energy into the system and suppresses the random phase rotations caused by \(\Delta\chi\). The three Wigner plots depict the density matrix at the input, before and after the teleportation respectively. order error suppression can no longer be attained due to the induced random phase rotations during the \(X\)- and \(XX\)-rotation gates. However, sufficient error suppression can still be achieved with a finite but small \(\chi\)-mismatch in practically relevant regimes (see the orange and green curves). In practice, where photon loss is typically the predominant error source, repeated parity measurements that correct photon losses (see Alg. 2) suffice for extending the lifetime of the cats. Such a robust memory that reaches the break-even point has been experimentally demonstrated [22]. However, only parity measurements are not enough to protect the cats during long computational tasks as the mean photon number would keep decaying (the parity measurement and gates in \(\mathcal{S}\) are energy-preserving operations that commute with \(a^{\dagger}a\)) due to deterministic energy loss to the environment. We propose to solve this problem by inserting the teleportation gadget periodically in between certain rounds of parity measurements, which pumps energy into the system and restores the amplitude of the cats. Furthermore, the teleportation can suppress the accumulation of random phase rotations if, for example, there is some finite \(\chi\)-mismatch or small cavity dephasing errors \(\kappa_{\phi}\mathcal{D}[a^{\dagger}a]\). We demonstrate such effects numerically in Fig. 7(b). ## VII Concatenated QC and Hardware-Efficient Fault-Tolerance With the set of 1-FT level-1 gadgets in \(\mathcal{S}\), we can concatenate the level-1 logical qubits (four-legged cats) with a level-2 qubit code for arbitrary error suppression. We show such a concatenated architecture in Fig. 8. The basic elements for each level-1 qubit are simply a storage bosonic mode and a three-level ancilla that are dispersively coupled. 
The ancilla is used for the fault-tolerant operation of the bosonic qubit in each storage mode, including state preparation, photon-loss correction, gates, and measurements (see Sec. VI). In addition, a small number of extra bosonic modes shared by neighboring storage modes, which we refer to reservoir modes, are used to pump energy into the storage modes periodically via teleportation (see Fig. 6(c)). The level-2 QEC requires certain couplings between level-1 qubits. Importantly, we can achieve this by introducing only BS coupling between nearest-neighbor storage bosonic modes (see Fig. 8) for 2D stabilizer codes. The level-2 syndrome-extraction circuits are made of non-destructively measurement of high-weight Pauli operators, featuring a sequence of two-qubit entangling gates such as the CNOT gate. In Fig. 9(a), we show how one can get a level-1 CNOT gate using 1-FT single-mode and two-mode rotations in \(\mathcal{S}\). Although the complied circuit is long with a depth 6, we note that one can usually reduce the depth per CNOT gate when considering the full stabilizer measurement circuits. As an example, we can measure a weight-\(n\)\(X\) Pauli operator using a circuit of depth \(2n+4\) (see Fig. 9(b)). We leave the evaluation and optimization of the error rates of level-1 gates, e.g. the CNOT gate, as well as the threshold and resource overhead of concatenated codes to future work. Nevertheless, we remark that each CNOT gate (se Fig. 9(a)) uses similar gadgets as those for teleportation (see Fig. 6(c)), and each CNOT gate in a syndrome extraction depth (see Fig. 9(b)) has a similar depth as the teleportation on average, we expect the CNOT gates have a similar error rate as the teleportation shown in Fig. 7(b). Using this rough estimates, a gate error rate below \(10^{-2}\), which corresponds to the gate error threshold for the surface codes, is achievable using the current circuit-QED hardware. To sum up, our construction of \(\mathcal{S}\) in this work enables a practical, hardware-efficient architecture for fault-tolerant quantum computing, which features only one bosonic mode and one qutrit per level-1 logical qubit and only requires low-order physical interactions (dispersively coupling and beam-splitter interaction) that have been experimentally demonstrated. Furthermore, realizing high-fidelity level-1 gadgets with error rates below the threshold requirement of the level-2 codes is promising for near-term experimental demonstrations. ## VIII Discussion The fault-tolerant gadgets \(\mathcal{S}\) that we develop in Sec. VI for the four-legged cat can be applied to other rotation-symmetric codes [41], whose codewords are invariant under a \(N\)-fold rotation \(R=\exp[i(2\pi/N)a^{\dagger}a]\). Taking Figure 8: Hardware layout for concatenated 2D codes with four-legged cats. Each level-1 logical qubit (blue box) consists of a storage bosonic mode and a three-level ancilla, which are dispersively coupled. BS coupling between neighboring storage bosonic modes is required for the level-2 QEC. In addition, reservoir modes (with only one shown here as an example) shared between neighboring storage modes are used to pump energy into the system via teleportation (see Fig. 6(c)). 
\(N=2\) for example, an arbitrary code with a two-fold rotation symmetry has codewords \[\begin{split}|+_{\Theta}\rangle&\approx\frac{1}{\sqrt{\mathcal{N}_{+}}}(I+e^{i\pi a^{\dagger}a})|\Theta\rangle,\\ |-_{\Theta}\rangle&\approx\frac{1}{\sqrt{\mathcal{N}_{-}}}(e^{i\pi a^{\dagger}a/2}+e^{i3\pi a^{\dagger}a/2})|\Theta\rangle,\end{split} \tag{39}\] where \(\mathcal{N}_{\pm}\) are normalization constants, and the approximation holds when the base state \(\ket{\Theta}\) is localized in phase space, i.e. \(\bra{\Theta}e^{i\pi a^{\dagger}a/2}\ket{\Theta}\approx 0\). The fault-tolerant gadgets in \(\mathcal{S}\) can be applied to such an arbitrary rotation-symmetric code with a localized base state \(\ket{\Theta}\), except that for the \(X\)-basis state preparation in Alg. 3, we need to replace the initial state with \(\ket{\Theta}\) in the first step. In particular, the \(X\) rotation and \(XX\) rotation still work since they are based on the phase-space locality of the base state. In Tab. 1, we compare our construction of fault-tolerant gadgets for rotation-symmetric codes that can correct photon losses with those in the literature. In particular, compared to the gadgets in Ref. [41] using bosonic ancillae, our gadgets using qutrit ancillae avoid the need for nonlinear interactions between bosonic modes and for phase measurements, both of which are challenging to engineer in practice.

###### Acknowledgements.
We thank Wenlong Ma and Takahiro Tsunoda for helpful discussions. The authors acknowledge support from the ARO (W911NF-23-1-0077), ARO MURI (W911NF-21-1-0325), AFOSR MURI (FA9550-19-1-0399, FA9550-21-1-0209), NSF (OMA-1936118, ERC-1941583, OMA-2137642), NTT Research, Packard Foundation (2020-71479), and the Marshall and Arlene Bennett Family Research Program. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers. The authors are also grateful for the support of the University of Chicago Research Computing Center for assistance with the numerical simulations carried out in this paper.

## Appendix A: Algebraic conditions for PI and GPI

In this section, we provide algebraic conditions for PI gates (Def. 5, Def. 6) and GPI gates (special case of Def. 7 when the target operation is a unitary). Recall that we are considering a Markovian evolution for the joint system of ancilla \(A\) and bosonic mode \(C\), described by the Lindblad master equation in Eq. (11), with a joint Hamiltonian \(H_{AC}(t)\) and a set of ancilla errors \(\{J_{j}\}\). We first provide definitions and properties of a set of structured algebras that we will use. Let \(\mathcal{B}_{A}=\{\ket{m}_{A}\}_{m\in[d_{A}]}\) be an orthonormal basis for a \(d_{A}\)-dimensional ancilla, and \(\mathcal{B}_{C}=\{\ket{n}_{C}\}_{n\in[\infty]}\) be an orthonormal basis for an infinite-dimensional bosonic mode. Let \(\mathbf{M}_{A}\) (\(\mathbf{M}_{C}\)) be the ordinary matrix algebra for the ancilla (bosonic mode). \(\mathbf{M}_{A}\) (\(\mathbf{M}_{C}\)) is a vector space over \(\mathbb{C}\) with a basis \(\{\ket{m}_{A}\bra{n}\}_{m,n\in[d_{A}]}\) (\(\{\ket{m}_{C}\bra{n}\}_{m,n\in[\infty]}\)). In addition, \(\mathbf{M}_{A}\) (similarly for \(\mathbf{M}_{C}\)) is equipped with a multiplication operation \[\ket{a}_{A}\bra{b}\ket{c}_{A}\bra{d}=\delta_{b,c}\ket{a}_{A}\bra{d}, \tag{40}\] for any \(a,b,c,d\in[d_{A}]\).
For any algebra \(\mathbf{M}\), we denote \(\mathbf{M}=\bra{\mathbf{S}}\) for a set \(\mathbf{S}\) if any element \(a\) in \(\mathbf{M}\) can be written as \(a=\sum_{j}c_{j}\alpha_{j}\), where \(c_{j}\in\mathbb{C}\) and \(\alpha_{j}\) is a product of some elements in \(\mathbf{S}\). In other words, \(\mathbf{M}\) is generated by \(\mathbf{S}\). Let \(\mathbf{M}_{AC}=\mathbf{M}_{A}\otimes\mathbf{M}_{C}\) be the matrix algebra for the joint system. We define the reduction of an algebra on the joint system to an algebra only on the ancilla as a surjective map from \(\mathbf{M}_{AC}\) to \(\mathbf{M}_{A}\): **Definition 8** (Reduction of a joint algebra).: _Given any algebra \(\mathbf{H}\subseteq\mathbf{M}_{AC}\) on the joint system and an ancilla basis \(\mathcal{B}_{A}\) we define the reduction of \(\mathbf{a}\) on \(\mathcal{B}_{A}\) as:_ \[\begin{split}\mathbf{H}|_{\mathcal{B}_{A}}:=\langle\{\ket{m} \bra{n}\ket{m},\ket{n}\in\mathcal{B}_{A};\\ \exists h\in\mathbf{H},\bra{m}h\ket{n}\neq 0\}\rangle.\end{split} \tag{41}\] Next, we define a family of subalgebras of \(\mathbf{M}_{AC}\) that satisfy a special property: **Definition 9** (PI matrix algebra. Definition 1 of Ref. [49]).: _Let \(\mathcal{B}_{A}\) be some orthonormal basis for \(A\). We say that a subalgebra \(\mathbf{P}\) of \(\mathbf{M}_{\mathbf{AC}}\) is a PI algebra associated with \(\mathcal{B}_{A}\) if it satisfies:_ 1. \(\mathbf{P}=\langle\{\ket{m}\bra{n}\otimes U_{mn}\}\rangle\) _for some set of_ \((m,n)\in[d_{A}]\times[d_{A}]\) _and_ \((m,n)\)_-dependent unitaries_ \(U_{mn}\in\mathbf{M}_{C}\)_._ 2. \(\mathbf{P}\) _is isomorphic to its reduction on_ \(\mathcal{B}_{C}\) _via the map_ \(\ket{m}\bra{n}\otimes U_{mn}\rightarrow\ket{m}\bra{n}\)_._ Figure 9: Compilation of level-1 CNOT (a) and a stabilizer \(X^{\otimes 2}\) measurement circuit (b) using our constructed 1-FT level-1 gadgets in \(\mathcal{S}\). Note that the second condition posts three requirements on the unitaries \(U_{mn}\): 1. \(U_{ma}U_{bn}=\delta_{a,b}U_{mn}\). 2. \(U_{mn}=U_{nm}^{\dagger}\). 3. \(U_{mm}=I\). These requirements lead to the following properties of operators in a PI algebra: **Proposition 3** (Property of operators lying in a PI algebras).: _Let \(\mathbf{P}=\left\langle\left\{\left|m\right\rangle\left\langle n\right|\otimes U _{mn}\right\}\right\rangle\) be some PI algebra associated with an ancilla basis \(\mathcal{B}_{A}\), and let \(\mathbf{P}|_{\mathcal{B}_{A}}\) be its reduction on \(\mathcal{B}_{A}\). For any operator \(O\in\mathbf{P}\) and \(\left|r\right\rangle,\left|i\right\rangle\in\mathcal{B}_{A}\), we have \(\left\langle r\right|O\left|i\right\rangle\propto U_{ri}\), where \(U_{ri}:=I\) if \(\left|r\right\rangle\left\langle i\right|\notin\mathbf{P}|_{\mathcal{B}_{A}}\)._ Proof.: If \(O\in\mathbf{P}\), we can write \(O\) as \(O=\sum_{m,n}o_{mn}\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\) for some \(o_{mn}\in\mathcal{C}\). If \(\left|r\right\rangle\left\langle i\right|\in\mathbf{P}|_{\mathcal{B}_{A}}\), we have \(\left\langle r\right|O\left|i\right\rangle=o_{ri}U_{ri}\propto U_{ri}\); If \(\left|r\right\rangle\left\langle i\right|\notin\mathbf{P}|_{\mathcal{B}_{A}}\), we have \(\left\langle r\right|O\left|i\right\rangle=0\times I\). Note that Prop. 3 also implies that for any operator \(O=\prod_{i}O_{i}\) that is a product of operators lying in a PI algebra \(\mathbf{P}=\left\langle\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\right\rangle\), we have \(\left\langle r\right|O\left|i\right\rangle\propto U_{ri}\). 
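Proposition 3 is straightforward to test numerically on a toy PI algebra. In the sketch below (an illustration under our own assumptions), we take \(U_{mn}:=V_{m}V_{n}^{\dagger}\) for arbitrary unitaries \(V_{m}\), which satisfies \(U_{ma}U_{an}=U_{mn}\), \(U_{mn}=U_{nm}^{\dagger}\), and \(U_{mm}=I\), form products of algebra elements, and check that each conditional block \(\bra{r}O\ket{i}\) remains proportional to \(U_{ri}\).

```python
import numpy as np

rng = np.random.default_rng(7)
dA, dC = 3, 4                      # ancilla / mode dimensions (toy assumption)

def haar_unitary(d):
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

# U_mn := V_m V_n^dag obeys U_ma U_an = U_mn, U_mn = U_nm^dag, U_mm = I.
V = [haar_unitary(dC) for _ in range(dA)]
U = [[V[m] @ V[n].conj().T for n in range(dA)] for m in range(dA)]

def algebra_element():
    """Random linear combination sum_{mn} o_mn |m><n| (x) U_mn."""
    O = np.zeros((dA * dC, dA * dC), dtype=complex)
    for m in range(dA):
        for n in range(dA):
            E = np.zeros((dA, dA)); E[m, n] = 1.0
            O += (rng.normal() + 1j * rng.normal()) * np.kron(E, U[m][n])
    return O

# Products of elements stay in the algebra, so every conditional block
# <r| O |i> should stay proportional to U_ri (Prop. 3).
O = algebra_element() @ algebra_element() @ algebra_element()
blocks = O.reshape(dA, dC, dA, dC)
r, i = 2, 0
B = blocks[r, :, i, :]
lam = np.trace(U[r][i].conj().T @ B) / dC        # proportionality constant
print(np.allclose(B, lam * U[r][i]))             # True
```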
### PI conditions Ref. [49] provides a simple algebraic condition for PI gates using PI algebras: **Proposition 4** (Algebraic condition for PI gates. Theorem 1 of Ref. [49]. ).: _An ancilla-assisted gate is PI (see Def. 5) in an ancilla basis \(\mathcal{B}_{A}\) if the Hamiltonian and all the Lindblad jump operators are all in some PI algebra associated with \(\mathcal{B}_{A}\)._ Note that the condition in Prop. 4 guarantees PI up to infinite order. In the following, we generalize it for finite-order PI gates. First, the effective Hamiltonian \(H_{\mathrm{eff}}=H_{AC}(t)-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}\) needs to be in some PI algebra, i.e. \(H_{\mathrm{eff}}(t)=\sum_{m,n=1}^{d_{A}}\xi_{mn}(t)\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\) for some \(\xi_{mn}(t)\in\mathbb{C}\) and unitaries \(U_{mn}\). Next, we define an \(n\)-th order path algebra \(\mathbf{p}^{[n]}\) containing all the possible paths appearing in the noise expansion of the system dynamics up to \(n\)-th order, an associated \(n\)-th order reachable state set \(\mathbf{S}_{A}^{[n]}\) containing all ancilla states reachable via the \(n\)-th order paths in \(\mathbf{p}^{[n]}\) when starting from \(\left|i\right\rangle\), and an \(n\)-th order error set \(\mathbf{E}^{[n]}\subseteq\{J_{j}\}\) containing all possible errors that can act nontrivially on \(\mathbf{S}_{A}^{[n-1]}\): **Definition 10** (Finite-order path algebras, reachable states, and error sets).: _Given a Hamiltonian \(H_{AC}(t)\) that lies in some PI algebra associated with an ancilla basis \(\mathcal{B}_{A}\), a set of errors \(\{J_{j}\}\), and an initial ancilla state \(\left|i\right\rangle\), we define the zeroth-order path algebra \(\mathbf{p}^{[0]}\) as an algebra that contains all the paths in the effective Hamiltonian \(H_{\mathrm{eff}}(t):=H_{AC}(t)-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}=\sum_{m,n=1}^{d_{A}}\xi_{mn}(t)\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\):_ \[\mathbf{p}^{[0]}:=\langle\left\{\left|m\right\rangle\left\langle n\right|\otimes U_{mn}\mid\exists t\in[0,T],\xi_{mn}(t)\neq 0\right\}\rangle, \tag{11}\] _Let \(\mathbf{p}^{[0]}|_{\mathcal{B}_{A}}\) be the reduction of \(\mathbf{p}^{[0]}\) on \(\mathcal{B}_{A}\). We define a zeroth-order reachable state set including all states reachable via the zeroth-order paths when starting from \(\left|i\right\rangle\):_ \[\mathbf{S}_{A}^{[0]}:=\{\left|m\right\rangle\in\mathcal{B}_{A}\mid\left|m\right\rangle\left\langle i\right|\in\mathbf{p}^{[0]}|_{\mathcal{B}_{A}}\}, \tag{12}\] _and we define a zeroth-order error set \(\mathbf{E}^{[0]}:=\emptyset\)._ _Then, we define the \(n\)-th order (\(n\geq 1\)) error set, path algebra, and reachable state set inductively:_ \[\begin{split}\mathbf{E}^{[n]}&=\mathbf{E}^{[n-1]}\cup\{J_{j}\mid\exists\left|m\right\rangle\in\mathbf{S}_{A}^{[n-1]},J_{j}\left|m\right\rangle\neq 0\},\\ \mathbf{p}^{[n]}&=\langle\mathbf{p}^{[n-1]}\cup\mathbf{E}^{[n]}\rangle,\\ \mathbf{S}_{A}^{[n]}&=\{\left|m\right\rangle\mid\left|m\right\rangle\left\langle i\right|\in\mathbf{p}^{[n]}|_{\mathcal{B}_{A}}\}.\end{split} \tag{13}\] **Proposition 5** (Algebraic conditions for finite-order PI gates).: _Given an ancilla-assisted gate \(\mathcal{G}(T)\) generated by a Hamiltonian \(H_{AC}(t)\) and jump errors \(\{J_{j}(t)\}\).
\(\mathcal{G}(T)\) is \(n\)-PI in an ancilla basis \(\mathcal{B}_{A}\) for an initial ancilla state \(\left|i\right\rangle\) if \(H_{AC}(t)\cup\{J_{j}^{\dagger}J_{j}\}\cup\mathbf{E}^{[n]}\) are in some PI algebra, where \(\mathbf{E}^{[n]}\) is the \(n\)-th order error set constructed from \((H_{AC}(t),\{J_{j}\},\mathcal{B}_{A},\left|i\right\rangle)\)._ \begin{table} \begin{tabular}{c c c} \hline \hline Gadgets & Prior schemes & Our scheme \\ \hline Error correction & PI parity measurement [38]; engineered dissipation [18; 53]. & MPI parity measurement + one-bit teleportation with a shared ancillary mode. \\ \hline \(Z\)-type gates & PI SNAP gate [29; 30]; self-Kerr \((a^{\dagger}a)^{2}\) Hamiltonian [41]. & GPI SNAP gate. \\ \hline \(X\)-type gates & Teleported Hadamard gate with an ancillary bosonic mode [41]. & \(X\)-axis rotation using cavity displacements and SNAP gates. \\ \hline Entangling gate & CZ gate using cross-Kerr \(a^{\dagger}a\otimes b^{\dagger}b\) [41]. & \(XX\) rotation using beam-splitters + SNAP gates. \\ \hline \(X\)-basis measurement & Phase measurement [41]. & Beam splitter + SNAP gates. \\ \hline \end{tabular} \end{table} Table 1: Comparison of different constructions of fault-tolerant gadgets for rotation-symmetric codes that can correct photon losses. We denote \(Z\)-type gates as those that preserve the photon number (alternatively, those that add photon-number-dependent phases), and \(X\)-type gates as those that do not preserve the photon number. Proof.: Let \(H_{AC}(t)\cup\{J_{j}^{\dagger}J_{j}\}\cup\mathbf{E}^{[n]}\) be in some PI algebra \(\mathbf{P}=\langle\{|m\rangle\,\langle n|\otimes U_{mn}\}\rangle\). The effective Hamiltonian \(H_{\mathrm{eff}}(t)=H_{AC}(t)-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}\in\mathbf{P}\). Furthermore, the non-jump propagator \(W(t_{2},t_{1}):=\mathcal{T}\exp[-i\int_{t=t_{1}}^{t_{2}}dt\,H_{\mathrm{eff}}(t)]\) is also in \(\mathbf{P}\). Let \(\mathcal{I}_{k}:=\{j\mid J_{j}\in\mathbf{E}^{[k]}\}\) be the indices of the \(k\)-th order error set. Now, consider a \(k\)-th order Dyson expansion of \(\mathcal{G}(T)\) (see Eq. (9)): \[\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle=\left[\int_{t_{h}=0}^{T}dt_{h}\right]_{h\in[k]}\left[\sum_{j_{h}\in\mathcal{I}_{k}}\right]_{h\in[k]}G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})\bullet G_{ri}^{k\dagger}(\{t_{h},j_{h}\}_{h\in[k]}), \tag{10}\] where \[G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})=\langle r|\,\mathcal{T}\Bigg{\{}W(T,t_{k})\times\prod_{h=1}^{k}K_{j_{h}}(t_{h})W(t_{h},t_{h-1})\Bigg{\}}\left|i\right\rangle, \tag{11}\] with \(t_{0}=0\). Since all the operators in Eq. (11) are in \(\mathbf{P}\), we have \(G_{ri}^{k}(\{t_{h},j_{h}\}_{h\in[k]})\propto U_{ri}\) according to Prop. 3. As such, \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\propto U_{ri}\bullet U_{ri}^{\dagger}\). Therefore, \(\langle\langle r|\mathcal{G}^{[k]}(T)|i\rangle\rangle\propto U_{ri}\bullet U_{ri}^{\dagger}\) for any \(k\leq n\) and the \(n\)-PI condition in Eq. (13) is satisfied. ### GPI conditions Here, we further relax the condition for finite-order PI in Prop. 5 and provide algebraic conditions for finite-order GPI gates. **Proposition 6** (Algebraic conditions for finite-order GPI gates).: _Given a bosonic code with a code projection \(P_{c}\) and an ancilla-assisted gate generated by a Hamiltonian \(H_{AC}(t)\) and jump errors \(\{J_{j}(t)\}\).
\(\mathcal{G}(T)\) is \(n\)-GPI in an ancilla basis \(\mathcal{B}_{A}\) for an initial ancilla state \(|i\rangle\) if_ 1. _There exists some PI algebra_ \(\mathbf{P}=\langle\{|m\rangle\,\langle n|\otimes U_{mn}\}\rangle\) _such that_ \(H_{AC}(t)\in\mathbf{P}\)_, and any error_ \(J_{j}(t)\in\mathbf{E}^{[n]}\)_, where_ \(\mathbf{E}^{[n]}\) _is the_ \(n\)_-th order error set constructed from_ \((H_{AC}(t),\{J_{j}\},\mathcal{B}_{A},|i\rangle)\)_, is of the form_ \(J_{j}(t)=\sum_{m,n}\left|m\right\rangle\langle n|\otimes R_{mn}^{j}(t)U_{mn}\)_, where_ \(R_{mn}^{j}(t)\) _are unitaries._ 2. _Let_ \(\xi:=\{R_{mn}^{j}(t)\mid J_{j}\in\mathbf{E}^{[n]};m,n\in[d_{A}];t\in[0,T]\}\)_. Any error_ \(E\in\xi\) _satisfies_ \[[E,U_{mn}]=0.\] (12) 3. _Let_ \(\epsilon:=\{\mathcal{T}\prod_{i=1}^{n}E_{i}(t_{i})\}_{E_{i}(t_{i})\in\xi\cup I}\)_. Errors in_ \(\epsilon\) _satisfy the Knill-Laflamme condition with respect to_ \(P_{c}\)_._ Proof.: We follow the same proof as that for Prop. 5. Now, each Kraus operator \(G_{ri}^{k}(\{t_{h},j_{h}\})\) of \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\) (see Eq. (10) and Eq. (11)) reads \[G_{ri}^{k}(\{t_{h},j_{h}\})=U_{ri}\sum_{m_{h},n_{h}}c_{m_{h},n_{h}}\left[\mathcal{T}\prod_{h=1}^{k}R_{m_{h}n_{h}}^{j_{h}}(t_{h})\right], \tag{13}\] for some \(c_{m_{h},n_{h}}\in\mathbb{C}\). Here, we have substituted the form of \(J_{j}\) from the first condition of Prop. 6 into Eq. (11). The operators \(R_{m_{h}n_{h}}^{j_{h}}\) accumulate at the front as if they were transparent to the unitaries, due to the second condition in Prop. 6. Then, the Kraus operators of \(\langle\langle r|\mathcal{G}_{k}(T)|i\rangle\rangle\), given by the union of all \(G_{ri}^{k}(\{t_{h},j_{h}\})\), are all linear combinations of errors in \(\{U_{ri}\mathcal{T}\prod_{i=1}^{k}E_{i}(t_{i})\}_{E_{i}(t_{i})\in\xi}\), where \(\xi\) is defined in the second condition of Prop. 6. Finally, the Kraus operators of \(\langle\langle r|\mathcal{G}^{[n]}(T)|i\rangle\rangle\) are all linear combinations of errors in \(\epsilon\), up to the same unitary \(U_{ri}\). Therefore, the errors are correctable if \(\epsilon\) satisfies the KL condition (the third condition of Prop. 6). As an example, we consider the GPI SNAP gate in Sec. IV.1.1 using a \(\chi\)-mismatched three-level transmon. In the presence of the ancilla relaxations, one can show that this gate is 1-GPI by checking the conditions in Proposition 6. In the interaction picture, the first-order error set \(\mathbf{E}^{[1]}=\{\ket{e}\bra{f}\otimes e^{-i\Delta\chi ta^{\dagger}a}\}\). We can find a PI algebra such that the first condition of Prop. 6 is satisfied: \[\mathbf{P}=\langle\{|f\rangle\,\langle g|\otimes S,|g\rangle\,\langle f|\otimes S^{\dagger},|e\rangle\,\langle f|\otimes I\}\rangle. \tag{14}\] Here, there is only a single \(J_{j}\) with \(R_{ef}(t)=e^{-i\Delta\chi ta^{\dagger}a}\). Therefore, \(\xi=\{R_{ef}(t)\}_{t\in[0,T]}\) and \(\epsilon=\xi\cup I\). By choosing \(S\) as a logical operator for the four-legged cat and noticing that \([R_{ef}(t),S]=[R_{ef}(t),S^{\dagger}]=0\), the second condition of Proposition 6 is also satisfied. Finally, \(\epsilon=\{R_{ef}(t)\}_{t\in[0,T]}\cup I\) satisfies the KL condition w.r.t. \(P_{c}\) of the four-legged cat as long as \(\Delta\chi T<\pi/2\) (see Sec. II.1.1). ## Appendix B Error transparent/closure control for bosonic errors In this section, we review the error-transparent [47] and error-closure [32] quantum control techniques that enable fault tolerance against central system (bosonic) errors.
For bosonic errors, we neglect their contribution to the no-jump evolution (the no-jump propagator is purely generated by the Hamiltonian) in the jump expansion of a quantum channel (see Eq. (7)). Such an approximation can be justified when considering the photon loss, whose back-action associated with \(a^{\dagger}a\) is correctable in the large-\(\alpha\) regime for cat codes (see Sec. II.1.1). We consider a unitary \(U(t)\) generated by a Hamiltonian \(H(t)\) that acts only on the bosonic mode. Consider a bosonic code with a code projection \(P_{c}\) and a bosonic error \(E\). The error-transparent control aims to engineer \(H(t)\) such that the dynamics is transparent to \(E\), i.e. errors that occur during the gate are equivalent to those that occur after the unitary: \[U(T,t_{p})E\cdots EU(t_{2},t_{1})EU(t_{1},0)P_{c}\propto E^{p}U(T,0)P_{c}, \tag{20}\] for any \(T>t_{p}>\cdots>t_{1}>0\) and \(p\geq 1\). We say that the unitary is \(k\)-th order error-transparent to \(E\) if Eq. (20) is satisfied for \(p\leq k\). When using \(U(T,0)\) as a logical gate for the bosonic code, we typically require \(H(t)\) (and thereby, \(U(t)\)) to be block-diagonal, i.e. \((I-P_{c})H(t)P_{c}=0\). In this case, Eq. (20) is equivalent to \[U(T,t)EU^{\dagger}(T,t)P_{j}\propto EP_{j}, \tag{21}\] for any \(T>t>0\) and \(j\leq k-1\), where \(P_{j}:=E^{j}P_{c}\). Obviously, Eq. (21) is satisfied if \(E\) commutes with \(H(t)\) when acting on the error spaces up to \((k-1)\)-th order, i.e. \[[E,H(t)]P_{j}=0, \tag{22}\] for \(j\leq k-1\). We note that Eq. (22) is only a sufficient condition for the error-transparency definition in Eq. (21). For instance, Eq. (21) is also satisfied if \([E,H(t)]P_{j}\propto EP_{j}\). In this work, we are interested in ancilla-assisted gates. Similar to the PI/GPI case, when we initialize the ancilla in \(\ket{i}\) and projectively measure it in a basis \(\mathcal{B}_{A}=\{\ket{m}_{A}\}\), we only care about the conditional bosonic channels given a measurement outcome \(m\). As such, we consider the following conditional error transparency, which is easier to achieve than the unconditional error transparency presented above: **Definition 11** (Conditional error transparency).: _Given a bosonic code with a code projection \(P_{c}\), an initial ancilla state \(\ket{i}\) and an ancilla orthonormal basis \(\mathcal{B}_{A}=\{\ket{m}_{A}\}\), we say that an ancilla-assisted unitary \(U(t)\) is \((P_{c},\ket{i},\mathcal{B}_{A})\)-error-transparent to a bosonic error \(E\) up to \(k\)-th order if for any \(\ket{r}\in\mathcal{B}_{A}\) and \(p\leq k\):_ \[\bra{r}U(T,t_{p})E\cdots EU(t_{1},0)\ket{i}P_{c}\propto E^{p}\bra{r}U(T,0)\ket{i}P_{c}. \tag{23}\] Ref. [32] generalizes the error transparency condition to the so-called error closure condition. In the case of a single error \(E\) and a static Hamiltonian \(H_{0}\), they first construct a vector space \(\epsilon\) with a basis \(\{E,[H_{0},E]\}\) over \(\mathbb{C}\), and the error-closure condition is satisfied if for any \(e\in\epsilon\): 1. \([H_{0},e]\in\epsilon\). 2. Errors in \(\epsilon\) are correctable (satisfying the KL condition) with respect to \(P_{c}\). Such a condition guarantees that each first-order error trajectory gives: \[e^{iH_{0}(T-t)}Ee^{iH_{0}t}=\left[e^{iH_{0}(T-t)}Ee^{-iH_{0}(T-t)}\right]e^{iH_{0}T}=E^{\prime}e^{iH_{0}T}, \tag{24}\] where \(E^{\prime}:=e^{iH_{0}(T-t)}Ee^{-iH_{0}(T-t)}\in\epsilon\) using the first condition.
Then the desired unitary is implemented up to a correctable error \(E^{\prime}\) according to the second condition. The error closure condition generalizes the error transparency condition as it allows errors to propagate to correctable errors, rather than rigorously commuting through the unitary, in a similar spirit as our generalization from PI to GPI. ### \(Z\)-axis rotation Recall that a \(Z\)-axis rotation is implemented by a GPI SNAP gate (see Sec. IV.1.1). In the interaction picture associated with the base Hamiltonian \(H_{0}=-(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e}\bra{e})\otimes a^{\dagger}a\), the Hamiltonian is static, \(\tilde{H}=\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right]\), and the photon loss error reads \(\tilde{a}(t)=e^{i(\chi_{f}\ket{f}\bra{f}+\chi_{e}\ket{e}\bra{e})t}\otimes a\). Note that \([\tilde{a}(t),\tilde{H}]\,(I\otimes P_{c})\neq 0\), so the unconditional error transparency does not hold. Fortunately, we now show that the conditional error transparency in Eq. (23) holds up to a single-photon loss if we choose \(S(\vec{\phi})\) appropriately. For \(p=1\), the L.H.S. of Eq. (23) reads \[\begin{split}&\bra{r}\tilde{U}(T,t)\tilde{a}(t)\tilde{U}(t,0)\ket{g}P_{c}\\ &=\sum_{m\in\{f,g\}}e^{i\chi_{m}t}\tilde{U}_{rm}(T,t)\,a\,\tilde{U}_{mg}(t,0)P_{c}.\end{split} \tag{25}\] Recall that \(\tilde{U}(t_{2},t_{1})\) is in a PI algebra (see Appendix A) \(\mathbf{P}=\langle\{\ket{f}\bra{g}\otimes S,\ \ket{g}\bra{f}\otimes S^{\dagger},\ \ket{g}\bra{g}\otimes I,\ \ket{e}\bra{e}\otimes I,\ \ket{f}\bra{f}\otimes I\}\rangle\). Therefore, \(\tilde{U}_{fg}\propto S(\vec{\phi})\), \(\tilde{U}_{gf}\propto S^{\dagger}(\vec{\phi})\), and \(\tilde{U}_{gg},\tilde{U}_{ff}\propto I\). Choosing \(S(\vec{\phi})\) as a logical gate, we have \(\tilde{U}_{mg}(t,0)P_{c}=P_{c}\tilde{U}_{mg}(t,0)P_{c}\) for \(m\in\{f,g\}\). If \([\tilde{U}_{rm}(T,t),a]P_{c}=0\) for any \(r,m\in\{g,f\}\), we can then swap \(\tilde{U}_{rm}(T,t)\) and \(a\) in Eq. (25) and obtain \(\bra{r}\tilde{U}(T,t)\tilde{a}(t)\tilde{U}(t,0)\ket{g}P_{c}\propto a\tilde{U}_{rg}(T,0)P_{c}\). Such a condition is equivalent to \[[a,S(\vec{\phi})]P_{c}=[a,S^{\dagger}(\vec{\phi})]P_{c}=0, \tag{26}\] which simply requires that the applied unitary \(S(\vec{\phi})/S^{\dagger}(\vec{\phi})\) is error-transparent to \(a\). This can be satisfied by setting \(S(\vec{\phi})=P_{0}+P_{3}+e^{i\theta}(P_{2}+P_{1})\). ### \(X\)-axis rotation Here, we show that the \(X\)-axis rotation is error-transparent to a single photon loss by showing that the two involved SNAP gates (see Eq. (25)) satisfy the conditional error transparency in Def. 11. Taking the first SNAP gate as an example, the proof is the same as that for the \(Z\)-axis rotation in the previous section, except that we now need to change \(P_{c}\) to \(D(\alpha)P_{c}\) when verifying Eq. (26). Recall that \(S=e^{i\theta}P_{[s]}+I-P_{[s]}\) for the \(X\)-axis rotation, where \(P_{[s]}=\sum_{i=0}^{s}\left|i\right\rangle\left\langle i\right|\) is a projection into a neighborhood of vacuum. We take the large-\(\alpha\) approximation \(\left|+_{L}\right\rangle\approx\left|C_{\alpha}^{+}\right\rangle\) and \(\left|-_{L}\right\rangle\approx\left|C_{i\alpha}^{+}\right\rangle\).
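As a sanity check of Eq. (26), the following QuTiP sketch (our own illustration, not from the paper) verifies numerically that the choice \(S(\vec{\phi})=P_{0}+P_{3}+e^{i\theta}(P_{1}+P_{2})\) commutes with \(a\) on the four-legged-cat code space. We assume here that \(P_{k}\) denotes the projector onto photon numbers equal to \(k\) modulo 4, and the truncation, \(\alpha\), and \(\theta\) are arbitrary example values.

```python
# Numerical sketch: [a, S] P_c ≈ 0 for S = P0 + P3 + e^{i theta}(P1 + P2),
# with P_k the projector onto photon number k (mod 4) (an assumption made for
# this illustration) and P_c the four-legged-cat code projector.
import numpy as np
import qutip as qt

dim, alpha, theta = 60, 2.0, 0.3 * np.pi        # example parameters
a = qt.destroy(dim)

def mod4_projector(k):
    return qt.Qobj(np.diag([1.0 if n % 4 == k else 0.0 for n in range(dim)]))

P = [mod4_projector(k) for k in range(4)]
S = P[0] + P[3] + np.exp(1j * theta) * (P[1] + P[2])

# four-legged-cat codewords (photon number 0 mod 4 and 2 mod 4)
coh = [qt.coherent(dim, amp) for amp in (alpha, -alpha, 1j * alpha, -1j * alpha)]
zero = (coh[0] + coh[1] + coh[2] + coh[3]).unit()
one = (coh[0] + coh[1] - coh[2] - coh[3]).unit()
P_c = zero * zero.dag() + one * one.dag()

print(((a * S - S * a) * P_c).norm())           # ≈ 0 (up to truncation error)
```

In the same spirit, replacing \(P_{c}\) by \(D(\alpha)P_{c}\) and \(S\) by \(e^{i\theta}P_{[s]}+(I-P_{[s]})\) gives a numerical handle on the approximate commutation (Eq. (103)) used for the \(X\)-axis rotation below.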
Under the large-\(\alpha\) approximation, \[\begin{split} aSD(\alpha)\left|+_{L}\right\rangle&\approx 2\alpha\left|2\alpha\right\rangle\approx SaD(\alpha)\left|+_{L}\right\rangle,\\ aSD(\alpha)\left|-_{L}\right\rangle&\approx aD(\alpha)\left|-_{L}\right\rangle\approx SaD(\alpha)\left|-_{L}\right\rangle,\end{split} \tag{103}\] where we have used \(S\left|\beta\right\rangle\approx\left|\beta\right\rangle\) for \(\left|\beta\right|\gg 1\). Eq. (103) thus verifies that \(S\) commutes with \(a\) when acting on \(D(\alpha)P_{c}\). ### \(XX\) rotation Here, we show that the \(XX\) rotation gate in Sec. VI.3 is robust against a small input phase rotation, as well as against ancilla relaxation/dephasing or cavity loss occurring in the middle of the gate. We first consider the input phase rotation error \(E=e^{i(\delta\theta_{a}a^{\dagger}a+\delta\theta_{b}b^{\dagger}b)}\). Our aim is to show that \[U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})\approx E(P_{c}\otimes P_{c}). \tag{104}\] If the SNAP gate \(S\) is replaced by the robust version \(S=e^{-i\theta}P_{[s]}+(I-P_{[s]})\), we can write \(U_{XX}\) in the following form, \[U_{XX}=e^{i\theta}P_{([s],I)}+(I-P_{([s],I)}), \tag{105}\] where \[P_{([s],I)}=BS(\tfrac{\pi}{2})^{\dagger}\left(P_{[s]}\otimes I+I\otimes P_{[s]}\right)BS(\tfrac{\pi}{2}) \tag{106}\] is a projector onto the space where the input bosonic modes \(A\) and \(B\) have almost clean interference results (i.e., one output mode is close to vacuum) under the balanced beam-splitter \(BS(\frac{\pi}{2})\). We can get Eq. (34) in Sec. VI.3 from Eq. (105). To prove Eq. (104), we note that \[\begin{split}& U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})=EP_{\pm,\pm}+EP_{\pm,\mp}+\\ &(e^{-i\theta}-1)(I-P_{([s],I)})EP_{\pm,\pm}+(e^{i\theta}-1)P_{([s],I)}EP_{\pm,\mp},\end{split} \tag{107}\] where \[\begin{split} P_{\pm,\pm}&=\left|+_{L},+_{L}\right\rangle\left\langle+_{L},+_{L}\right|+\left|-_{L},-_{L}\right\rangle\left\langle-_{L},-_{L}\right|\\ &\approx\left|C_{\alpha}^{+},C_{\alpha}^{+}\right\rangle\left\langle C_{\alpha}^{+},C_{\alpha}^{+}\right|+\left|C_{i\alpha}^{+},C_{i\alpha}^{+}\right\rangle\left\langle C_{i\alpha}^{+},C_{i\alpha}^{+}\right|,\\ P_{\pm,\mp}&=\left|+_{L},-_{L}\right\rangle\left\langle+_{L},-_{L}\right|+\left|-_{L},+_{L}\right\rangle\left\langle-_{L},+_{L}\right|\\ &\approx\left|C_{\alpha}^{+},C_{i\alpha}^{+}\right\rangle\left\langle C_{\alpha}^{+},C_{i\alpha}^{+}\right|+\left|C_{i\alpha}^{+},C_{\alpha}^{+}\right\rangle\left\langle C_{i\alpha}^{+},C_{\alpha}^{+}\right|.\end{split} \tag{108}\] To simplify Eq.
(107), we notice that, \[\begin{split}&\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}|P_{([s],I)}|C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}\rangle\\ =&\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e ^{i\theta_{b}}}^{+}|BS(\frac{\pi}{2})^{\dagger}(P_{[s]}\otimes I+I\otimes P_ {[s]})\\ &\quad BS(\frac{\pi}{2})|C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e ^{i\theta_{b}}}^{+}\rangle.\end{split} \tag{109}\] Since \[\begin{split}& BS(\frac{\pi}{2})|C_{\alpha e^{i\theta_{a}}}^{+},C_{ \alpha e^{i\theta_{b}}}^{+}\rangle\\ =&\mu_{\alpha}^{2}BS(\frac{\pi}{2})\left(|\alpha e^{i \delta\theta_{a}}\rangle+|-\alpha e^{i\delta\theta_{a}}\rangle\right)_{A} \left(|\alpha e^{i\delta\theta_{b}}\rangle+|-\alpha e^{i\delta\theta_{b}} \rangle\right)_{B}\\ =&\mu_{\alpha}^{2}\bigg{(}\big{(}\left|\frac{\alpha }{\sqrt{2}}(e^{i\delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{A }\big{(}\left|\frac{\alpha}{\sqrt{2}}(e^{i\delta\theta_{a}}-e^{i\delta \theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(e^{i\delta\theta_{a}}-e^{i\delta \theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(e^{i \delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(-e^{i\delta\theta_{a}}+e^{i \delta\theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(-e^{i \delta\theta_{a}}-e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\\ &\quad+|\frac{\alpha}{\sqrt{2}}(-e^{i\delta\theta_{a}}-e^{i \delta\theta_{b}})\big{)}\,\big{)}_{A}\big{(}\left|\frac{\alpha}{\sqrt{2}}(-e^{i \delta\theta_{a}}+e^{i\delta\theta_{b}})\right\rangle\big{)}_{B}\bigg{)}.\end{split} \tag{110}\] Here \(\mu_{\alpha}=1/\sqrt{2(1+\exp(-2|\alpha|^{2}))}\) is the normalization factor of the cat state. when \(|\delta\theta_{a}|,|\delta\theta_{b}|<\pi/8\), we have \(|e^{i\delta\theta_{a}}-e^{i\delta\theta_{b}}|<2\sin(\pi/8)\). As a result, either the components on mode \(A\) or the ones on mode \(B\) will be almost covered in the region of \(P_{[s]}\), which implies that when \(\alpha\gg 1\), we can choose a value of \(s=O(|\alpha|^{2})\) such that \[\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}|P_{([s],I)} |C_{\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}\rangle\to 1. \tag{111}\] Similarly, we will have \[\begin{split}&\langle C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i \theta_{b}}}^{+}|P_{([s],I)}|C_{i\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i \theta_{b}}}^{+}\rangle\to 1,\\ &\langle C_{\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i\theta_{b}}}^{+}|P_{([s ],I)}|C_{\alpha e^{i\theta_{a}}}^{+},C_{i\alpha e^{i\theta_{b}}}^{+}\rangle \to 0,\\ &\langle C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}|P_{( [s],I)}|C_{i\alpha e^{i\theta_{a}}}^{+},C_{\alpha e^{i\theta_{b}}}^{+}\rangle \to 0.\end{split} \tag{112}\] When Eq. (111) and (111) hold, we can simplify Eq. (107) to \[U_{XX}EU_{XX}^{\dagger}(P_{c}\otimes P_{c})\approx E(P_{\pm,\pm}+P_{\pm,\mp}) =E(P_{c}\otimes P_{c}). \tag{113}\] i.e., \(U_{XX}\) is robust against small phase rotation. Now, we consider the error occurs during the process of \(XX\) rotation. First, we show that a single photon loss during the \(XX\) rotation can only propagate to at most a single loss per mode by combing the idea of error transparency and error closure. Recall that a \(XX\) rotation is implemented by two SNAP gates sandwiched by two BSs: \(U_{XX}=\text{BS}(S\otimes S)\text{BS}\). 
We first show that a single photon loss during the BS can only propagate to an error of the form \(c_{1}a+c_{2}b\), where \(c_{1},c_{2}\in\mathbb{C}\), since the BS satisfies the errorclosure condition. To prove Eq. (14), we take the approximation \(\ket{+_{L}}\approx|C_{\alpha}^{+}\rangle\) and \(\ket{-_{L}}\approx|C_{i\alpha}^{+}\rangle\) and show that \([S\otimes S,c_{1}a+c_{2}b]\text{BS}\ket{\pm_{L},\pm_{L}}=0\). For \(\ket{+_{L},-_{L}}\), \[(S\otimes S)(c_{1}a+c_{2}b)\text{BS}\ket{+_{L},-_{L}}=(c_{1}a+c_{ 2}b)\text{BS}\ket{+_{L},-_{L}} \tag{15}\] \[= (c_{1}a+c_{2}b)(S\otimes S)\text{BS}\ket{+_{L},-_{L}},\] since \(S\otimes S\) acts trivially on both \(\text{BS}\ket{+_{L},-_{L}}\) and \((c_{1}a+c_{2}b)\text{BS}\ket{+_{L},-_{L}}\). The same argument also applies to \(\ket{-_{L},+_{L}}\). For \(\ket{+_{L},+_{L}}\), we have \(\text{BS}\ket{+_{L},+_{L}}\propto(\ket{2\alpha,0}+\ket{0,2\alpha}+\ket{-2 \alpha,0}+\ket{0,-2\alpha})\). Then \[(S\otimes S)(c_{1}a+c_{2}b)\text{BS}\ket{+_{L},+_{L}} \tag{16}\] \[= e^{i\theta}2\alpha[c_{1}(\ket{2\alpha,0}+\ket{-2\alpha,0})+c_{2} (\ket{0,2\alpha}+\ket{0,-2\alpha})]\] \[= (c_{1}a+c_{2}b)(S\otimes S)\text{BS}\ket{+_{L},+_{L}}.\] Similarly, we can show \([S\otimes S,c_{1}a+c_{2}b]\text{BS}\ket{-_{L},-_{L}}=0\). Combining the error-closure property of the BSs and the error-transparency property of the SNAP gates, we conclude that a single photon loss during the execution of the \(XX\) rotation can propagate to an error of the form \(c_{1}^{\prime}a+c_{2}^{\prime}b\), which is correctable by the four-legged cats. ## Appendix C More GPI examples Here, we provide more examples of ancilla-assisted bosonic operations that are GPI. Recall that the SNAP gate using a three-level transmon that we present in the main text (Sec. IV.1.1) is GPI only if the \(\chi\) mismatch \(\Delta\chi\) is small than \(\pi/2T\). In the scenario where \(\Delta\chi\geq\pi/2T\), we can add another flag qubit [54, 55, 56] to make the gate GPI. Notice that the major reason why the SNAP gate with a single ancilla is not GPI when \(\Delta\chi\) is large is that the random dephasing range on the bosonic mode is too large due to the uncertainty of when an ancilla relaxation from \(\ket{f}\) to \(\ket{e}\) happens. Therefore, we can add an extra flag qubit to narrow down the ancilla-relaxation time window, and thus reducing the dephasing range. As shown in Fig. 10, we apply two \(X\) gates to the flag qubit controlled by the ancilla \(\ket{e}\) state at time \(T/2\) and \(T\), respectively. As before, we consider adjacent-level relaxation errors for both the ancilla and the flag qubits, as well as arbitrary forms of dephasing errors. The flag qubit starts from \(\ket{g}\) and only gets excited to \(\ket{e}\) if the ancilla relaxes from \(\ket{f}\) to \(\ket{e}\) at a time \(t\in[T/2,T]\). As such, a single ancilla relaxation incurs a random phase rotation of angle \(\theta=\Delta\chi t\) on the bosonic mode, where \(t\in[0,T/2]\) if the flag qubit is measured in \(\ket{g}\) while \(t\in(T/2,T]\) if the flag qubit is measured in \(\ket{e}\). 
Formally, we can calculate the bosonic channels conditioned on the measurement outcomes of both the ancilla and flag qubits: \[\begin{split}\langle\langle g,g|\mathcal{G}^{[1]}|g,g\rangle\rangle&\propto\mathcal{I},\\ \langle\langle f,g|\mathcal{G}^{[1]}|g,g\rangle\rangle&\propto S\bullet S^{\dagger},\\ \langle\langle e,g|\mathcal{G}^{[1]}|g,g\rangle\rangle&\propto\int_{\theta=0}^{\Delta\chi T/2}Se^{-i\theta a^{\dagger}a}\bullet e^{i\theta a^{\dagger}a}S^{\dagger},\\ \langle\langle e,e|\mathcal{G}^{[1]}|g,g\rangle\rangle&\propto\int_{\theta=\Delta\chi T/2}^{\Delta\chi T}Se^{-i\theta a^{\dagger}a}\bullet e^{i\theta a^{\dagger}a}S^{\dagger},\end{split} \tag{17}\] where the first and second labels in \(\ket{\phi,\psi}\) represent the ancilla and the flag qubit state, respectively. According to Eq. (17), the gate is 1-GPI if \(\Delta\chi T/2<\pi/2\), or \(\Delta\chi<\pi/T\). Therefore, we can allow twice as large a \(\chi\) mismatch by introducing another flag qubit. Note that we do not necessarily require the CNOT gates to be infinitely fast and noiseless, and Eq. (17) holds as long as the CNOT Hamiltonian is diagonal in the ancilla basis, e.g. \(H_{\text{CNOT}}\propto\ket{e}_{A}\bra{e}\otimes\left(\ket{e}_{f}\bra{g}+\ket{g}_{f}\bra{e}\right)\). We remark that one can similarly construct a 1-GPI parity measurement that can tolerate larger \(\chi\) mismatch by introducing another flag qubit. Figure 10: GPI SNAP gate with a flag qubit. The flag qubit is excited to \(\ket{e}\) only if the ancilla decays from \(\ket{f}\) to \(\ket{e}\) at \(t\in[T/2,T]\). ## Appendix D Details of numerical simulations Here, we provide the details of numerical simulations of the teleportation-based QEC and the parity measurement shown in Fig. 7. In Fig. 6(c) we have shown that the teleportation-based QEC circuit can be decomposed into a logical \(\ket{+_L}\) state preparation gadget, an \(X(\pi/2)\) gate on the input mode, two \(Z(\pi/2)\) gates acting on the two bosonic modes respectively, an \(XX(\pi/2)\) gate on the two bosonic modes, a logical \(Z\) measurement on the input mode, and a potential \(Z\) gate on the output mode. Note that the final \(Z\) gate can be done in software by updating the Pauli frame. The \(X(\pi/2)\) and \(XX(\pi/2)\) gates can further be decomposed into displacement operators, SNAP gates and/or beam-splitting operations following Fig. 6(a) and (b). The 1-FT \(\ket{+_L}\)-state preparation and the 1-FT logical \(Z\) measurement are done by the procedures in Algorithm 3 and 4, respectively, based on repeated single-shot parity/\(Z\)-basis measurements by dispersive coupling to a 3-level ancilla and majority vote. In the numerical simulation, we assume the displacement operations can be performed quickly and ignore the faults that occur during them. We also assume a perfect preparation of the coherent state \(\ket{\alpha}\) and a perfect measurement on the 3-level ancilla. On the other hand, we consider a noisy simulation of the other three basic gadgets, including the dispersive coupling between a bosonic mode and a 3-level ancilla, the SNAP gate, and the beam-splitting interaction. Below, we discuss the simulation details of these three noisy gadgets. The dispersive coupling Hamiltonian \(H_{0}\) is given by Eq. (15). We set the dispersive coupling coefficient \(\chi_{f}=2\pi\times 1\)MHz.
In the simulation, we mainly consider four types of Markovian noise: ancilla relaxation \(J_{f\to e}=\sqrt{\gamma_{f\to e}}\ket{e}\bra{f}\) and \(J_{e\to g}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\), ancilla dephasing \(J_{ph}=\sqrt{\gamma_{\phi}}(\ket{e}\bra{e}+2\ket{f}\bra{f})\), and cavity loss \(F_{a}=\sqrt{\kappa_{1}}a\). The SNAP gate Hamiltonian \(\tilde{H}\) in the interaction picture of \(H_{0}\) is given by Eq. (17). For the convenience of numerical simulation, we move to another interaction picture associated with a \(\chi\)-matched Hamiltonian, \[H_{0}^{\prime}=-\chi_{f}(\ket{f}\bra{f}+\ket{e}\bra{e})\otimes a^{\dagger}a. \tag{18}\] In the interaction picture of \(H_{0}^{\prime}\), the SNAP gate Hamiltonian becomes \[\tilde{H}^{\prime}=\Delta\chi\ket{e}\bra{e}\otimes a^{\dagger}a+\Omega\left[\ket{f}\bra{g}\otimes S(\vec{\phi})+h.c.\right], \tag{19}\] where \(\Delta\chi=\chi_{f}-\chi_{e}\). We set the Rabi drive strength \(\Omega=0.3\chi_{f}\). The jump operators are then converted to \(\tilde{J}_{f\to e}=\sqrt{\gamma_{f\to e}}\ket{e}\bra{f}\), \(\tilde{J}_{e\to g}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\otimes e^{i\chi_{f}ta^{\dagger}a}\), \(\tilde{J}_{ph}=\sqrt{\gamma_{\phi}}(\ket{e}\bra{e}+2\ket{f}\bra{f})\) and \(\tilde{F}_{a}=\sqrt{\kappa_{1}}(P_{g}+e^{i\chi_{f}t}(P_{e}+P_{f}))\otimes a\), where \(P_{k}:=\ket{k}\bra{k}\) for \(k=g,e\), and \(f\). Note that \(\tilde{J}_{e\to g}\) and \(\tilde{F}_{a}\) are time-dependent and rotate quickly. To ease the simulation, we make a conservative estimate and approximate \(\tilde{J}_{e\to g}\) by \(\tilde{J}_{e\to g}^{\prime}=\sqrt{\gamma_{e\to g}}\ket{g}\bra{e}\otimes e^{i\frac{\pi}{2}a^{\dagger}a}\), i.e., as long as the \(e\to g\) relaxation happens, a large dephasing error will be introduced on the cavity. To simplify \(\tilde{F}_{a}\), we first notice that \(\mathcal{D}[\tilde{F}_{a}]\approx\mathcal{D}[\sqrt{\kappa_{1}}P_{g}\otimes a]+\mathcal{D}[\sqrt{\kappa_{1}}(P_{e}+P_{f})\otimes a]\), where we ignore all the fast rotating terms. This can be understood as a dephasing error between \(P_{g}\) and \(P_{e}+P_{f}\) introduced by the cavity loss. As a simple approximation, we merge the cavity-induced ancilla dephasing with the real ancilla dephasing, i.e., we set the cavity loss jump operator to be \(\tilde{F}_{a}^{\prime}=\sqrt{\kappa_{1}}a\), while the effective ancilla dephasing rate for \(\tilde{J}_{ph}\) becomes \(\gamma_{\phi}^{\prime}=\gamma_{\phi}+\kappa_{1}/4\). The factor \(1/4\) arises because the \(f\) level carries a coefficient of 2 in \(J_{ph}\). The beam-splitting Hamiltonian is \(H_{\text{BS}}=ig_{\text{BS}}(ab^{\dagger}-a^{\dagger}b)\), where \(g_{\text{BS}}\) is the BS interaction strength. We set the beam-splitting interaction strength \(g_{\text{BS}}=2\chi_{f}\). The major noise sources we consider during this procedure are the cavity losses \(F_{a}=\sqrt{\kappa_{1}}a\) and \(F_{b}=\sqrt{\kappa_{1}}b\).
For the entangled state \(\ket{\Psi}_{AB}\) input, simulate the noisy tensor-ed SNAP gate \(S\otimes S\) with two 3-level ancillae \(A^{\prime}\) and \(B^{\prime}\). The output state is a two-mode entangled state \(\ket{\Psi^{\prime}}_{AB}\). 3. For the entangled state \(\ket{\Psi^{\prime}}_{AB}\) input, simulate the noisy beam-splitting interaction \(\text{BS}^{\dagger}(\frac{\pi}{2})\). The output state is a two-mode entangled state \(\ket{\Psi^{\prime\prime}}_{AB}\). The major bottleneck is Step 2, where we need to consider a simulation of two bosonic modes \(A\) and \(B\) and two 3-level ancillae \(A^{\prime}\) and \(B^{\prime}\). To get rid of this costly simulation, we first perform a Schmidt decomposition on the entangled state \(\ket{\Psi}_{AB}=\sum_{k}\sqrt{p_{k}}\ket{u_{k}}_{A}\otimes\ket{v_{k}}_{B}\). Then, we simulate the SNAP gate on modes \(A\) and \(B\) separately, i.e. on each component \(\ket{u_{k}}\) or \(\ket{v_{k}}\): \((S\otimes S)\ket{\Psi}_{AB}=\sum_{k}\sqrt{p_{k}}(S\ket{u_{k}}_{A})\otimes(S\ket{v_{k}}_{B})\). Taking the simulation of the SNAP gate on mode \(A\) as an example, we need to estimate when the quantum jumps occur and which jump operator occurs [57]. This is determined by the following unnormalized expectation values \[O_{j}(t)=\langle\tilde{\Psi}(t)|J_{j}^{\dagger}J_{j}|\tilde{\Psi}(t)\rangle\,, \tag{20}\] where \[\begin{split}\ket{\tilde{\Psi}(t)}&=e^{-itH_{\text{eff}}}\ket{\Psi}_{AB}\otimes\ket{gg}_{A^{\prime}B^{\prime}}\,,\\ H_{\text{eff}}&=(H_{0}^{\prime})_{AA^{\prime}}\otimes I_{BB^{\prime}}-\frac{i}{2}\sum_{j}J_{j}^{\dagger}J_{j}.\end{split} \tag{21}\] Here the summation over \(j\) is taken over the four different jump operators we consider. We can simplify the form of \(O_{j}(t)\) to \[\begin{split}& O_{j}(t)=\sum_{k}p_{k}O_{j}^{(k)}(t)\\ &=\sum_{k}p_{k}\bra{u_{k}\otimes g}e^{itH_{\text{eff}}}J_{j}^{\dagger}J_{j}e^{-itH_{\text{eff}}}|u_{k}\otimes g\rangle\,.\end{split} \tag{22}\] Here, \(\ket{u_{k}\otimes g}:=\ket{u_{k}}_{A}\otimes\ket{g}_{A^{\prime}}\). It is easy to verify that, for the ancilla relaxation and dephasing, the value of \(O_{j}^{(k)}(t)\) is the same for all the Schmidt components. We also check numerically that, for the cavity loss, the value of \(O_{j}^{(k)}(t)\) is almost the same for the Schmidt components with large weight \(p_{k}\). This means that we can approximate the overall simulation of \(\ket{\Psi}_{AB}\) by fixing the quantum jump locations of all the components \(\{\ket{u_{k}}_{A}\otimes\ket{g}_{A^{\prime}}\}\) to be the same. This can be done in the Monte-Carlo solver in QuTiP by passing the same random seed for all the Schmidt components.
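As a concrete illustration of the simulation strategy described above, the sketch below (our own code, not the paper's; all rates, the \(\chi\) mismatch, the SNAP phase pattern, and the truncation are placeholder values) assembles the interaction-picture SNAP Hamiltonian of Eq. (19) with the approximated jump operators, Schmidt-decomposes a toy two-mode state, and runs one Monte-Carlo trajectory per Schmidt component with a shared random seed so that the jump pattern is common to all components. The seed-passing interface shown is the QuTiP 4 `Options(seeds=...)` mechanism and may differ in other QuTiP versions.

```python
# Sketch: per-Schmidt-component Monte-Carlo simulation of the tensor-ed SNAP step.
# Placeholder parameters throughout; structure follows the description above.
import numpy as np
import qutip as qt

dim = 20                                   # Fock truncation (assumption)
chi_f = 2 * np.pi * 1.0                    # MHz
Omega, d_chi = 0.3 * chi_f, 0.02 * chi_f   # Rabi drive and chi mismatch (placeholders)
g, e, f = (qt.basis(3, k) for k in range(3))
a, n = qt.destroy(dim), qt.num(dim)

theta = 0.5 * np.pi                        # example SNAP phase on the 1, 2 (mod 4) sectors
S = qt.Qobj(np.diag([np.exp(1j * theta) if k % 4 in (1, 2) else 1.0
                     for k in range(dim)]))

# interaction-picture Hamiltonian of Eq. (19) for one mode + one 3-level ancilla
H = (d_chi * qt.tensor(e * e.dag(), n)
     + Omega * (qt.tensor(f * g.dag(), S) + qt.tensor(g * f.dag(), S.dag())))

gamma_fe, gamma_eg, gamma_phi, kappa1 = 1e-2, 1e-2, 5e-3, 2e-3   # placeholder rates
c_ops = [
    np.sqrt(gamma_fe) * qt.tensor(e * f.dag(), qt.qeye(dim)),                 # f -> e
    np.sqrt(gamma_eg) * qt.tensor(g * e.dag(), (1j * np.pi / 2 * n).expm()),  # e -> g, worst-case dephasing
    np.sqrt(gamma_phi + kappa1 / 4) * qt.tensor(e * e.dag() + 2 * f * f.dag(),
                                                qt.qeye(dim)),                # merged dephasing
    np.sqrt(kappa1) * qt.tensor(qt.qeye(3), a),                               # cavity loss
]

def schmidt_components(psi_ab, d):
    """Schmidt-decompose a two-mode ket into (weight, |u_k>, |v_k>) triples."""
    mat = psi_ab.full().reshape(d, d)
    u, svals, vh = np.linalg.svd(mat, full_matrices=False)
    return [(s**2, qt.Qobj(u[:, k].reshape(-1, 1)), qt.Qobj(vh[k, :].reshape(-1, 1)))
            for k, s in enumerate(svals) if s > 1e-8]

def snap_on_components(comps, tlist, seed=1234):
    """One trajectory per component, with a common seed so jumps coincide."""
    out = []
    for p_k, u_k, v_k in comps:
        psi0 = qt.tensor(g, u_k.unit())
        res = qt.mcsolve(H, psi0, tlist, c_ops=c_ops, ntraj=1,
                         options=qt.Options(seeds=[seed]))   # QuTiP 4 interface
        out.append((p_k, res, v_k))                          # res.states holds the trajectory
    return out

# toy entangled input: a beam-split pair of small even cats (illustrative only)
alpha = 1.5
cat = (qt.coherent(dim, alpha) + qt.coherent(dim, -alpha)).unit()
bs = ((np.pi / 4) * (qt.tensor(a, a.dag()) - qt.tensor(a.dag(), a))).expm()
psi_ab = bs * qt.tensor(cat, cat)

comps = schmidt_components(psi_ab, dim)
results = snap_on_components(comps, np.linspace(0.0, np.pi / (2 * Omega), 51))
```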
冗長な量子計算において、ボソニック qubit を用いることは、ノイズのある離散変数補助器を必要とする場合が多い。本研究では、このような混合系に対する包括的で実用的な故障耐性フレームワークを確立する。ボソニック量子エラー補正 (QEC) と高度な量子制御技術を組み合わせ、故障耐性プロトコルの基盤となる構成要素を導入する。具体的には、4 脚猫 qubit を用いて、任意の補助器エラーに対する汎用的なエラー補正装置を構築する。この装置は、ボソニックモードと補助器間の分散結合と、ボソニックモード間のビームスプリッタ結合のみを必要とし、これらの結合はいずれも実験的に強い結合強度と高い精度が実証されている。さらに、
2310.20208
ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection
Recent camouflaged object detection (COD) attempts to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios. Apart from the high intrinsic similarity between camouflaged objects and their background, objects are usually diverse in scale, fuzzy in appearance, and even severely occluded. To this end, we propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images and videos, \ie zooming in and out. Specifically, our approach employs the zooming strategy to learn discriminative mixed-scale semantics by the multi-head scale integration and rich granularity perception units, which are designed to fully explore imperceptible clues between candidate objects and background surroundings. The former's intrinsic multi-head aggregation provides more diverse visual patterns. The latter's routing mechanism can effectively propagate inter-frame differences in spatiotemporal scenarios and be adaptively deactivated and output all-zero results for static representations. They provide a solid foundation for realizing a unified architecture for static and dynamic COD. Moreover, considering the uncertainty and ambiguity derived from indistinguishable textures, we construct a simple yet effective regularization, uncertainty awareness loss, to encourage predictions with higher confidence in candidate regions. Our highly task-friendly framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks. Our code can be found at {https://github.com/lartpang/ZoomNeXt}.
Youwei Pang, Xiaoqi Zhao, Tian-Zhu Xiang, Lihe Zhang, Huchuan Lu
2023-10-31T06:11:23
http://arxiv.org/abs/2310.20208v4
# ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection ###### Abstract Recent camouflaged object detection (COD) attempts to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios. Apart from the high intrinsic similarity between camouflaged objects and their background, objects are usually diverse in scale, fuzzy in appearance, and even severely occluded. To this end, we propose an effective unified collaborative pyramid network which mimics human behavior when observing vague images and videos, _i.e._, zooming in and out. Specifically, our approach employs the zooming strategy to learn discriminative mixed-scale semantics by the multi-head scale integration and rich granularity perception units, which are designed to fully explore imperceptible clues between candidate objects and background surroundings. The former's intrinsic multi-head aggregation provides more diverse visual patterns. The latter's routing mechanism can effectively propagate inter-frame difference in spatiotemporal scenarios and adaptively ignore static representations. They provides a solid foundation for realizing a unified architecture for static and dynamic COD. Moreover, considering the uncertainty and ambiguity derived from indistinguishable textures, we construct a simple yet effective regularization, uncertainty awareness loss, to encourage predictions with higher confidence in candidate regions. Our highly task-friendly framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks. The code will be available at Image Camouflaged Object Detection, Video Camouflaged Object Detection, Image and Video Unified Architecture. ## 1 Introduction Camouflaged objects are often "seamlessly" integrated into the environment by changing their appearance, coloration or pattern to avoid detection, such as chameleons, cuttlefishes and flatfishes. This natural defense mechanism has evolved in response to their harsh living environments. Broadly speaking, camouflaged objects also refer to objects that are extremely small in size, highly similar to the background, or heavily obscured. They subtly hide themselves in the surroundings, making them difficult to be found, _e.g._, soldiers wearing camouflaged uniforms and lions hiding in the grass. Therefore, camouflaged object detection (COD) presents a significantly more intricate challenge compared to traditional salient object detection (SOD) or other object segmentation. Recently, it has piqued ever-growing research interest from the computer vision community and facilitates many valuable real-life applications, such as search and rescue [1], species discovery [2], medical analysis (_e.g._, polyp segmentation [3, 4, 5], lung infection segmentation [6], and cell segmentation [7]), agricultural management [8, 9], and industrial defect detection [10]. Recently, numerous deep learning-based methods have been proposed and achieved significant progress. Nevertheless, they still struggle to accurately and reliably detect camouflaged objects, due to the visual insignificance of camouflaged objects, and high diversity in scale, appearance and occlusion. By observing our experiments, it is found that the current COD detectors are susceptible to distractors from background surroundings. 
Thus it is difficult to excavate discriminative and subtle semantic cues for camouflaged objects, resulting in the inability to clearly segment the camouflaged objects from the chaotic background and the predictions of some uncertain (low-confidence) regions. In addition, recent works [11, 12, 13] have attempted to Fig. 1: The main purpose of our proposed framework. The static image or video clip containing camouflaged objects can be fed into our network to yield accurate masks using the same architecture. The proposed framework, founded on the flexible routing mechanism, unifies and simplifies the processing pipeline of image and video COD tasks. Specifically, feature propagation inside multiple-scale pyramids with different information granularities ensures the accurate capture of imperceptible objects. Furthermore, the difference-aware adaptive routing mechanism empowers the proposed method with temporal conditional interaction capability only for video data, which provides a solid foundation for perceiving camouflaged objects within both image and video data. introduce temporal motion cues of objects to further expand the information dimension of the COD task. However, as shown in Fig. 1, the existing best solutions for image [14, 15, 1] and video [13] COD tasks have different information emphases (_i.e._, the appearance information of static images and the motion information of videos), resulting in incompatibility in architecture design and feature pipeline between the two types of tasks. Taking these into mind, in this paper, we summarize the COD issue into three aspects: 1) _How to accurately locate camouflaged objects under conditions of inconspicuous appearance and various scales?_ 2) _How to design a unified framework that is compatible with both image and video feature processing pipelines?_ 3) _How to suppress the obvious interference from the background and infer camouflaged objects more reliably?_ Intuitively, to accurately find the vague or camouflaged objects in the scene, humans may try to refer to and compare the changes in the shape or appearance at different scales by zooming in and out the image or video. This specific behavior pattern of human beings motivates us to identify camouflaged objects by mimicking the zooming in and out strategy. With this inspiration, in this paper, we propose a unified collaborative pyramid network, named _ZoomNeXt_, which significantly improves the existing camouflaged object detection performance for the image and video COD tasks. **Firstly,** for accurate object location, we employ scale space theory [16, 17, 18] to imitate zooming in and out strategy. Specifically, we design two key modules, _i.e._, the multi-head scale integration unit (MHSIU) and the rich granularity perception unit (RGPU). For the camouflaged object of interest, our model extracts discriminative features at different "zoom" scales using the triplet architecture, then adopts MHSIU's to screen and aggregate scale-specific features, and utilizes RGPUs to further reorganize and enhance mixed-scale features. The difference-aware adaptive routing mechanism of the module RGPU can also make full use of the inter-frame motion information to capture camouflaged objects in the video. This flexible data-aware architecture design seamlessly unifies the processing pipeline for static images and video clips into a single convolutional model. 
Thus, our model is able to mine the accurate and subtle semantic clues between objects and backgrounds under mixed scales and diverse scenes, and produce accurate predictions. Besides, the shared weight strategy used in the proposed method achieves a good balance of efficiency and effectiveness. **Secondly,** the last aspect is related to reliable prediction in complex scenarios. Although the object is accurately located, the indistinguishable texture and background will easily bring negative effects to the model learning, _e.g._ predicting uncertain/ambiguity regions, which greatly reduces the detection performance and cannot be ignored. This can be seen in Fig. 9 (Row 3 and 4) and Fig. 10. To this end, we design a uncertainty awareness loss (UAL) to guide the model training, which is only based on the prior knowledge that a good COD prediction should have a clear polarization trend. Its GT-independent characteristic makes it suitable for enhancing the GT-based BCE loss. This targeted enhancement strategy can force the network to optimize the prediction of the uncertain regions during the training process, enabling our ZoomNeXt to distinguish the uncertain regions and segment the camouflaged objects reliably. Our contributions can be summarized as follows: 1. For the COD task, we propose _ZoomNeXt_, which can credibly capture the objects in complex scenes by characterizing and unifying the scale-specific appearance features at different "zoom" scales and the purposeful optimization strategy. 2. To obtain the discriminative feature representation of camouflaged objects, we design MHSIU's and RGPUs to distill, aggregate and strengthen the scale-specific and subtle semantic representation for accurate COD. 3. Through a carefully-designed difference-aware adaptive routing mechanism, our unified conditional architecture seamlessly combines the image and video feature pipelines, enhancing scalability and flexibility. 4. We propose a simple yet effective optimization enhancement strategy, UAL, which can significantly suppress the uncertainty and interference from the background without increasing extra parameters. 5. Without bells and whistles, our model greatly surpasses the recent 30 state-of-the-art methods on four image and two video COD data benchmarks, respectively. A preliminary version of this work was published in [15]. In this paper, we extend our conference version in the following aspects. First, we **explore the video COD task**. We extend the targeted task from the single image COD task to image and video COD tasks. Second, we **improve image-specific _ZoomNet_ to image-video unified _ZoomNeXt_. We extend the original static image COD framework to enable further compatibility with the new auxiliary video-related temporal cues. Third, we **improve the performance of our method by introducing more structural extensions, _i.e._, multi-head scale integration unit and rich granularity perception unit. They make our approach superior not only to the conference version, but even to several recent cutting-edge competitors. Fourth, we also **introduce and discuss more recently published algorithms for image and video tasks** so as to track recent developments in the field and make fair and reasonable comparisons. 
Last but not least, we **provide more analytical experiments and visualization** to further study the effectiveness and show the characteristics of the provided components in our method including the MHSIU, RGPU, and GT-free loss function UAL, which helps to better understand their behavior. ## 2 Related Work ### _Camouflaged Object_ The study of camouflage has a long history in biology. This behavior of creatures in nature can be regarded as the result of natural selection and adaptation. In fact, in human life and other parts of society, it also has a profound impact, _e.g._, arts, popular culture, and design. More details can be found in [19]. In the field of computer vision, research on camouflaged objects is often associated with salient object detection (SOD) [20, 21], which mainly deals with those salient and easily observed objects in the scene. In general, saliency models are designed for the general observation paradigm (_i.e._, finding visually prominent objects). They are not suitable for the specific observation (_i.e._, finding concealed objects). Therefore, it is necessary to establish models based on the essential requirements and specific data of the task to learn the special knowledge. ### _Camouflaged Object Detection (COD)_ Different from the traditional SOD [22, 23, 24, 25] task, the COD task pays more attention to the undetectable objects (mainly because of too small size, occlusion, concealment or self-disguise). Due to the differences in the attributes of the objects of interest, the goals of the two tasks are different. The difficulty and complexity of the COD task far exceed the SOD task due to the high similarity between the object and the environment. Although it has practical significance in both the image and video fields, the COD task is still a relatively under-explored computer vision problem. Some valuable attempts have been made in recent years. The pioneering works [26, 27, 28] collect and release several important image COD datasets CAMO [26], COD10K [27], and NC4K [28]. And, another vanguard [13] extends the task to the video COD task and introduces a large-scale comprehensive dataset MoCA-Mask [13]. These datasets have become representative data benchmarks for the COD task nowadays and built a solid foundation for a large number of subsequent studies. Recent works [26, 28, 29, 30, 31, 32, 33] construct the multi-task learning framework in the prediction process of camouflaged objects and introduce some auxiliary tasks like classification, edge detection, and object gradient estimation. Among them, [32] also attempts to address the intrinsic similarity of foreground and background for COD in the frequency domain. Some uncertainty-aware methods [34, 35, 36] are proposed to model and cope with the uncertainty in data annotation or COD data itself. In the other methods [37, 38], contextual feature learning also plays an important role. And [39] introduces an explicit coarse-to-fine detection and iterative refinement strategy to refine the segmentation effect. Compared to convolutional neural networks, the vision transformer can encode global contextual information more effectively and also change the way existing methods [13, 14, 40, 41, 42, 43] perceive contextual information. Specifically, [14, 43] are based on the strategy of dividing and merging foreground and background to accurately distinguish highly similar foreground and background. 
[13] utilizes the multi-scale correlation aggregation and the spatiotemporal transformer to construct short-term and long-term information interactions, and optimize temporal segmentation. [40] aims to enhance locality modeling of the transformer-based model and the feature aggregation in decoders, while [41] preserves detailed clues through the high-resolution iterative feedback design. In addition, there are also a number of bio-inspired methods, such as [1, 27, 44]. They capture camouflaged objects by imitating the behavior process of hunters or changing the viewpoint of the scene. Although our method can also be attributed to the last category, ours is different from the above methods. Our method simulates the behavior of humans to understand complex images by zooming in and out strategy. The proposed method explores the scale-specific and imperceptible semantic features under the mixed scales for accurate predictions, with the supervision of BCE and our proposed UAL. At the same time, through the adaptive perception ability of the temporal difference, the proposed method also harmonizes both image and video data in a single framework. Accordingly, our method achieves a more comprehensive understanding of the diverse scene, and accurately and robustly segments the camouflaged objects from the complex background. ### _Conditional Computation_ Conditional computation [45] refers to a series of carefully constructed algorithms in which each input sample actually uses only a subset of feature processing nodes. It can further reduce computation, latency, or power on average, and increase model capacity without a proportional increase in computation. As argued in [45], _sparse activations_ and _multiplicative connections_ may be useful ingredients of conditional computation. Over the past few years, it has become an important solution to the time-consuming and computationally expensive training and inference of deep learning models. Specifically, as a typical paradigm, the mixture of experts (MoE) technique [46] based on sparse selection has shown great potential on a variety of different types of tasks, including language modeling and machine translation [47], multi-task learning [48], image classification [49, 50], and vision-language model [51]. Existing methods mainly rely on MoE and gating strategies to realize dynamic routing of feature flow nodes, and focus on the distinction between different samples of homogeneous data. In the proposed scheme, we propose a new form of conditional computation for differentiated processing of image and video data. Specifically, we construct adaptive video-specific bypasses using the inter-frame difference in video, which automatically "ignores" static images for good task compatibility. ### _Scale Space Integration_ The scale-space theory aims to promote an optimal understanding of image structure, which is an extremely effective and theoretically sound framework for addressing naturally occurring scale variations. Its ideas have been widely used in computer vision, including the image pyramid [52] and the feature pyramid [53]. Due to the structural and semantic differences at different scales, the corresponding features play different roles. However, the commonly-used inverted pyramid-like feature extraction structures [54, 55, 56] often cause the feature representation to lose too much texture and appearance details, which are unfavorable for dense prediction tasks [57, 58] that emphasize the integrity of regions and edges. 
Thus, some recent CNN-based COD methods [27, 29, 37, 38] and SOD methods [59, 60, 61, 62, 63, 64, 65] explore the combination strategy of inter-layer features to enhance the feature representation. These bring some positive gains for accurate localization and segmentation of objects. However, for the COD task, the existing approaches overlook the performance bottleneck caused by the ambiguity of the structural information of the data itself which makes it difficult to be fully perceived at a single scale. Different from them, we mimic the zoom strategy to synchronously consider differentiated relationships between objects and background at multiple scales, thereby fully perceiving the camouflaged objects and confusing scenes. Besides, we also further explore the fine-grained feature scale space between channels. ## 3 Method In this section, we first elaborate on the overall architecture of the proposed method, and then present the details of each module and the uncertainty awareness loss function. ### _Overall Architecture_ **Preliminaries** Let \(\mathbf{I}\in\mathbb{R}^{3\times H\times W}\) and \(\{\mathbf{I}_{t}\in\mathbb{R}^{3\times H\times W}\}_{t=1}^{T}\) be the input static image and the input video clip with \(T\) frames of the network, where \(3\) is the number of color channels and \(H,W\) are the height and width. Our network is to generate a gray-scale map \(\mathbf{P}\) or clip \(\{\mathbf{P}_{t}\}_{t=1}^{T}\) with values ranging from 0 to 1, which reflects the probability that each location may be part of the camouflaged object. **Zooming Strategy** The overall architecture of the proposed method is illustrated in Fig. 2. Inspired by the zooming strategy from human beings when observing confusing scenes, we argue that different zoom scales often contain their specific information. Aggregating the differentiated information on different scales will benefit exploring the inconspicuous yet valuable clues from confusing scenarios, thus facilitating COD. To implement it, intuitively, we resort to the image pyramid. Specifically, we customize an image pyramid based on the single scale input to identify the camouflaged objects. The scales are divided into a main scale (_i.e._ the input scale) and two auxiliary scales. The latter is obtained by re-scaling the input to imitate the operation of zooming in and out. **Feature Processing** We utilize the shared triplet feature encoder to extract features on different scales and feed them to the scale merging subnetwork. To integrate these features that contain rich scale-specific information, we design a series of multi-head scale integration units (MHSIUs) based on the attention-aware filtering mechanism. Thus, these auxiliary scales are integrated into the main scale, _i.e._, information aggregation of "zoom in and out" operation. This will largely enhance the model to distill critical and informative semantic cues for capturing difficult-to-detect camouflaged objects. After that, we construct rich granularity perception units (RGPUs) to gradually integrate multi-level features in a top-down manner to enhance the mixed-scale feature representation. It further increases the receptive field range and diversifies feature representation within the module. The captured fine-grained and mixed-scale clues promote the model to accurately segment the camouflaged objects in the chaotic scenes. **Loss Improvement** To overcome the uncertainty in the prediction caused by the inherent complexity of the data, Fig. 
Fig. 2: Overall framework of the proposed ZoomNeXt. The shared triplet feature encoder is used to extract multi-level features corresponding to different input "zoom" scales. At different levels of the scale merging subnetwork, MHSIUs are adopted to screen and aggregate the critical cues from different scales. Then the fused features are gradually integrated through the top-down up-sampling path in the hierarchical difference propagation decoder. RGPUs further enhance the feature discrimination by constructing a multi-path structure inside the features. Finally, the probability map of the camouflaged object corresponding to the input image or frame can be obtained. In the training stage, the binary cross entropy and the proposed uncertainty awareness loss are used as the loss function.

### _Triplet Feature Encoder_

We start by extracting deep features through a shared triplet feature encoder for the group-wise inputs, which consists of a feature extraction network and a channel compression network. The feature extraction network is built upon the commonly-used ResNet [66], EfficientNet [67], or PVTv2 [68] with the classification head removed, and the channel compression network is cascaded after it to further reduce computation and obtain a more compact feature. For the trade-off between efficiency and effectiveness, the main scale and the two auxiliary scales are empirically set to \(1.0\times\), \(1.5\times\) and \(0.5\times\). These structures produce three sets of 64-channel feature maps corresponding to the three input scales, _i.e._, \(\{f_{i}^{k}\}_{i=1}^{5}\), \(k\in\{0.5,1.0,1.5\}\). Next, these features are fed successively to the multi-head scale merging subnetwork and the hierarchical difference propagation decoder for subsequent processing.

### _Scale Merging Subnetwork_

We design an attention-based multi-head scale integration unit (MHSIU) to screen (weight) and combine scale-specific information, as shown in Fig. 3. Several such units make up the scale merging subnetwork. Through filtering and aggregation, the expression of different scales is self-adaptively highlighted. The details of this component are described next.

Fig. 3: Illustration of the multi-head scale integration unit.

**Scale Alignment** Before scale integration, the features \(f_{i}^{1.5}\) and \(f_{i}^{0.5}\) are first resized to a resolution consistent with the main-scale feature \(f_{i}^{1.0}\). Specifically, for \(f_{i}^{1.5}\), we use a hybrid structure of "max-pooling + average-pooling" to down-sample it, which helps to preserve the effective and diverse responses to camouflaged objects in the high-resolution features. For \(f_{i}^{0.5}\), we directly up-sample it by bi-linear interpolation. Then, these features are fed into the "attention generator".
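A minimal sketch of this scale alignment step is given below. We assume the "max-pooling + average-pooling" hybrid is realized as the sum of adaptive max and average pooling; whether the two pooled maps are summed, averaged, or concatenated is not stated in the text, and the function name `align_scales` is illustrative.

```python
import torch.nn.functional as F

def align_scales(f_05, f_10, f_15):
    """Resize the auxiliary-scale features to the main-scale resolution.

    f_05, f_10, f_15: (B, C, h, w) features from the 0.5x, 1.0x and 1.5x
    branches, each at its own spatial resolution.
    """
    h, w = f_10.shape[-2:]
    # 0.5x branch: direct bi-linear up-sampling to the main resolution.
    up = F.interpolate(f_05, size=(h, w), mode="bilinear", align_corners=False)
    # 1.5x branch: hybrid "max + average" adaptive pooling, keeping both peak
    # responses and smooth context from the high-resolution features.
    down = F.adaptive_max_pool2d(f_15, (h, w)) + F.adaptive_avg_pool2d(f_15, (h, w))
    return up, f_10, down
```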
**Multi-Head Spatial Interaction** Unlike the form of spatial attention in our conference version [15], which relies heavily on a single pattern, here we perform parallel independent transformations on \(M\) groups of feature maps, inspired by the multi-head paradigm in the Transformer [69]. This design extends the model's ability to mine multiple fine-grained spatial attention patterns in parallel and diversifies the representation of the feature space. Specifically, several three-channel feature maps are calculated through a series of convolutional layers. After the cascaded softmax activation layer in each attention group, the attention map \(A_{m}^{k}\) corresponding to each scale is obtained from these groups and used as the respective weight for the final integration, where the scale group index \(k\in\{1,2,3\}\) and the attention group index \(m\in\{1,2,\ldots,M\}\). The process is formulated as:

\[\begin{split} F_{i}&=\left[\mathcal{U}(f_{i}^{0.5}),f_{i}^{1.0},\mathcal{D}(f_{i}^{1.5})\right],\\ \hat{F}_{i}&=\left\{\texttt{trans}(F_{i},\phi^{m})\right\}_{m=1}^{M},\\ A_{i}&=\{\texttt{softmax}(\hat{F}_{i,m})\}_{m=1}^{M},\\ \tilde{F}_{i}&=\{\texttt{trans}(F_{i},\gamma^{m})\}_{m=1}^{M},\\ f_{i}&=\{A_{i,m}^{1}\cdot\tilde{F}_{i,m}^{1}+A_{i,m}^{2}\cdot\tilde{F}_{i,m}^{2}+A_{i,m}^{3}\cdot\tilde{F}_{i,m}^{3}\}_{m=1}^{M},\end{split} \tag{1}\]

where \([\star]\) represents the concatenation operation, \(\mathcal{U}\) and \(\mathcal{D}\) refer to the bi-linear interpolation and hybrid adaptive pooling operations mentioned above, respectively, \(\hat{F}_{i}\) collects the attention logits, and \(\texttt{trans}(\star,\phi)\) and \(\texttt{trans}(\star,\gamma)\) indicate the simple linear transformation layers in the attention generator with parameters \(\phi\) and \(\gamma\). Note that some linear, normalization and activation layers before and after the sampling operations are not shown in Equ. 1 and Fig. 3 for simplicity. The features enhanced in different groups are then concatenated along the channel dimension and fed into the decoder for further processing. These designs aim to adaptively and selectively aggregate the scale-specific information to explore subtle but critical semantic cues at different scales, boosting the feature representation.
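To make Equ. 1 concrete, the sketch below implements one MHSIU in PyTorch. The depth of the transformation layers \(\texttt{trans}(\star,\phi)\) and \(\texttt{trans}(\star,\gamma)\), the normalization layers, and the per-head channel split are assumptions; only the concatenation, the per-head three-channel softmax weights, and the weighted sum over the three scales follow Equ. 1.

```python
import torch
import torch.nn as nn

class MHSIU(nn.Module):
    """Minimal multi-head scale integration unit following Equ. 1.

    The three inputs are the scale-aligned features, each of shape (B, C, H, W);
    `channels` must be divisible by `num_heads` in this sketch.
    """

    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        in_ch, head_ch = 3 * channels, channels // num_heads
        # trans(., phi): per-head branches producing a 3-channel logit map,
        # one logit per scale, turned into spatial weights by softmax.
        self.phi = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(in_ch, channels, 3, padding=1),
                           nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                           nn.Conv2d(channels, 3, 1)) for _ in range(num_heads)])
        # trans(., gamma): per-head value transforms, one feature set per scale.
        self.gamma = nn.ModuleList(
            [nn.Conv2d(in_ch, 3 * head_ch, 3, padding=1) for _ in range(num_heads)])

    def forward(self, f_05_up, f_10, f_15_down):
        F_i = torch.cat([f_05_up, f_10, f_15_down], dim=1)             # (B, 3C, H, W)
        heads = []
        for phi_m, gamma_m in zip(self.phi, self.gamma):
            attn = phi_m(F_i).softmax(dim=1)                           # (B, 3, H, W)
            values = gamma_m(F_i).chunk(3, dim=1)                      # 3 x (B, C/M, H, W)
            heads.append(sum(attn[:, k:k + 1] * values[k] for k in range(3)))
        return torch.cat(heads, dim=1)                                 # (B, C, H, W)
```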
### _Hierarchical Difference Propagation Decoder_

After the MHSIUs, the auxiliary-scale information is integrated into the main-scale branch. Similar to the multi-scale case, different channels also contain differentiated semantics. Thus, it is necessary to excavate the valuable clues contained in different channels. To this end, we design the rich granularity perception unit (RGPU) to conduct information interaction and feature refinement between channels, which strengthens features from coarse-grained group-wise iteration to fine-grained channel-wise modulation in the decoder, as depicted in Fig. 4.

Fig. 4: Rich granularity perception unit. Group-wise interaction and channel-wise modulation are used to explore the discriminative and valuable semantics from different channels. Note that each group of features is executed sequentially. The latter one integrates part of the features of the previous one before the feature transformation.

**Input** The input \(\hat{f}_{i}\) of the RGPU\({}_{i}\) combines the multi-scale fused feature \(f_{i}\) from the MHSIU\({}_{i}\) and the feature \(\tilde{f}_{i+1}\) from the RGPU\({}_{i+1}\) as \(\hat{f}_{i}=f_{i}+\mathcal{U}(\tilde{f}_{i+1})\).

**Group-wise Iteration** We adopt a \(1\times 1\) convolution to extend the channel number of the feature map \(\hat{f}_{i}\). The features are then divided into \(G\) groups \(\{g_{j}\}_{j=1}^{G}\) along the channel dimension, and feature interaction between groups is carried out in an iterative manner. Specifically, the first group \(g_{1}\) is split into three feature sets \(\{{g^{\prime}}_{1}^{k}\}_{k=1}^{3}\) after a convolution block. Among them, \({g^{\prime}}_{1}^{1}\) is used for information exchange with the next group, and the other two are used for channel-wise modulation. In the \(j^{th}\) (\(1<j<G\)) group, the feature \(g_{j}\) is concatenated with the feature \({g^{\prime}}_{j-1}^{1}\) from the previous group along the channel dimension, followed by a convolution block and a split operation, which similarly divides this feature group into three feature sets. Note that the output of the group \(G\), which has a similar input form to the previous groups, only contains \({g^{\prime}}_{G}^{2}\) and \({g^{\prime}}_{G}^{3}\). Such an iterative mixing strategy strives to learn the critical clues from different channels and obtain a powerful feature representation. The iteration structure of the feature groups in the RGPU is actually equivalent to an integrated multi-path kernel pyramid structure with partial parameter sharing. Such a design further enriches the ability to explore diverse visual cues, thereby helping to better perceive objects and refine predictions. The iteration processing is also listed in Alg. 1.

**Channel-wise Modulation** The features \(\{{g^{\prime}}_{j}^{2}\}_{j=1}^{G}\) are concatenated and converted into the feature modulation vector \(\alpha\) by a small convolutional network, which is employed to weight another concatenated feature as \({f^{\prime}}_{i}=\alpha\cdot[\{{g^{\prime}}_{j}^{3}\}_{j=1}^{G}]\).

**Difference-based Conditional Computation** Because the inter-frame difference directly reflects the temporal motion cues of the camouflaged object in the video, we design a difference-aware adaptive routing mechanism to realize video-specific inter-frame information propagation and seamlessly unify the image and video COD tasks. Specifically, the features from the group-wise iteration and the channel-wise modulation first undergo the temporal shift operation \(\texttt{shift}\) to obtain the difference representation \(X=\texttt{shift}({f^{\prime}}_{i})-{f^{\prime}}_{i}\) between adjacent frames. To enhance the motion cues of the object of interest between frames, intra-frame self-attention is performed as \(Z=XW_{V}\,\texttt{softmax}(\frac{(XW_{K})^{T}XW_{Q}}{\sqrt{HW}})\). These cues are further diffused within the video clip by the following convolutional layers performed along the time dimension and added to the original feature \({f^{\prime}}_{i}\). _For a static image, this pipeline outputs an all-zero tensor, thus maintaining the original features._ A minimal sketch of this routing is given at the end of this subsection.

**Output** The output feature of the RGPU is obtained by stacked activation, normalization and convolutional layers, defined as \(\tilde{f}_{i}=\texttt{fuse}(\hat{f}_{i}+{f^{\prime}}_{i})\). Based on cascaded RGPUs and several stacked convolutional layers, a single-channel logits map is obtained. The final confidence map \(\mathbf{P}\) or map group \(\{\mathbf{P}_{t}\}_{t=1}^{T}\) that highlights the camouflaged objects is then generated by a sigmoid function.
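The sketch below illustrates the difference-based conditional computation. The tensor layout, the projection sizes, the softmax axis, the boundary handling of the temporal shift (replicating the first frame so that a single static image yields a zero difference, matching the behavior described above), and the single bias-free temporal convolution are assumptions; only the overall flow of shift, difference, intra-frame attention, temporal diffusion, and residual addition follows the text.

```python
import torch
import torch.nn as nn

class DifferenceAwareRouting(nn.Module):
    """Sketch of the difference-aware adaptive routing for a clip (B, T, C, H, W)."""

    def __init__(self, channels=64):
        super().__init__()
        self.w_q = nn.Linear(channels, channels, bias=False)
        self.w_k = nn.Linear(channels, channels, bias=False)
        self.w_v = nn.Linear(channels, channels, bias=False)
        # Bias-free temporal convolution keeps the static-image output at zero.
        self.temporal_conv = nn.Conv1d(channels, channels, 3, padding=1, bias=False)

    def forward(self, feat):
        b, t, c, h, w = feat.shape
        # Temporal shift with the first frame replicated: a single static image
        # therefore produces an all-zero difference and is left unchanged.
        shifted = torch.cat([feat[:, :1], feat[:, :-1]], dim=1)
        x = (shifted - feat).flatten(3).transpose(2, 3)                # (B, T, HW, C)
        q, k, v = self.w_q(x), self.w_k(x), self.w_v(x)
        attn = torch.softmax(k.transpose(-2, -1) @ q / (h * w) ** 0.5, dim=-1)
        z = v @ attn                                                   # intra-frame attention
        z = z.transpose(2, 3).reshape(b, t, c, h, w)
        # Diffuse the motion cues along the time axis at every spatial position.
        z = z.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        z = self.temporal_conv(z).reshape(b, h, w, c, t).permute(0, 4, 3, 1, 2)
        return feat + z                                                # residual addition
```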
### _Loss Functions_

**Binary Cross Entropy Loss (BCE)** The BCE loss function is widely used in various binary image segmentation tasks and its mathematical form is:

\[l_{BCE}^{i,j}=-\mathbf{g}_{i,j}\log\mathbf{p}_{i,j}-(1-\mathbf{g}_{i,j})\log(1-\mathbf{p}_{i,j}), \tag{2}\]

where \(\mathbf{g}_{i,j}\in\{0,1\}\) and \(\mathbf{p}_{i,j}\in[0,1]\) denote the ground truth and the predicted value at position \((i,j)\), respectively. As shown in Fig. 9, due to the complexity of the COD data, a model trained only under the BCE produces serious ambiguity and uncertainty in the prediction and fails to accurately capture objects, both of which reduce the reliability of COD.

**Uncertainty Awareness Loss (UAL)** To force the model to enhance its "confidence" in decision-making and to increase the penalty for fuzzy predictions, we design a strong constraint as an auxiliary of the BCE, _i.e._, the uncertainty awareness loss (UAL). In the final probability map of the camouflaged object, the pixel value range is \([0,1]\), where \(0\) means the pixel belongs to the background and \(1\) means it belongs to the camouflaged object. Therefore, the closer the predicted value is to \(0.5\), the more uncertain the determination of the property of the pixel is. A direct way to optimize this is to use the ambiguity as a supplementary loss for these difficult samples. To this end, we first need to define the ambiguity measure of a pixel \(x\), which is maximized at \(x=0.5\) and minimized at \(x=0\) or \(x=1\). As a loss, the function should be smooth and continuous with only a finite number of non-differentiable points. For brevity, we empirically consider two forms, \(\Phi^{\alpha}_{pow}(x)=1-|2x-1|^{\alpha}\) based on the power function and \(\Phi^{\alpha}_{exp}(x)=e^{-(\alpha(x-0.5))^{2}}\) based on the exponential function. Besides, inspired by the form of the weighted BCE loss, we also try to use \(\omega=1+\Phi^{2}_{pow}(x)\) as the weight of the BCE loss to increase the loss on hard pixels. The different forms and corresponding results are listed in Fig. 5 and Tab. 6.

Fig. 5: Curves of different forms of the proposed UAL.

After extensive experiments (Sec. 4.3.5), the proposed UAL is formulated as

\[l^{i,j}_{UAL}=1-\Delta_{i,j}=1-|2\mathbf{p}_{i,j}-1|^{2}. \tag{3}\]

**Total Loss Function** Finally, the total loss function can be written as \(L=L_{BCE}+\lambda L_{UAL}\), where \(\lambda\) is the balance coefficient. We design three adjustment strategies for \(\lambda\), _i.e._, a fixed constant value, an increasing linear strategy, and an increasing cosine strategy, which are compared in Sec. 4.3.5. From the results shown in Tab. 5, we find that the increasing strategies, especially "cosine", achieve better performance, so the cosine strategy is used by default.
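A minimal sketch of the total loss with the increasing cosine schedule is given below. The peak value of \(\lambda\) and the exact schedule shape are assumptions, since the text only states that \(\lambda\) increases following a cosine curve; the function name `zoomnext_loss` is illustrative.

```python
import math
import torch
import torch.nn.functional as F

def zoomnext_loss(pred_logits, target, step, total_steps, lambda_max=1.0):
    """Total loss L = L_BCE + lambda * L_UAL following Equ. 2 and Equ. 3."""
    prob = torch.sigmoid(pred_logits)
    l_bce = F.binary_cross_entropy(prob, target)
    # UAL: 1 - |2p - 1|^2, largest where the prediction is most ambiguous (p ~ 0.5).
    l_ual = (1.0 - (2.0 * prob - 1.0).abs().pow(2)).mean()
    # Increasing cosine schedule for the balance coefficient lambda (assumed form).
    lam = lambda_max * 0.5 * (1.0 - math.cos(math.pi * step / total_steps))
    return l_bce + lam * l_ual
```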
## 4 Experiments

### _Experiment Setup_

#### 4.1.1 Datasets

We use four image COD datasets, CAMO [26], CHAMELEON [76], COD10K [27] and NC4K [28], and two video COD datasets, MoCA-Mask [13] and CAD [12].

**Image COD** CAMO consists of 1,250 camouflaged and 1,250 non-camouflaged images. CHAMELEON contains 76 hand-annotated images. COD10K includes 5,066 camouflaged, 3,000 background and 1,934 non-camouflaged images. NC4K is another large-scale COD testing dataset, including 4,121 images from the Internet. Following the data partition of [14, 15, 27, 33, 37], we use all images with camouflaged objects in the experiments, in which 3,040 images from COD10K and 1,000 images from CAMO are used for training, and the rest for testing.

**Video COD** The recent MoCA-Mask is reorganized from the moving camouflaged animals (MoCA) dataset [11]. MoCA-Mask includes 71 sequences with 19,313 frames for training and 16 sequences with 3,626 frames for testing, and it extends the original bounding box annotations of the MoCA dataset to finer segmentation annotations. The camouflaged animal dataset (CAD) includes 9 short video sequences in total, accompanied by 181 hand-labeled masks on every \(5^{th}\) frame. Apart from the 71 training sequences of MoCA-Mask, the remaining video sequences are used for testing, as in [13].

#### 4.1.2 Evaluation Criteria

For image COD, we use eight common metrics for evaluation based on [77, 78], including the S-measure [79] (S\({}_{m}\)), weighted F-measure [80] (F\({}_{\beta}^{\omega}\)), mean absolute error (MAE), F-measure [81] (F\({}_{\beta}\)), E-measure [82] (E\({}_{m}\)), precision-recall curve (PR curve), F\({}_{\beta}\)-threshold curve (F\({}_{\beta}\) curve), and E\({}_{m}\)-threshold curve (E\({}_{m}\) curve). For video COD, the seven metrics used in the recent pioneering work [13] are adopted to evaluate existing methods, including S\({}_{m}\), F\({}_{\beta}^{\omega}\), MAE, F\({}_{\beta}\), E\({}_{m}\), mDice, and mIoU.

#### 4.1.3 Implementation Details

The proposed camouflaged object detector is implemented with PyTorch [83], and the training settings are on par with recent best practices [13, 14, 15, 27, 33, 37]. The encoder is initialized with the parameters of ResNet [66], EfficientNet [67] and PVTv2 [68] pre-trained on ImageNet, and the remaining parts are randomly initialized. The Adam optimizer with \(\texttt{betas}=(0.9,0.999)\) is chosen to update the model parameters. The learning rate is initialized to 0.0001 and follows a step decay strategy. The entire model is trained for 150 epochs with a batch size of 8 in an end-to-end manner on an NVIDIA 2080Ti GPU. During training and inference, the scales of the main input and the ground truth are set to \(384\times 384\). Random flipping, rotating, and color jittering are employed to augment the training data, and no test-time augmentation is used in the inference phase. In addition, for a fair comparison, we follow the settings used in [13] for the video COD task, _i.e._, pre-training on COD10K-TR and fine-tuning on MoCA-Mask-TR.
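The optimizer and scheduler setup above can be summarized by the sketch below. The step size and decay factor of the step decay strategy are not reported, so the values used here are assumptions, and `configure_training` is an illustrative helper name.

```python
from torch import nn, optim
from torch.optim.lr_scheduler import StepLR

def configure_training(model: nn.Module):
    """Optimizer/scheduler setup matching the reported settings.

    Adam with betas=(0.9, 0.999) and an initial learning rate of 1e-4 follow the
    text; decaying by 0.1 every 50 epochs is an assumed instance of the
    unspecified "step decay strategy".
    """
    optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
    scheduler = StepLR(optimizer, step_size=50, gamma=0.1)  # assumed schedule
    return optimizer, scheduler

# Training loop skeleton (batch size 8, 384x384 inputs, loss from Sec. 3.5):
# for epoch in range(150):
#     for images, masks in train_loader:          # images: (8, 3, 384, 384)
#         logits = model(images)
#         loss = zoomnext_loss(logits, masks, step, total_steps)
#         optimizer.zero_grad(); loss.backward(); optimizer.step()
#     scheduler.step()
```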
Fig. 6: PR, \(F_{\beta}\) and \(E_{m}\) curves of the proposed model and recent SOTA algorithms over four COD datasets.

### _Comparisons with State-of-the-arts_

To evaluate the proposed method, we compare it with several recent state-of-the-art image-based and video-based methods, including [1, 13, 14, 27, 28, 29, 30, 31, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42, 43, 73, 74, 75]. The results of all these methods come from existing public data or are generated by models retrained with the corresponding code. The comparison of the proposed method is summarized in Tab. 1, Tab. 2, and Fig. 6, and the qualitative performance of all methods is shown in Fig. 7 and Fig. 8.

Fig. 7: Visual comparisons of some recent COD methods and ours on different types of samples.

Fig. 8: Visual comparisons of "Ours (T=5)" and the recent best practice STL-Net-LT [13] for video COD. For better visualization, the grey-scale predictions are thresholded by 0.5 here.

**Quantitative Evaluation** Tab. 1 and Tab. 2 show the detailed comparison results for the image and video COD tasks, respectively. It can be seen that the proposed model consistently and significantly surpasses recent methods on all datasets without relying on any post-processing tricks. Compared with the recent best methods with different backbones, ZoomNet [15], DGNet [33] and SARNet [43] for image COD and STL-Net-LT [13] for video COD, which have suppressed other existing methods, our method still shows an obvious improvement over the conference version [15] and clear performance advantages on these datasets. Besides, the PR, F\({}_{\beta}\) and E\({}_{m}\) curves marked in red in Fig. 6 also demonstrate the effectiveness of the proposed method. The flatness of the F\({}_{\beta}\) and E\({}_{m}\) curves reflects the consistency and uniformity of the prediction. Our curves are almost horizontal, which can be attributed to the effect of the proposed UAL: it drives the predictions to be more polarized and reduces the ambiguity.

**Qualitative Evaluation** Visual comparisons of the different methods on several typical samples are shown in Fig. 7 and Fig. 8. They present the complexity in different aspects, such as small objects (Rows 3-5), middle objects (Rows 1 and 2), big objects (Rows 6-8), occlusions (Rows 1 and 6), background interference (Rows 1-3 and 6-8), and indefinable boundaries (Rows 2, 3, and 5) in Fig. 7, and motion blur in Fig. 8. These results intuitively show the superior performance of the proposed method. In addition, it can be noticed that our predictions have clearer and more complete object regions, sharper contours, and better temporal consistency.

### _Ablation Studies_

Since COD10K is the most widely-used large-scale COD dataset and contains various objects and scenes, all subsequent ablation experiments are carried out on it.

#### 4.3.1 Effectiveness of Proposed Modules

In the proposed model, both the MHSIU and the RGPU are very important structures. We install them one by one on the baseline model to evaluate their performance; the results are shown in Tab. III. Note that only the inputs of the main scale are used in our baseline "0" and the models "1.x". As can be seen, our baseline already shows good performance, probably due to the more reasonable network architecture. From the table, it can be seen that the two proposed modules make a significant contribution to the performance when compared to the baseline. Besides, the visual results in Fig. 9 show that the two modules can benefit each other and reduce each other's errors, locating and distinguishing objects more accurately. These components effectively help the model to excavate and distill the critical and valuable semantics and improve the capability of distinguishing hard objects. Through the cooperation between the proposed modules and the proposed loss function, ZoomNeXt can completely capture camouflaged objects of different scales and generate predictions with higher consistency.

#### 4.3.2 Number of Groups in RGPU and Heads in MHSIU

**RGPU** The recursive iteration strategy in the proposed RGPU stimulates the potential semantic diversity of the feature representation. The results in Tab. III show the impact of different numbers of iteration groups on performance. _Although increasing the number of groups introduces more parameters and computation, the performance does not keep increasing._ It can be seen from the results that the best performance appears when the number of groups is equal to 6, which also achieves a good balance between performance and efficiency, so we use 6 groups as the default setting in all RGPUs.

**MHSIU** As shown in Fig. 11, the multi-head paradigm of the MHSIU further enriches the spatial patterns of attention to fine-grained information in different scale spaces. We list the efficiency and performance of different settings in Tab. III. The comparison shows that an increase in the number of heads improves computing efficiency and even performance. The best performance appears when the number of heads is set to 4, which is also the choice in the other experiments. The above experiments also indicate that an increase in model complexity does not necessarily lead to an improvement in performance, which also depends on more reasonable structural designs.
#### 4.3.3 Temporal Conditional Computation

Our model introduces temporal conditional computation, _i.e._, the difference-aware adaptive routing mechanism, which unifies the feature processing pipelines of the image and video tasks into the same framework. In Tab. II, we show the performance corresponding to different versions of the model on the video COD task. Since there is no difference along the time dimension when only a single frame is input, the corresponding model only uses the static processing nodes. As more frames are fed, the network nodes adaptively activated by the difference information for inter-frame interaction further improve the performance on video COD.

#### 4.3.4 Mixed-scale Input Scheme

Our model is designed to mimic the behavior of "zooming in and out", and the feature expression is enriched by combining scale-specific information from different scales. In Fig. 10, the intermediate features from the deep MHSIU modules show that our mixed-scale scheme plays a positive and important role in locating the camouflaged object. To analyze the role of different scales, we summarize the performance of different combination forms on the two large-scale datasets COD10K [27] and NC4K [28] in Tab. IV. The proposed scheme performs better than the single-scale one and the simply mixed one, which verifies the rationality of such a design for the COD task.

#### 4.3.5 Forms of Loss Function

**Options of Setting \(\lambda\)** We compare the three strategies and list the results in Tab. V, in which the increasing cosine strategy achieves the best performance. This may be due to the advantage of its smooth change process. This smooth intensity warm-up strategy for UAL motivates the model to take advantage of UAL in improving the learning process and mitigates the possible negative interference of UAL on BCE caused by the lower accuracy of the model during the early stage of training.

**Forms of UAL** Different forms of UAL are listed in Tab. VI and the corresponding curves are illustrated in Fig. 5. As can be seen, Form 1.5 has a more balanced performance. It is also worth noting that, when \(\Delta\) approaches 1, the forms that maintain a larger gradient obtain better performance in terms of F\({}_{\beta}^{\omega}\), MAE, and F\({}_{\beta}\).

**Gradient of Loss** Further, from the gradient of the loss function illustrated in Fig. 12, there is a good collaboration between the proposed UAL and BCE.

Fig. 12: Gradient landscapes and curves of the binary cross entropy (BCE) loss and our uncertainty awareness loss (UAL) scheduled by cosine with respect to the measure \(\Delta=|2\mathbf{p}-1|\) as in Equ. 3 and the training progress \(\frac{t}{T}\), where \(t\) and \(T\) denote the current and overall iteration steps.

In the early training phase, the UAL is suppressed and the gradient is dominated by the BCE, thus facilitating a more stable optimization. In the later stage, however, the gradient of UAL is inversely proportional to \(\Delta\), which represents the certainty of the prediction. Such a design enables UAL to speed up the updating of simple samples while not rashly adjusting difficult samples, because the latter need more guidance from the ground truth.

#### 4.3.6 Effectiveness of UAL

Fig. 9 and Fig. 10 intuitively show that the UAL greatly reduces the ambiguity caused by interference from the background. We also visualize the uncertainty measure, _i.e._, UAL, of all results on the four image COD datasets in Fig. 13.
In Fig. 13, the peaks of the model "w/o UAL" are flatter and lie higher, which means that there are more visually blurred and uncertain predictions. Besides, in the corresponding feature visualization in Fig. 10, there is clear background interference for "w/o UAL" due to the complex scenarios and blurred edges, which is extremely prone to yielding false positive predictions. However, when UAL is introduced, it can be seen in Fig. 13 that the peaks of "w/ UAL" are sharper than those of "w/o UAL", that is, most pixel values approach the two extremes and carry higher confidence. Accordingly, the feature maps in Fig. 10 become more discriminative and present a more compact and complete response in the regions of camouflaged objects.

TABLE IV: Comparisons of mixed and single scale input schemes on COD10K and NC4K. All models are based on the baseline model in Tab. III.
## 5 Conclusion

In this paper, we propose ZoomNeXt by imitating the behavior of human beings who zoom in and out on images and videos. This process considers the differentiated expressions of the scene at different scales, which helps to improve the understanding and judgment of camouflaged objects. We first filter and aggregate scale-specific features through the scale merging subnetwork to enhance the feature representation. Next, in the hierarchical difference propagation decoder, the strategies of grouping, mixing, and fusion further mine the mixed-scale semantics. Lastly, we introduce the uncertainty awareness loss to penalize the ambiguity of the prediction. Moreover, to the best of our knowledge, we are the first to unify the pipelines of image and video COD into a single framework. Extensive experiments verify the effectiveness of the proposed method on both the image and video COD tasks, with superior performance over existing state-of-the-art methods.
Recent camouflaged object detection (COD) aims to visually segment objects that blend into their surroundings, which is highly complex and difficult in real-world scenarios. In addition to the high intrinsic similarity between camouflaged objects and the background, the objects usually vary in scale, have blurred appearances, and may even be severely occluded. We therefore propose an effective unified collaborative pyramid network that mimics how humans behave when observing ambiguous images and videos. Specifically, following a zooming in and out strategy, our approach employs multi-head scale integration and rich granularity perception units to learn the hidden clues between candidate objects and the background. The former produces more diverse visual patterns through multi-head aggregation, and the latter effectively propagates inter-frame differences in spatio-temporal scenarios.